6.02 Spring, Lecture 6

Bad Things Happen to Good Signals
- Mean, power, energy, SNR
- Metrics for random processes
- Normal PDF, CDF
- Calculating p(error), BER vs. SNR

Noise, broadly construed, is any change to the signal from its expected value, x[n] * h[n], when it arrives at the receiver. We'll look at additive noise and assume the noise in our systems is independent in value and timing from the nominal signal, y_nf[n], and that the noise can be described by a random variable with a known probability distribution.

We'll model the received signal as

    y[n] = y_nf[n] + noise[n]

where y_nf[n] = x[n] * h[n] is the noise-free signal at the receiver, noise[n] is independent random noise, and y[n] is the noisy signal the receiver must process.

Definition of Mean, Power, Energy

Some interesting statistical metrics for an N-sample signal x[n]:

    Mean:   µ_x = (1/N) Σ_n x[n]
    Power:  P_x = (1/N) Σ_n (x[n])²
    Energy: E_x = Σ_n (x[n])²

In analyzing our systems, we often use metrics where the mean has been factored out:

    P_x = (1/N) Σ_n (x[n] − µ_x)²
    E_x = Σ_n (x[n] − µ_x)²

(Slides 3-6 derived from 6.02 slides by Mike Perrott.)

Signal-to-Noise Ratio (SNR)

The signal-to-noise ratio (SNR) is useful in judging the impact of noise on system performance:

    SNR = P_signal / P_noise

SNR is often measured in decibels (dB):

    SNR (dB) = 10 log10(P_signal / P_noise)

A change of 3 dB corresponds to a factor of 2 in power ratio.

[Table: SNR in dB, from +10 down to −10, vs. the corresponding power ratio.]
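The dB conversion above can be checked numerically. A minimal sketch (the helper name snr_db is mine, not from the slides):

```python
from math import log10

def snr_db(p_signal, p_noise):
    """Convert a signal-to-noise power ratio to decibels."""
    return 10 * log10(p_signal / p_noise)

print(snr_db(2.0, 1.0))    # doubling the power ratio: about 3.01 dB
print(snr_db(100.0, 1.0))  # a 100x power ratio: 20.0 dB
```

This confirms the "3 dB is a factor of 2" rule of thumb: 10·log10(2) ≈ 3.01.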
SNR Example

Changing the amplification factor (gain) A leads to different SNR values: lower A means lower SNR, and signal quality degrades with lower SNR.

Analysis of Random Processes

Random processes, such as noise, take on different sequences for different trials. Think of trials as different measurement intervals from the same experimental setup (as in lab). For a given trial, we can apply our standard analysis tools and metrics: mean and power calculations, etc. When trying to analyze the ensemble (i.e., all trials) of possible outcomes, we find ourselves in need of new tools and metrics.

Stationary and Ergodic Random Processes

Stationary: statistical behavior is independent of shifts in time in a given trial. This implies noise[k] is statistically indistinguishable from noise[k+1].

Ergodic: statistical sampling can be performed at one sample time (i.e., n = k) across different trials, or across different sample times of the same trial, with no change in the measured result.

Experiment to See Statistical Distribution

Experiment: create histograms of sample values from trials of increasing lengths. The assumption of stationarity implies the histogram should converge to a shape known as a probability density function (PDF).
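The time-average vs. ensemble-average distinction behind ergodicity can be illustrated with a quick simulation. This is a sketch under assumed parameters (the trial counts, the seed, and the uniform [-1, 1] noise model are illustrative choices, not from the slides):

```python
import random

random.seed(2)                  # illustrative seed, for reproducibility
N_TRIALS, N_SAMPLES = 500, 500  # assumed sizes, not from the slides

# Each trial: an independent sequence of uniform noise on [-1, 1]
trials = [[random.uniform(-1, 1) for _ in range(N_SAMPLES)]
          for _ in range(N_TRIALS)]

# Time average: mean across the samples of one trial
time_avg = sum(trials[0]) / N_SAMPLES

# Ensemble average: mean across trials at one fixed sample time n = k
k = 17
ensemble_avg = sum(trial[k] for trial in trials) / N_TRIALS

# For an ergodic process both averages estimate the same mean (here, 0)
print(time_avg, ensemble_avg)
```

Both printed values cluster near zero; for an ergodic process, sampling across time or across trials estimates the same statistics.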
Formalizing the PDF Concept

Define x as a random variable whose PDF has the same shape as the histogram we just obtained. Denote the PDF of x as f_x(x) and scale f_x(x) such that its overall area is 1:

    ∫_{−∞}^{∞} f_x(x) dx = 1

Formalizing Probability

The probability that random variable x takes on a value in the range x1 to x2 is calculated from the PDF of x as

    p(x1 ≤ x ≤ x2) = ∫_{x1}^{x2} f_x(x) dx

Note that probability values are always in the range 0 to 1.

Example Probability Calculation

Consider a PDF that is 0.5 on [−1, 1] and zero elsewhere; this shape is referred to as a uniform PDF. Verify that the overall area is 1:

    ∫_{−1}^{1} 0.5 dx = 1

Probability that x takes on a value between 0.5 and 1:

    p(0.5 ≤ x ≤ 1.0) = ∫_{0.5}^{1} 0.5 dx = 0.25

Examination of Sample Value Distribution

The assumption of ergodicity implies that the value occurring at a given time sample, noise[k], across many different trials has the same PDF as estimated in our previous experiment of many time samples and one trial. Thus we can model noise[k] using the random variable x.
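The uniform-PDF probabilities above can be checked by numerically integrating the PDF. A sketch assuming the [−1, 1] uniform PDF from the example (the helper names uniform_pdf and prob are mine):

```python
def uniform_pdf(x):
    """The example PDF: 0.5 on [-1, 1], zero elsewhere (total area 1)."""
    return 0.5 if -1 <= x <= 1 else 0.0

def prob(pdf, x1, x2, steps=100_000):
    """Approximate p(x1 <= x <= x2) as a midpoint-rule integral of the PDF."""
    dx = (x2 - x1) / steps
    return sum(pdf(x1 + (i + 0.5) * dx) for i in range(steps)) * dx

print(prob(uniform_pdf, -1, 1))   # overall area, ~1.0
print(prob(uniform_pdf, 0.5, 1))  # p(0.5 <= x <= 1), ~0.25
```

The same prob helper works for any PDF, which is handy once we move to the Gaussian.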
Probability Calculation

In a given trial, the probability that noise[k] takes on a value in the range x1 to x2 is computed as

    p(x1 ≤ noise[k] ≤ x2) = ∫_{x1}^{x2} f_x(x) dx

Mean and Variance

The mean of a random variable x, µ_x, corresponds to its average value and is computed as

    µ_x = ∫_{−∞}^{∞} x f_x(x) dx

The variance of a random variable x, σ_x², gives an indication of its variability and is computed as

    σ_x² = ∫_{−∞}^{∞} (x − µ_x)² f_x(x) dx

Compare with the power calculation for x[n].

Visualizing Mean and Variance

Changes in mean shift the center of mass of the PDF. Changes in variance narrow or broaden the PDF (but the area is always equal to 1).

Example Mean and Variance Calculation

For the uniform PDF (0.5 on [−1, 1]):

    Mean:     µ_x = ∫ x f_x(x) dx = ∫_{−1}^{1} 0.5 x dx = [x²/4] from −1 to 1 = 0

    Variance: σ_x² = ∫ (x − µ_x)² f_x(x) dx = ∫_{−1}^{1} 0.5 x² dx = [x³/6] from −1 to 1 = 1/3
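The example mean and variance can be verified the same way, numerically integrating x·f_x(x) and (x − µ_x)²·f_x(x). The moment helper name is an illustrative choice of mine:

```python
def uniform_pdf(x):
    """The example PDF: 0.5 on [-1, 1], zero elsewhere."""
    return 0.5 if -1 <= x <= 1 else 0.0

def moment(pdf, f, a=-1.0, b=1.0, steps=100_000):
    """Midpoint-rule approximation of the integral of f(x) * pdf(x)."""
    dx = (b - a) / steps
    return sum(f(a + (i + 0.5) * dx) * pdf(a + (i + 0.5) * dx)
               for i in range(steps)) * dx

mu = moment(uniform_pdf, lambda x: x)               # mean, ~0
var = moment(uniform_pdf, lambda x: (x - mu) ** 2)  # variance, ~1/3
print(mu, var)
```

The results match the closed-form calculation above: µ_x = 0 and σ_x² = 1/3.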
Noise on a Communication Channel

The net noise observed at the receiver is often the sum of many small, independent random contributions from the electronics and transmission medium. If these independent random variables have finite mean and variance, the Central Limit Theorem says their sum will be normally distributed.

[Figure: histograms of many trials of summing random samples drawn from [−1, 1], using two different source distributions (triangular and uniform); both converge to the same bell shape.]

The Normal Distribution

A normal or Gaussian distribution with mean µ and variance σ² has a PDF described by

    f_x(x) = (1 / (σ √(2π))) e^{−(x−µ)² / (2σ²)}

The normal distribution with µ = 0 and σ² = 1 is called the standard or unit normal.

Cumulative Distribution Function

When analyzing the effects of Gaussian noise, we'll often want to determine the probability that the noise is larger or smaller than a given value x1. Using the probability formula from earlier:

    p(x ≤ x1) = ∫_{−∞}^{x1} (1 / (σ √(2π))) e^{−(x−µ)² / (2σ²)} dx = Φ_{µ,σ²}(x1)

where Φ_{µ,σ²}(x) is the cumulative distribution function (CDF) for the normal distribution with mean µ and variance σ². The CDF for the unit normal is usually written as just Φ(x), and

    Φ_{µ,σ²}(x) = Φ((x − µ) / σ)

Similarly, p(x ≥ x1) = 1 − Φ_{µ,σ²}(x1).

CDF for Unit Normal PDF

    lim_{x→−∞} Φ(x) = 0        Φ(0) = 0.5        lim_{x→∞} Φ(x) = 1

Most math libraries don't provide Φ(x), but they do have a related function, erf(x), the error function:

    erf(x) = (2/√π) ∫_0^x e^{−t²} dt

For Python hackers:

    from math import sqrt
    from scipy.special import erf

    def Phi(x, mu=0, sigma=1):
        t = erf((x - mu) / (sigma * sqrt(2)))
        return 0.5 + 0.5 * t
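The Central Limit Theorem claim can be sanity-checked by comparing the tail of summed uniform noise against the Gaussian prediction Φ(−1). This sketch uses the stdlib math.erf rather than scipy's erf; the seed and trial counts are arbitrary assumptions of mine:

```python
import random
from math import erf, sqrt

def Phi(x, mu=0.0, sigma=1.0):
    """Normal CDF via the stdlib error function (math.erf, Python 3.2+)."""
    return 0.5 + 0.5 * erf((x - mu) / (sigma * sqrt(2)))

random.seed(6)                # illustrative seed, for reproducibility
N_SUM, N_TRIALS = 100, 5000   # assumed sizes, not from the slides

# Each trial: the sum of N_SUM independent uniform samples from [-1, 1]
sums = [sum(random.uniform(-1, 1) for _ in range(N_SUM))
        for _ in range(N_TRIALS)]

# Each uniform term has variance 1/3, so the sum has std dev sqrt(N_SUM/3)
sigma = sqrt(N_SUM / 3)

# Fraction of sums below -1 standard deviation vs. the Gaussian tail Phi(-1)
empirical = sum(s < -sigma for s in sums) / N_TRIALS
print(empirical, Phi(-1))     # both should be near 0.159
```

Even though each individual sample is uniform (nothing like a bell curve), the tail probability of the sum already matches the Gaussian prediction closely at 100 terms.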
Bit Error Rate

The bit error rate (BER), or perhaps more appropriately the bit error ratio, is the number of bits received in error divided by the total number of bits transferred. We can estimate the BER by calculating the probability that a bit will be incorrectly received due to noise.

Using our normal signaling strategy (0V for "0", 1V for "1"), on a noise-free channel with no ISI, the samples at the receiver are either 0V or 1V. Assuming that 0's and 1's are equally probable in the transmit stream, the number of 0V samples is approximately the same as the number of 1V samples, so the mean and power of the noise-free received signal are

    µ_ynf = (1/N) Σ_n y_nf[n] = 1/2

    P_ynf = (1/N) Σ_n (y_nf[n] − 1/2)² = (1/2)(1/2)² + (1/2)(1/2)² = 1/4

p(bit error)

Now assume the channel has Gaussian noise with µ = 0 and variance σ². And we'll assume a digitization threshold of 0.5V. We can calculate the probability that noise[k] is large enough that y[k] = y_nf[k] + noise[k] is received incorrectly:

    p(error | transmitted 1) = Φ_{1,σ²}(0.5) = Φ((0.5 − 1)/σ) = Φ(−0.5/σ)

    p(error | transmitted 0) = 1 − Φ_{0,σ²}(0.5) = Φ_{0,σ²}(−0.5) = Φ((−0.5 − 0)/σ) = Φ(−0.5/σ)

    p(bit error) = p(transmit 0)·p(error | transmitted 0) + p(transmit 1)·p(error | transmitted 1)
                 = 0.5·Φ(−0.5/σ) + 0.5·Φ(−0.5/σ)
                 = Φ(−0.5/σ)

[Figure: plots of noise-free voltage + Gaussian noise.]

BER (no ISI) vs. SNR

We calculated the power of the noise-free signal to be 0.25, and the power of the Gaussian noise is its variance, so

    SNR (dB) = 10 log10(P_signal / P_noise) = 10 log10(0.25 / σ²)

Given an SNR, we can use the formula above to compute σ and then plug that into the formula on the previous slide to compute p(bit error) = BER.

[Figure: BER vs. SNR for various SNR values.]
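Putting the last two slides together, BER as a function of SNR can be computed directly. A sketch (the function names and the SNR sweep values are illustrative choices of mine):

```python
from math import erf, sqrt

def Phi(x):
    """Unit-normal CDF via the stdlib error function."""
    return 0.5 + 0.5 * erf(x / sqrt(2))

def ber(snr_db, p_signal=0.25):
    """BER for 0V/1V signaling, 0.5V threshold, Gaussian noise, no ISI.

    p_signal = 0.25 is the noise-free signal power computed above.
    """
    p_noise = p_signal / (10 ** (snr_db / 10))  # invert the dB formula
    sigma = sqrt(p_noise)                       # noise power = variance
    return Phi(-0.5 / sigma)

for snr in (0, 6, 12):     # illustrative SNR values in dB
    print(snr, ber(snr))   # BER falls rapidly as SNR rises
```

At 0 dB the noise power equals the signal power, so σ = 0.5 and BER = Φ(−1) ≈ 0.159; each additional 6 dB shrinks σ by half, driving the BER down steeply, which is the shape of the BER-vs-SNR plot.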