EE 121 - Introduction to Digital Communications
Homework 8 Solutions
May 7, 2008

1. (a) The error probability is $P_e = Q(\sqrt{\mathrm{SNR}})$.

[Figure: $P_e$ versus SNR (dB) for 2-PAM.]

(b) $\mathrm{SNR}_{\min} = 2^{2C} - 1$. Thus to reliably communicate 1 bit per channel use, we need $\mathrm{SNR} > 3 \approx 5$ dB. At an error probability of $10^{-4}$, 2-PAM requires roughly 12 dB, an additional 7 dB beyond the minimum. At an error probability of $10^{-5}$ it requires 13 dB, an additional 8 dB beyond the minimum. As the target error probability decreases, the gap to the minimum SNR increases without bound (albeit very slowly). This is consistent with what was learnt in class: coding is required to achieve arbitrarily low error probabilities at finite SNR.

(c) For M-PAM, $P_e = 2(1 - 1/M)\,Q\!\left(\sqrt{\mathrm{SNR}}/(M-1)\right)$; see the figure above.
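As a quick numerical sanity check of parts (a) and (b), the required SNR at a given error probability and its gap to $\mathrm{SNR}_{\min}$ can be computed with the standard-library complementary error function. This is only a sketch: the helper names `Q` and `required_snr_db` are ours, not from the course.

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def required_snr_db(pe):
    """Bisect for the linear SNR solving Q(sqrt(SNR)) = pe; return it in dB."""
    lo, hi = 0.0, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Q(math.sqrt(mid)) > pe:   # error still too high -> need more SNR
            lo = mid
        else:
            hi = mid
    return 10.0 * math.log10(lo)

snr_min_db = 10.0 * math.log10(2 ** (2 * 1) - 1)   # C = 1 bit/use -> SNR_min = 3
for pe in (1e-4, 1e-5):
    snr = required_snr_db(pe)
    print(f"Pe = {pe:.0e}: SNR = {snr:.1f} dB, gap = {snr - snr_min_db:.1f} dB")
```

The computed gap grows as the target error probability shrinks, matching the observation in part (b) that the gap to the minimum SNR increases without bound.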
(d) For M-PPM, define $\mathrm{SNR} = E_b/\sigma^2$; then $P_e < (M-1)\,Q\!\left(\sqrt{\tfrac{\log_2 M}{2}\,\mathrm{SNR}}\right)$. As $M \to \infty$, the gap decreases.

[Figure: $P_e$ versus SNR (dB) for M-PPM, $M = 2, 4, 8, 16$.]

2. (a) With 1 MHz of bandwidth and ideal sinc pulses we have 2 Msamples/s. To achieve $R = 4$ Mbits/s we need to send 2 bits per sample. This requires either 4-PAM with no repetition coding, 8-PAM with 1.5x repetition, or 16-PAM with 2x repetition. PPM can only achieve rates of at most 1 bit per sample, so we rule this scheme out. From the graph plotted in Q1, we see that using 4-PAM to achieve an error probability of $10^{-4}$ requires an SNR of around 22 dB. For this scheme $E_b/N_0 \approx 22 + 10\log_{10}(1/2) = 19$ dB. As using M-PAM with repetition does not improve the error probability for the same amount of total energy expended, 8-PAM with 1.5x repetition and 16-PAM with 2x repetition will use the same energy per bit to achieve a $10^{-4}$ error probability. Thus 4-PAM with no repetition, 8-PAM with 1.5x repetition, and 16-PAM with 2x repetition are all best choices in terms of minimizing energy use whilst meeting the required error probability and data rate.

(b) To achieve $R = 150$ kbits/sec we need to send 0.075 bits per sample (or 1 bit every 13.33 samples). This can be achieved using 2-PAM and 13x repetition. The required SNR to achieve the target error probability is $12 - 10\log_{10}(13)$ dB. The corresponding $E_b/N_0 \approx 12 - 10\log_{10}(13) + 10\log_{10}(13) = 12$ dB. If we use M-PAM with $M > 2$, $E_b/N_0$ remains unchanged. If we use M-PPM we need $\log_2 M / M \geq 0.075$. From the plot in Q1 we see that at an error probability of $10^{-4}$, the larger the value of $M$ the smaller the required SNR. Thus we choose $M = 64$, the maximum value in the prescribed range, in which case we can compute the required $E_b/N_0$ to be around 8 dB. So this is the best option. Another possibility is to use only a fraction of the allocated bandwidth. Say we use a
bandwidth of $0.075W$; then the noise variance is reduced by a factor of $1/0.075$ and the SNR is boosted by $10\log_{10}(1/0.075) \approx 11$ dB. With 2-PAM we would now be able to communicate at $R = 150$ kbits/sec using an SNR of $12 - 11 = 1$ dB and an $E_b/N_0 \approx 1$ dB. This is less energy per bit than 64-PPM, so it is a more desirable option if we are not restricted to using the full bandwidth allocated.

3. (a) A system diagram is shown below, where $g(t) = \mathrm{sinc}(t/T)$. We have

$y(t) = \Bigl(\sum_{m>0} x[m]\, g(t - mT)\Bigr) * h(t) + w(t)$

$\tilde{y}(t) = \Bigl(\sum_{m>0} x[m]\, g(t - mT)\Bigr) * h(t) * g(-t) + \tilde{w}(t)$

$y[n] = \tilde{y}(nT) = \Bigl.\Bigl(\sum_{m>0} x[m]\, g(t - mT)\Bigr) * h(t) * g(-t)\Bigr|_{t=nT} + \tilde{w}[n]$

[System diagram: $x[m] \to$ pulse shaping $g(t) \to$ channel $h(t) \to$ add noise $w(t) \to$ matched filter $g(-t) \to$ sample at $t = mT \to y[m]$]

(b) As the set of functions $\{\mathrm{sinc}(t/T - m)\}_{m\in\mathbb{Z}}$ forms an orthonormal basis for the space of bandlimited $L_2$ functions, with inner product $\langle f, g\rangle = \int f(t)\,g(t)\,dt$, we can write the filtered noise in terms of this basis as

$\tilde{w}(t) = \sum_{m=-\infty}^{+\infty} \alpha_m\, \mathrm{sinc}(t/T - m),$

for some coefficients $\alpha_m$. Solving for $\alpha_m$ we find $\alpha_m = \tilde{w}[m]$, which yields

$\frac{1}{\tau}\int_{-\tau/2}^{+\tau/2} E\bigl[\tilde{w}(t)^2\bigr]\, dt = \frac{1}{\tau}\int_{-\tau/2}^{+\tau/2} E\Bigl[\Bigl(\sum_{m=-\infty}^{+\infty} \tilde{w}[m]\,\mathrm{sinc}(t/T - m)\Bigr)^2\Bigr]\, dt = \frac{1}{\tau}\sum_{m=-\infty}^{+\infty}\sum_{n=-\infty}^{+\infty} E\bigl[\tilde{w}[m]\tilde{w}[n]\bigr] \int_{-\tau/2}^{+\tau/2} \mathrm{sinc}(t/T - m)\,\mathrm{sinc}(t/T - n)\, dt.$
Now if, for any $\epsilon > 0$, $|m| > (1+\epsilon)\tau/2$ or $|n| > (1+\epsilon)\tau/2$, then

$\int_{-\tau/2}^{+\tau/2} \mathrm{sinc}(t/T - m)\,\mathrm{sinc}(t/T - n)\, dt \leq \int_{-\tau/2}^{+\tau/2} \bigl|\mathrm{sinc}(t/T - m)\bigr|\,\bigl|\mathrm{sinc}(t/T - n)\bigr|\, dt \to 0$

as $\tau \to \infty$, using the decay bound $|\mathrm{sinc}(x)| \leq 1/(\pi|x|)$: for $|t| \leq \tau/2$ the argument of at least one of the two sinc functions exceeds $\epsilon\tau/2$ in magnitude. Thus

$\lim_{\tau\to\infty} \frac{1}{\tau}\int_{-\tau/2}^{+\tau/2} E\bigl[\tilde{w}(t)^2\bigr]\, dt = \lim_{\tau\to\infty} \frac{1}{\tau} \sum_{m=-(1+\epsilon)\tau/2}^{+(1+\epsilon)\tau/2} \sum_{n=-(1+\epsilon)\tau/2}^{+(1+\epsilon)\tau/2} E\bigl[\tilde{w}[m]\tilde{w}[n]\bigr]\, \delta_{nm} = \lim_{\tau\to\infty} \frac{1}{\tau} \sum_{m=-(1+\epsilon)\tau/2}^{+(1+\epsilon)\tau/2} E\bigl[\tilde{w}[m]^2\bigr].$

As this result holds for any $\epsilon > 0$, we let $\epsilon \to 0$ to get

$\lim_{n\to\infty} \frac{1}{n}\int_{-n/2}^{+n/2} E\bigl[\tilde{w}(t)^2\bigr]\, dt = \lim_{n\to\infty} \frac{1}{n} \sum_{m=-n/2}^{+n/2} E\bigl[\tilde{w}[m]^2\bigr].$

(c) We can essentially use the same proof; we just need to show that if, for any $\epsilon > 0$, $|m| > (1+\epsilon)\tau/2$ or $|n| > (1+\epsilon)\tau/2$, then

$\int_{-\tau/2}^{+\tau/2} g(t/T - m)\, g(t/T - n)\, dt \to 0 \qquad (1)$
as $\tau \to \infty$. As the functions $g(t/T - m)$ are orthonormal, they have finite energy, i.e.

$\int g^2(t)\, dt < \infty.$

This guarantees that property (1) holds.

4. (a) $D$ is also measured in seconds. The variable $t$ represents the number of seconds after the signal enters the channel. The parameter $D$ dictates the amount of memory in the channel.

(b) The impulse response is non-zero for all $t > 0$, an infinite duration. However, we can define the $\epsilon$-delay spread as the smallest value of $t$ such that a fraction $1-\epsilon$ of the total energy is contained in the impulse response up to time $t$, i.e.

$\epsilon\text{-delay spread} = \min t_0 \quad \text{s.t.} \quad \int_0^{t_0} |h(\tau)|^2\, d\tau > (1-\epsilon) \int_0^{\infty} |h(\tau)|^2\, d\tau.$

For the channel in question, the $\epsilon$-delay spread is $\frac{D}{2}\log\frac{1}{\epsilon}$, which is proportional to $D$, as expected.

(c) The discrete-time channel response is

$h_k = \int_0^{\infty} \frac{1}{D}\, e^{-t/D}\, \mathrm{sinc}\!\left(k - \frac{t}{T}\right) dt.$

With $T = 10^{-6}$ seconds, we have the following plot.

[Figure: $h_k$ versus $k$ for $D = 10^{-5}, 10^{-6}, 10^{-7}, 10^{-8}$.]
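The closed-form $\epsilon$-delay spread from part 4(b) can be sanity-checked numerically by accumulating the energy of $h(t) = \frac{1}{D}e^{-t/D}$ until a $1-\epsilon$ fraction is reached. A minimal sketch (the function names, step size, and the specific $h(t)$ normalization are our assumptions):

```python
import math

def delay_spread(D, eps):
    """Closed form from part 4(b): (D/2) * ln(1/eps)."""
    return 0.5 * D * math.log(1.0 / eps)

def delay_spread_numeric(D, eps):
    """Accumulate |h(t)|^2 for h(t) = (1/D) e^{-t/D} (assumed form) until a
    1-eps fraction of the total energy 1/(2D) has been collected."""
    dt = D / 1e4
    total = 1.0 / (2.0 * D)
    acc, t = 0.0, 0.0
    while acc < (1.0 - eps) * total:
        acc += (math.exp(-t / D) / D) ** 2 * dt
        t += dt
    return t

D, eps = 1e-6, 0.01
print(delay_spread(D, eps), delay_spread_numeric(D, eps))
```

The numerical value agrees with the closed form, and doubling $D$ doubles the delay spread, confirming the proportionality to $D$ noted above.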
The discrete-time channel is effectively memoryless for $D = 10^{-7}$ and $D = 10^{-8}$. This is consistent with what was taught in class, as for these values of $D$, most of the symbol energy arrives in the first tap.
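The claim that most of the symbol energy arrives in the first tap for small $D$ can be checked by evaluating the $h_k$ integral from part 4(c) numerically. A rough sketch (the quadrature parameters, truncation choices, and function names are ours):

```python
import math

T = 1e-6  # symbol period, seconds

def sinc(x):
    """Normalized sinc: sin(pi x)/(pi x)."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def tap(k, D, n=20000):
    """h_k = int_0^inf (1/D) e^{-t/D} sinc(k - t/T) dt, via a midpoint Riemann sum
    truncated where the exponential and sinc envelopes are negligible."""
    t_max = 30.0 * max(D, T)
    dt = t_max / n
    return sum(math.exp(-(i + 0.5) * dt / D) / D
               * sinc(k - (i + 0.5) * dt / T) * dt
               for i in range(n))

def first_tap_fraction(D, num_taps=31):
    """Fraction of the (truncated) tap energy carried by h_0."""
    taps = [tap(k, D) for k in range(num_taps)]
    return taps[0] ** 2 / sum(h * h for h in taps)

for D in (1e-5, 1e-7):
    print(f"D = {D:.0e}: first-tap energy fraction = {first_tap_fraction(D):.3f}")
```

For $D = 10^{-7}$ nearly all the energy lands in $h_0$, while for $D = 10^{-5}$ it is spread across many taps, consistent with the memoryless/ISI distinction drawn above.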