Introduction

▶ We have focused on the problem of deciding which of two possible signals has been transmitted (binary signal sets).
▶ We will generalize the design of optimum (MPE) receivers to signal sets with M signals (M-ary signal sets).
▶ With binary signal sets, one bit can be transmitted in each signal period T.
▶ With M-ary signal sets, $\log_2(M)$ bits are transmitted simultaneously per T seconds.
▶ Example (M = 4): $00 \to s_0$, $01 \to s_1$, $10 \to s_2$, $11 \to s_3$.

© 2018, B.-P. Paris, ECE 630: Statistical Communication Theory
M-ary Hypothesis Testing Problem

▶ We can formulate the optimum receiver design problem as a hypothesis testing problem:
  $H_0: R_t = s_0 + N_t$
  $H_1: R_t = s_1 + N_t$
  $\vdots$
  $H_{M-1}: R_t = s_{M-1} + N_t$
  with a priori probabilities $p_i = \Pr\{H_i\}$, $i = 0, 1, \ldots, M-1$.
▶ Note:
  ▶ With more than two hypotheses, it is no longer helpful to consider the (likelihood) ratio of pdfs.
  ▶ Instead, we focus on the hypothesis with the maximum a posteriori (MAP) probability or the maximum likelihood (ML).
AWGN Channels

▶ Of most interest in communications are channels where $N_t$ is a white Gaussian noise process with spectral height $\frac{N_0}{2}$.
▶ For these channels, the optimum receivers can be found by arguments completely analogous to those for the binary case.
▶ Note that with M-ary signal sets, the subspace containing all signals will have up to M dimensions.
▶ We will determine the optimum receivers by generalizing the optimum binary receivers for AWGN channels.
Starting Point: Binary MPE Decision Rule

▶ We have shown that the binary MPE decision rule can be expressed equivalently as
  ▶ either
    $\langle r_t, (s_1 - s_0)\rangle \underset{H_0}{\overset{H_1}{\gtrless}} \frac{N_0}{2}\ln\frac{p_0}{p_1} + \frac{\|s_1\|^2 - \|s_0\|^2}{2}$
  ▶ or
    $\|r_t - s_0\|^2 - N_0\ln(p_0) \underset{H_0}{\overset{H_1}{\gtrless}} \|r_t - s_1\|^2 - N_0\ln(p_1).$
▶ The first expression is most useful for deriving the structure of the optimum receiver.
▶ The second form is helpful for interpreting the decision rule in signal space.
M-ary MPE Receiver

▶ The decision rule
  $\langle r_t, (s_1 - s_0)\rangle \underset{H_0}{\overset{H_1}{\gtrless}} \frac{N_0}{2}\ln\frac{p_0}{p_1} + \frac{\|s_1\|^2 - \|s_0\|^2}{2}$
  can be rewritten as
  $\underbrace{\langle r_t, s_1\rangle + \overbrace{\frac{N_0}{2}\ln(p_1) - \frac{\|s_1\|^2}{2}}^{\gamma_1}}_{Z_1} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \underbrace{\langle r_t, s_0\rangle + \overbrace{\frac{N_0}{2}\ln(p_0) - \frac{\|s_0\|^2}{2}}^{\gamma_0}}_{Z_0}$
M-ary MPE Receiver

▶ The decision rule is easily generalized to M signals:
  $\hat{m} = \arg\max_{n=0,\ldots,M-1} \underbrace{\langle r_t, s_n\rangle + \overbrace{\frac{N_0}{2}\ln(p_n) - \frac{\|s_n\|^2}{2}}^{\gamma_n}}_{Z_n}$
▶ The optimum detector selects the hypothesis with the largest decision statistic $Z_n$.
M-ary MPE Receiver

▶ The bias terms $\gamma_n$ account for unequal priors and for differences in signal energy $E_n = \|s_n\|^2$.
▶ Common terms can be omitted:
  ▶ For equally likely signals, $\gamma_n = -\frac{\|s_n\|^2}{2}$.
  ▶ For equal-energy signals, $\gamma_n = \frac{N_0}{2}\ln(p_n)$.
  ▶ For equally likely, equal-energy signals, $\gamma_n = 0$.
M-ary MPE Receiver

[Block diagram: M-ary correlator receiver. The received signal $R_t$ feeds a bank of M correlators $\int_0^T (\cdot)\, s_n\, dt$, $n = 0, \ldots, M-1$; each correlator output is added to its bias $\gamma_n$ to form $Z_n$, and an arg max block produces the decision $\hat{m}$.]
Decision Statistics

▶ The optimum receiver computes the decision statistics
  $Z_n = \langle r_t, s_n\rangle + \frac{N_0}{2}\ln(p_n) - \frac{\|s_n\|^2}{2}.$
▶ Conditioned on the m-th signal having been transmitted:
  ▶ All $Z_n$ are Gaussian random variables.
  ▶ Expected value:
    $E[Z_n \mid H_m] = \langle s_m, s_n\rangle + \frac{N_0}{2}\ln(p_n) - \frac{\|s_n\|^2}{2}$
  ▶ (Co)Variance:
    $E[Z_j Z_k \mid H_m] - E[Z_j \mid H_m]\,E[Z_k \mid H_m] = \langle s_j, s_k\rangle \frac{N_0}{2}$
Exercise: QPSK Receiver

▶ Find the optimum receiver for the following signal set with M = 4 signals:
  $s_n = \sqrt{\frac{2E}{T}} \cos(2\pi t/T + n\pi/2)$
  for $0 \le t \le T$ and $n = 0, \ldots, 3$.
Decision Regions

▶ The decision regions $\Gamma_n$ and error probabilities are best understood by generalizing the binary decision rule:
  $\|r_t - s_0\|^2 - N_0\ln(p_0) \underset{H_0}{\overset{H_1}{\gtrless}} \|r_t - s_1\|^2 - N_0\ln(p_1)$
▶ For M-ary signal sets, the decision rule generalizes to
  $\hat{m} = \arg\min_{n=0,\ldots,M-1} \|r_t - s_n\|^2 - N_0\ln(p_n).$
▶ This simplifies to
  $\hat{m} = \arg\min_{n=0,\ldots,M-1} \|r_t - s_n\|^2$
  for equally likely signals.
▶ The optimum receiver decides in favor of the signal $s_n$ that is closest to the received signal.
Decision Regions (equally likely signals)

▶ For discussing decision regions, it is best to express the decision rule in terms of the representation obtained with the orthonormal basis $\{\Phi_k\}$, where the basis signals $\Phi_k$ span the space that contains all signals $s_n$, $n = 0, \ldots, M-1$.
  ▶ Recall that we can obtain these basis signals via the Gram-Schmidt procedure from the signal set.
  ▶ There are at most M orthonormal basis signals.
▶ Because of Parseval's relationship, an equivalent decision rule is
  $\hat{m} = \arg\min_{n=0,\ldots,M-1} \|\vec{R} - \vec{s}_n\|,$
  where $\vec{R}$ has elements $R_k = \langle r_t, \Phi_k\rangle$ and $\vec{s}_n$ has elements $s_{n,k} = \langle s_n, \Phi_k\rangle$.
Decision Regions

▶ The decision region $\Gamma_n$, where the detector decides that the n-th signal was sent, is
  $\Gamma_n = \{\vec{r} : \|\vec{r} - \vec{s}_n\| < \|\vec{r} - \vec{s}_m\| \text{ for all } m \ne n\}.$
▶ The decision region $\Gamma_n$ is the set of all points $\vec{r}$ that are closer to $\vec{s}_n$ than to any other signal point.
▶ The decision regions are formed by linear segments that are perpendicular bisectors between pairs of signal points.
▶ The resulting partition is also called a Voronoi partition.
Example: QPSK

[Figure: QPSK constellation in the $(\Phi_0, \Phi_1)$ plane; signal points $s_0, s_1, s_2, s_3$, one per quadrant, with decision boundaries along the axes.]

$s_n = \sqrt{2/T} \cos(2\pi f_c t + n\pi/2 + \pi/4)$, for $n = 0, \ldots, 3$.
Example: 8-PSK

[Figure: 8-PSK constellation in the $(\Phi_0, \Phi_1)$ plane; signal points $s_0, \ldots, s_7$ equally spaced on a circle, with pie-slice decision regions.]

$s_n = \sqrt{2/T} \cos(2\pi f_c t + n\pi/4)$, for $n = 0, \ldots, 7$.
Example: 16-QAM

[Figure: 16-QAM constellation in the $(\Phi_0, \Phi_1)$ plane; signal points $s_0, \ldots, s_{15}$ on a $4 \times 4$ grid, with rectangular decision regions.]

$s_n = \sqrt{2/T} \,(A_I \cos(2\pi f_c t) + A_Q \sin(2\pi f_c t))$ with $A_I, A_Q \in \{-3, -1, 1, 3\}$.
Symbol Energy and Bit Energy

▶ We have seen that error probabilities decrease when the signal energy increases, because the distance between signals increases.
▶ We will see further that error rates in AWGN channels depend only on
  ▶ the signal-to-noise ratio $\frac{E_b}{N_0}$, where $E_b$ is the average energy per bit, and
  ▶ the geometry of the signal constellation.
▶ To focus on the impact of the signal geometry, we will fix either
  ▶ the average energy per symbol $E_s = \frac{1}{M}\sum_{n=0}^{M-1} \|s_n\|^2$ or
  ▶ the average energy per bit $E_b = \frac{E_s}{\log_2(M)}$.
Example: QPSK

▶ QPSK signals are given by
  $s_n = \sqrt{\frac{2E_s}{T}} \cos(2\pi f_c t + n\pi/2 + \pi/4)$, for $n = 0, \ldots, 3$.
▶ Each of the four signals $s_n$ has energy $E_n = \|s_n\|^2 = E_s$.
▶ Hence,
  ▶ the average symbol energy is $E_s$,
  ▶ the average bit energy is $E_b = \frac{E_s}{\log_2(4)} = \frac{E_s}{2}$.
Example: 8-PSK

▶ 8-PSK signals are given by
  $s_n = \sqrt{2E_s/T} \cos(2\pi f_c t + n\pi/4)$, for $n = 0, \ldots, 7$.
▶ Each of the eight signals $s_n$ has energy $E_n = \|s_n\|^2 = E_s$.
▶ Hence,
  ▶ the average symbol energy is $E_s$,
  ▶ the average bit energy is $E_b = \frac{E_s}{\log_2(8)} = \frac{E_s}{3}$.
Example: 16-QAM

▶ 16-QAM signals can be written as
  $s_n = \sqrt{\frac{2E_0}{T}} \,(a_I \cos(2\pi f_c t) + a_Q \sin(2\pi f_c t))$ with $a_I, a_Q \in \{-3, -1, 1, 3\}$.
▶ There are
  ▶ 4 signals with energy $(1^2 + 1^2)E_0 = 2E_0$,
  ▶ 8 signals with energy $(3^2 + 1^2)E_0 = 10E_0$,
  ▶ 4 signals with energy $(3^2 + 3^2)E_0 = 18E_0$.
▶ Hence,
  ▶ the average symbol energy is $E_s = 10E_0$,
  ▶ the average bit energy is $E_b = \frac{E_s}{\log_2(16)} = \frac{5E_0}{2}$.
Energy Efficiency

▶ We will see that the influence of the signal geometry is captured by the energy efficiency
  $\eta_P = \frac{d_{\min}^2}{E_b},$
  where $d_{\min}$ is the smallest distance between any pair of signals in the constellation.
▶ Examples:
  ▶ QPSK: $d_{\min} = \sqrt{2E_s}$ and $E_b = \frac{E_s}{2}$, thus $\eta_P = 4$.
  ▶ 8-PSK: $d_{\min} = \sqrt{(2-\sqrt{2})E_s}$ and $E_b = \frac{E_s}{3}$, thus $\eta_P = 3(2-\sqrt{2}) \approx 1.76$.
  ▶ 16-QAM: $d_{\min} = 2\sqrt{E_0}$ and $E_b = \frac{5E_0}{2}$, thus $\eta_P = \frac{8}{5}$.
▶ Note that energy efficiency decreases with the size of the constellation for 2-dimensional constellations.
Computing Probability of Symbol Error

▶ When decision boundaries intersect at right angles, it is possible to compute the error probability exactly in closed form; the result will be in terms of the Q-function.
  ▶ This happens whenever the signal points form a rectangular grid in signal space.
  ▶ Examples: QPSK and 16-QAM.
▶ When decision regions are not rectangular, closed-form expressions are not available; computation requires integrals over the Q-function.
  ▶ We will derive good bounds on the error rate for these cases.
  ▶ For exact results, numerical integration is required.
Illustration: 2-dimensional Rectangle

▶ Assume that the n-th signal was transmitted and that the representation for this signal is $\vec{s}_n = (s_{n,0}, s_{n,1})'$.
▶ Assume that the decision region $\Gamma_n$ is a rectangle
  $\Gamma_n = \{\vec{r} = (r_0, r_1)' : s_{n,0} - a_1 < r_0 < s_{n,0} + a_2 \text{ and } s_{n,1} - b_1 < r_1 < s_{n,1} + b_2\}.$
  ▶ Note: we have assumed that the sides of the rectangle are parallel to the axes in signal space.
  ▶ Since rotation and translation of signal space do not affect distances, this can be done without affecting the error probability.
▶ Question: What is the conditional error probability, assuming that $s_n$ was sent?
Illustration: 2-dimensional Rectangle

▶ In terms of the random variables $R_k = \langle r_t, \Phi_k\rangle$, with $k = 0, 1$, an error occurs if
  $\underbrace{(R_0 \le s_{n,0} - a_1 \text{ or } R_0 \ge s_{n,0} + a_2)}_{\text{error event 1}} \text{ or } \underbrace{(R_1 \le s_{n,1} - b_1 \text{ or } R_1 \ge s_{n,1} + b_2)}_{\text{error event 2}}.$
▶ Note that the two error events are not mutually exclusive.
▶ Therefore, it is better to consider correct decisions instead, i.e., $\vec{R} \in \Gamma_n$:
  $s_{n,0} - a_1 < R_0 < s_{n,0} + a_2 \text{ and } s_{n,1} - b_1 < R_1 < s_{n,1} + b_2.$
Illustration: 2-dimensional Rectangle

▶ We know that $R_0$ and $R_1$ are
  ▶ independent, because the $\Phi_k$ are orthogonal,
  ▶ with means $s_{n,0}$ and $s_{n,1}$, respectively,
  ▶ and variance $\frac{N_0}{2}$.
▶ Hence, with $N_k = R_k - s_{n,k}$ denoting the zero-mean noise components, the probability of a correct decision is
  $\Pr\{c \mid s_n\} = \Pr\{-a_1 < N_0 < a_2\} \cdot \Pr\{-b_1 < N_1 < b_2\}$
  $= \int_{s_{n,0}-a_1}^{s_{n,0}+a_2} p_{R_0 \mid s_n}(r_0)\,dr_0 \cdot \int_{s_{n,1}-b_1}^{s_{n,1}+b_2} p_{R_1 \mid s_n}(r_1)\,dr_1$
  $= \left(1 - Q\!\left(\tfrac{a_1}{\sqrt{N_0/2}}\right) - Q\!\left(\tfrac{a_2}{\sqrt{N_0/2}}\right)\right)\left(1 - Q\!\left(\tfrac{b_1}{\sqrt{N_0/2}}\right) - Q\!\left(\tfrac{b_2}{\sqrt{N_0/2}}\right)\right).$
Exercise: QPSK

▶ Find the error rate for the signal set
  $s_n = \sqrt{2E_s/T} \cos(2\pi f_c t + n\pi/2 + \pi/4)$, for $n = 0, \ldots, 3$.
▶ Answer (recall $\eta_P = \frac{d_{\min}^2}{E_b} = 4$ for QPSK):
  $\Pr\{e\} = 2Q\!\left(\sqrt{\tfrac{E_s}{N_0}}\right) - Q^2\!\left(\sqrt{\tfrac{E_s}{N_0}}\right)$
  $= 2Q\!\left(\sqrt{\tfrac{2E_b}{N_0}}\right) - Q^2\!\left(\sqrt{\tfrac{2E_b}{N_0}}\right)$
  $= 2Q\!\left(\sqrt{\tfrac{\eta_P E_b}{2N_0}}\right) - Q^2\!\left(\sqrt{\tfrac{\eta_P E_b}{2N_0}}\right).$
Exercise: 16-QAM

▶ Find the error rate for the signal set ($a_I, a_Q \in \{-3, -1, 1, 3\}$)
  $s_n = \sqrt{2E_0/T}\, a_I \cos(2\pi f_c t) + \sqrt{2E_0/T}\, a_Q \sin(2\pi f_c t).$
▶ Answer (recall $\eta_P = \frac{d_{\min}^2}{E_b} = \frac{8}{5}$ for 16-QAM):
  $\Pr\{e\} = 3Q\!\left(\sqrt{\tfrac{2E_0}{N_0}}\right) - \tfrac{9}{4}Q^2\!\left(\sqrt{\tfrac{2E_0}{N_0}}\right)$
  $= 3Q\!\left(\sqrt{\tfrac{4E_b}{5N_0}}\right) - \tfrac{9}{4}Q^2\!\left(\sqrt{\tfrac{4E_b}{5N_0}}\right)$
  $= 3Q\!\left(\sqrt{\tfrac{\eta_P E_b}{2N_0}}\right) - \tfrac{9}{4}Q^2\!\left(\sqrt{\tfrac{\eta_P E_b}{2N_0}}\right).$