List Decoding: Geometrical Aspects and Performance Bounds
Maja Lončar
Department of Information Technology, Lund University, Sweden
Summer Academy: Progress in Mathematics for Communication Systems, Bremen, Germany, July 2007
Outline
- Maximum-likelihood decoding
- List decoding
- List configuration matrix
- List distance
- List error probability for a worst-case list
- New bound for the list decoding error probability
- Summary
Maximum-Likelihood Decoding
Let S = {s_0, s_1, ..., s_{M−1}} be a set of M distinct signal vectors used to communicate over the AWGN channel. Let s ∈ S denote the transmitted signal. Then the received signal is r = s + n.
Optimum decision strategy at the receiver: minimize the error probability,
  ŝ_MAP = arg min_{ŝ∈S} Pr(ŝ ≠ s) = arg max_{ŝ∈S} Pr(ŝ = s)
that is, for every r, decide in favour of the most probable signal s ∈ S:
  ŝ_MAP = arg max_{s∈S} p(s|r)
This is the maximum a posteriori probability (MAP) decoder. From the Bayes formula we have
  ŝ_MAP = arg max_{s∈S} { p(r|s)p(s)/p(r) } = arg max_{s∈S} { p(r|s)p(s) }
Maximum-Likelihood Decoding
If the signals s ∈ S are a priori equiprobable, p(s) = 1/M, the MAP decoder reduces to
  ŝ_ML = arg max_{s∈S} p(r|s)
This is the maximum-likelihood (ML) decoder. For the AWGN channel
  p(r|s) ∝ e^{−||r−s||²/N_0}
so that
  ŝ_ML = arg min_{s∈S} ||r − s||
The ML decoder is the minimum-Euclidean-distance decoder: it chooses the signal point with the smallest d_E(r, s) = ||r − s||.
A decoding error occurs if d_E(r, s_i) ≤ d_E(r, s_0) for some i. Let ε_{i|s_0} denote this event. Union bound on the ML error probability:
  P_{e|s_0} ≤ Σ_{i=1}^{M−1} Pr(ε_{i|s_0})
[Figure: received vector r and its distances d_E(r, s_i) to the signal points s_0, s_1, s_2, s_3.]
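The union bound above is straightforward to evaluate numerically. The following sketch is illustrative (the distances and noise level are chosen for the example, not taken from the slides): each pairwise error event ε_{i|s_0} on the AWGN channel has probability Q(d_E(s_0, s_i)/(2σ)), and the bound sums these terms.

```python
import math

def q_func(x: float) -> float:
    """Gaussian tail function Q(x) = Pr(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound(distances, sigma):
    """Union bound on P(e | s0): sum of pairwise error probabilities
    Q(d_E / (2*sigma)) over all competing signals."""
    return sum(q_func(d / (2.0 * sigma)) for d in distances)

# Example: three competitors at distances 2, 2 and 2*sqrt(2) from s0, sigma = 0.5.
bound = union_bound([2.0, 2.0, 2.0 * math.sqrt(2.0)], 0.5)
```

For a single competitor the bound is exact and equals the pairwise error probability.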
List Decoding
Generalization of ML decoding to the L ≥ 1 most likely codewords: the list decoder finds a list of the L best codewords. For the Gaussian channel, these are the L codewords s_i closest to the received vector r.
A list decoding error occurs if the transmitted signal s_0 is not on the list, that is, if
  d_E(r, s_i) ≤ d_E(r, s_0), i = 1, 2, ..., L
or, equivalently, if the projections of the noise n onto the directions (s_i − s_0) are large enough:
  t_i = ⟨n, s_i − s_0⟩ ≥ d_E²(s_i, s_0)/2 = d²_{E0i}/2, i = 1, 2, ..., L
that is,
  t = (t_1 t_2 ... t_L) ≥ (d²_{E01} d²_{E02} ... d²_{E0L})/2 = w/2
[Figure: received vector r and its distances d_E(r, s_i) to the signal points s_0, s_1, s_2, s_3.]
List Configuration Matrix
The components of t, t_i = ⟨n, s_i − s_0⟩, are Gaussian distributed with covariance matrix
  E[t^T t] = σ² W
where W is the Gram matrix of the vectors (s_i − s_0), i = 1, 2, ..., L, with elements
  w_ij = ⟨s_i − s_0, s_j − s_0⟩ = (d²_{E0i} + d²_{E0j} − d²_{Eij})/2

  W = [ d²_{E01}                         (d²_{E01} + d²_{E02} − d²_{E12})/2  ...  (d²_{E01} + d²_{E0L} − d²_{E1L})/2 ]
      [ (d²_{E01} + d²_{E02} − d²_{E12})/2  d²_{E02}                         ...  (d²_{E02} + d²_{E0L} − d²_{E2L})/2 ]
      [ ...                               ...                                ...  ...                                ]
      [ (d²_{E01} + d²_{E0L} − d²_{E1L})/2  (d²_{E02} + d²_{E0L} − d²_{E2L})/2  ...  d²_{E0L}                        ]

W is the list configuration matrix. It determines the list error probability for a given list:
  P_eL(W) = Pr(t ≥ w/2)
Union-type bound on the list decoding error probability:
  P_eL ≤ Σ_W N(W) P_eL(W)
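The construction of W as a Gram matrix, and the identity relating its entries to the pairwise squared distances, can be checked numerically. A minimal sketch (the points are chosen for illustration):

```python
import numpy as np

def list_configuration_matrix(s0, signals):
    """Gram matrix W of the difference vectors (s_i - s_0), i = 1..L."""
    D = np.asarray(signals, dtype=float) - np.asarray(s0, dtype=float)
    return D @ D.T

s0 = np.array([0.0, 0.0])
s1 = np.array([0.0, 2.0])
s2 = np.array([1.0, 3.0])
W = list_configuration_matrix(s0, [s1, s2])

# Check the identity w_ij = (d_E0i^2 + d_E0j^2 - d_Eij^2) / 2.
d01s = np.sum((s1 - s0) ** 2)
d02s = np.sum((s2 - s0) ** 2)
d12s = np.sum((s2 - s1) ** 2)
assert np.isclose(W[0, 1], (d01s + d02s - d12s) / 2)
```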
List Distance
The list radius R_L, for a given signal set S_L = {s_0, s_1, ..., s_L} and a given reference (transmitted) signal s_0, is the distance from s_0 to the closest point of the list-error region. It is the radius of the smallest sphere S that contains (encompasses) all the signal points of S_L (points on or inside the sphere).
The Euclidean list distance for a given signal set S_L is d_EL = 2R_L.
Theorem 1: The radius R_L of the sphere S that passes through all the points of S_L (all points on the sphere) is given by
  R_L(W) = (1/2) √(w W^{−1} w^T)
Theorem 2: The list radius R_L of the smallest sphere S that passes through s_0 and encompasses the points s_i, i = 1, 2, ..., L, is given by
  R_L(W) = max_{I : w_I adj(W_I) ≥ 0} { (1/2) √(w_I W_I^{−1} w_I^T) }
List Distance
Example: L = 2
  s_0 = (0 0), s_1 = (0 2), s_2 = (1 3)
  W = [ 4  6 ]
      [ 6 10 ]
Theorem 1 gives the circumradius (1/2)√(w W^{−1} w^T) = √5. But w adj(W) = (−20 16) has a negative component, so the circumsphere is not the smallest sphere through s_0 enclosing all points; Theorem 2 gives
  R_L = √10/2, d_EL = 2R_L = √10
[Figure: points s_0, s_1, s_2, the sphere of radius R_L, and the list error region.]
List Distance
Example: L = 2
  s_1 = (0 0), s_0 = (0 2), s_2 = (1 3)
  W = [ 4 −2 ]
      [−2  2 ]
  R_L = (1/2)√(w W^{−1} w^T) = √5
Here w adj(W) = (12 16) ≥ 0, so the circumsphere is the smallest enclosing sphere:
  R_L = √5, d_EL = 2R_L = √20
[Figure: points s_0, s_1, s_2 and the sphere of radius R_L.]
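Both theorems can be verified numerically on the two examples above. A small sketch (the brute-force subset search over I is my own stand-in, adequate for small L):

```python
import numpy as np
from itertools import combinations

def circumradius(W):
    """Theorem 1: radius of the sphere through s_0 and all L listed points."""
    w = np.diag(W)
    return 0.5 * np.sqrt(w @ np.linalg.solve(W, w))

def list_radius(W):
    """Theorem 2: radius of the smallest sphere through s_0 enclosing all
    points, maximized over index subsets I with w_I adj(W_I) >= 0."""
    L = W.shape[0]
    best = 0.0
    for k in range(1, L + 1):
        for I in combinations(range(L), k):
            WI = W[np.ix_(I, I)]
            wI = np.diag(WI)
            adjWI = np.linalg.inv(WI) * np.linalg.det(WI)  # adjugate of W_I
            if np.all(wI @ adjWI >= -1e-9):
                best = max(best, 0.5 * np.sqrt(wI @ np.linalg.solve(WI, wI)))
    return best

# First example: circumradius sqrt(5), but the smallest enclosing sphere
# through s_0 is smaller: R_L = sqrt(10)/2.
W1 = np.array([[4.0, 6.0], [6.0, 10.0]])
# Second example: w adj(W) = (12 16) >= 0, so R_L equals the circumradius.
W2 = np.array([[4.0, -2.0], [-2.0, 2.0]])
```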
Minimum List Distance
The minimum list radius for list size L is
  R_Lmin = min_W {R_L(W)} = min_W max_{I : w_I adj(W_I) ≥ 0} { (1/2) √(w_I W_I^{−1} w_I^T) }
The minimum Euclidean list distance for list size L is d_ELmin = 2R_Lmin.
For binary linear codes and BPSK signaling, d²_{Eij} = 4E_s d_{Hij}. The minimum Hamming list distance of a code, for list size L, is d_HLmin = d²_{ELmin}/(4E_s).
The minimum list distance determines the performance of the list decoder at higher SNR, in the same way as the minimum distance determines the performance of the ML decoder.
Theorem 3: For any binary linear code, the minimum Hamming list distance satisfies
  d_HLmin ≥ (2L/(L+1)) d_Hmin
For any binary linear code with odd d_Hmin, we have
  d_HLmin ≥ (2L/(L+1)) d_Hmin + (L−1)/(L+1)
Minimum List Distance
For even d_Hmin: the worst-case list configuration yielding the minimum list distance
  d_HLmin = (2L/(L+1)) d_Hmin
consists of L codewords of weight d_Hmin at pairwise distances d_Hmin:
  W_H = [ d_Hmin     d_Hmin/2   ...  d_Hmin/2 ]
        [ d_Hmin/2   d_Hmin     ...  d_Hmin/2 ]
        [ ...        ...        ...  ...      ]
        [ d_Hmin/2   d_Hmin/2   ...  d_Hmin   ]
The codewords form an L-dimensional simplex.
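The simplex claim is easy to check numerically: build the worst-case W_H (diagonal d_Hmin, off-diagonal d_Hmin/2) and compare w W_H^{−1} w^T with the closed form 2L d_Hmin/(L+1). A minimal verification sketch of my own:

```python
import numpy as np

def simplex_W(L, d):
    """Worst-case Hamming list configuration for even d_Hmin = d:
    L codewords of weight d at pairwise Hamming distances d."""
    return np.full((L, L), d / 2.0) + np.eye(L) * (d / 2.0)

def hamming_list_distance(WH):
    """Squared list distance in Hamming units: d_HL = w W_H^{-1} w^T,
    where w is the diagonal of W_H."""
    w = np.diag(WH)
    return float(w @ np.linalg.solve(WH, w))

# Compare against the closed form 2*L*d/(L+1) for several (L, d) pairs.
for L in range(1, 6):
    for d in (4, 6, 8):
        assert np.isclose(hamming_list_distance(simplex_W(L, d)),
                          2 * L * d / (L + 1))
```

For L = 1 this reduces to d_HLmin = d_Hmin, the ordinary minimum-distance case.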
Minimum List Distance
Example: even d_Hmin, L = 2
  R_Lmin = (1/√3) d_Emin, d_ELmin = (2/√3) d_Emin
[Figure: equilateral triangle s_0, s_1, s_2 with side d_Emin and circumradius R_Lmin.]
Minimum List Distance
For odd d_Hmin: the worst-case list configuration yielding the minimum list distance
  d_HLmin = (2L/(L+1)) d_Hmin + (L−1)/(L+1)
consists of
  (L+1)/2 codewords of weight d_Hmin
  (L−1)/2 codewords of weight d_Hmin + 1
at pairwise distances d_Hmin and d_Hmin + 1. For example, for L = 5:
  W_H = [ d_Hmin          (d_Hmin−1)/2    (d_Hmin−1)/2    (d_Hmin+1)/2    (d_Hmin+1)/2 ]
        [ (d_Hmin−1)/2    d_Hmin          (d_Hmin−1)/2    (d_Hmin+1)/2    (d_Hmin+1)/2 ]
        [ (d_Hmin−1)/2    (d_Hmin−1)/2    d_Hmin          (d_Hmin+1)/2    (d_Hmin+1)/2 ]
        [ (d_Hmin+1)/2    (d_Hmin+1)/2    (d_Hmin+1)/2    d_Hmin+1        (d_Hmin+1)/2 ]
        [ (d_Hmin+1)/2    (d_Hmin+1)/2    (d_Hmin+1)/2    (d_Hmin+1)/2    d_Hmin+1     ]
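The odd-d_Hmin worst case can be verified the same way: construct W_H with the block structure above (the generator below is my own reconstruction of that structure) and compare w W_H^{−1} w^T with the closed form (2L d_Hmin + L − 1)/(L + 1).

```python
import numpy as np

def odd_worst_case_W(L, d):
    """Worst-case W_H for odd d_Hmin = d and odd L: (L+1)/2 codewords of
    weight d and (L-1)/2 codewords of weight d+1, at pairwise distances
    d and d+1."""
    a = (L + 1) // 2                        # number of weight-d codewords
    W = np.full((L, L), (d + 1) / 2.0)      # generic off-diagonal entry
    W[:a, :a] = (d - 1) / 2.0               # within the weight-d block
    np.fill_diagonal(W, [d] * a + [d + 1] * (L - a))
    return W

def hamming_list_distance(WH):
    w = np.diag(WH)
    return float(w @ np.linalg.solve(WH, w))

# Compare against (2*L*d + L - 1)/(L + 1) for odd d and odd L.
for L in (1, 3, 5):
    for d in (3, 5, 7):
        assert np.isclose(hamming_list_distance(odd_worst_case_W(L, d)),
                          (2 * L * d + L - 1) / (L + 1))
```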
List Error Probability for a Worst-Case List
The worst-case list configuration determines the performance of the list decoder at high SNR. Therefore, we want to estimate the list error probability Pr(t ≥ w/2) for the worst-case list configuration.
Problem: Finding Pr(t ≥ w/2) involves L-dimensional integration over the PDF of t.
Solution: Consider instead the orthogonalized vector q = tV, where V is the eigenvector matrix of W. The components of q are uncorrelated, and its PDF factors into a product of L one-dimensional PDFs. By estimating the integration limits for q we obtain an upper bound on Pr(t ≥ w/2).
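The decorrelation step can be illustrated with a quick Monte Carlo sketch (W, σ and the sample size are arbitrary choices, not values from the talk): rotating t by the eigenvector matrix V of W yields components with diagonal covariance σ² diag(λ_1, ..., λ_L).

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative positive-definite W and noise level (arbitrary values).
W = np.array([[4.0, 1.0], [1.0, 2.0]])
sigma = 1.0

# t ~ N(0, sigma^2 W) has correlated components. Rotating by the eigenvector
# matrix V of W gives q = t V with covariance sigma^2 * diag(eigvals).
eigvals, V = np.linalg.eigh(W)
t = rng.multivariate_normal(np.zeros(2), sigma**2 * W, size=200_000)
q = t @ V
cov_q = np.cov(q, rowvar=False)  # off-diagonal entries are ~0
```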
List Error Probability for a Worst-Case List
For even d_Hmin:
Lemma 1: The list error probability Pr(t ≥ w/2), for a worst-case list configuration of a code with even d_Hmin, is upper-bounded by
  Pr(t ≥ w/2) ≤ ∫_{α_L/σ_1}^{∞} f(y) ∏_{l=2}^{L} ( ∫_{v_l(y)}^{u_l(y)} f(x) dx ) dy
with equality for L ≤ 2. Here f(x) and f(y) denote the Gaussian N(0, 1) PDF. The integration intervals are determined by the eigenvalues of W.
For odd d_Hmin:
Lemma 2: The list error probability Pr(t ≥ w/2), for a worst-case list configuration of a code with odd d_Hmin, is upper-bounded by
  Pr(t ≥ w/2) ≤ ∫_{φ(α,η)/σ_L}^{∞} f(y_1) ( ∫_{g(y_1)}^{h(y_1)} f(y_2) dy_2 ) ∏_{l=1}^{(L−1)/2} ( ∫_{v_l(y_1)}^{u_l(y_1)} f(x) dx ) ∏_{l=(L+1)/2}^{L} ( ∫_{w_l(y_1)}^{z_l(y_1)} f(x) dx ) dy_1
Generalized Tangential Bound on the List Decoding Error Probability
We want to improve the union bound on the list error probability P_eL for binary codes.
Tangential-bound approach: decompose the noise vector n into
  one radial component x, along the transmitted signal s_0: x = ⟨n, s_0⟩
  L components y_l orthogonal to the radial component: y_l = ⟨n, s_l⟩ − (1 − 2d_{H0l}/N) x, l = 1, 2, ..., L
The list decoding error probability P_eL = Pr(ε) is upper-bounded by splitting the radial noise into a few-errors region and a many-errors region:
  P_eL = Pr(ε, x ≤ T) + Pr(ε, x > T) ≤ ∫_{−∞}^{T} Pr(ε | x) f(x) dx + Pr(x > T)
This yields
  P_eL ≤ min_T { ∫_{−∞}^{T} min[ 1, Σ_{K_y} N(K_y) P_eL(K_y, x) ] f(x) dx + Q(T) }
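The structure of this bound is easy to prototype numerically. The sketch below is mine, not the talk's: it assumes a standard tangential conditional pairwise term for BPSK over AWGN, Pr(ε_d | x) = Q( √(d/(1−d/N)) (√E_s − x/√N)/σ ), clips the conditional union bound at 1, and adds the radial tail Q(T/σ). The spectrum terms used are the two lowest-weight terms of the (24, 12, 8) Golay code (A_8 = 759, A_12 = 2576).

```python
import math

def q_func(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def tangential_bound(spectrum, N, es, sigma, T, steps=4000):
    """Tangential-style bound sketch:
    P_e <= int_{-inf}^{T} min(1, sum_d A_d Pr(eps_d | x)) f(x) dx + Q(T/sigma),
    with the assumed conditional pairwise term
    Pr(eps_d | x) = Q( sqrt(d/(1 - d/N)) * (sqrt(es) - x/sqrt(N)) / sigma )."""
    a = -10.0 * sigma                  # practical stand-in for -infinity
    h = (T - a) / steps
    total = 0.0
    for i in range(steps):
        x = a + (i + 0.5) * h          # midpoint Riemann sum
        inner = sum(A * q_func(math.sqrt(d / (1.0 - d / N))
                               * (math.sqrt(es) - x / math.sqrt(N)) / sigma)
                    for d, A in spectrum.items())
        fx = math.exp(-x * x / (2.0 * sigma**2)) / (sigma * math.sqrt(2.0 * math.pi))
        total += min(1.0, inner) * fx * h
    return total + q_func(T / sigma)

# Two lowest-weight terms of the (24, 12, 8) Golay code spectrum.
spectrum = {8: 759, 12: 2576}
bound = min(tangential_bound(spectrum, N=24, es=1.0, sigma=0.7, T=t)
            for t in (0.5, 1.0, 1.5, 2.0, 2.5))
```

The outer minimization over T mirrors the min_T in the bound; in practice T would be optimized continuously.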
Generalized Tangential Bound on the List Decoding Error Probability
New generalized tangential union bound on the list decoding error probability:
  P_eL ≤ min_T { ∫_{−∞}^{T} min[ 1, N(K_y)P_eL(K_y, x) + Σ_W N(W) P_eL(d_HL(W), x) ] f(x) dx + Q(T) }
The dominant term is estimated using the upper bound on the worst-case-list error probability. The remaining terms are upper-bounded using only the list distance d_HL(W), which is equivalent to replacing each codeword set with list configuration matrix W by a single average codeword s̄ at distance d_HL(W) from the transmitted codeword.
[Figure: list sphere of radius R_L around s_0, with points s_1, s_2 and the average codeword s̄ at distance d_HL.]
Generalized Tangential Bound on the List Decoding Error Probability
Comparison of bounds for the (24, 12, 8) Golay code with list size L = 2.
[Figure: P_eL versus E_b/N_0 [dB] from 0 to 6 dB, with P_eL from 10^0 down to 10^−6; curves: simulations, union bound, tangential bound, new bound.]
Summary
- The list configuration matrix describes the list geometry and the list error probability
- The list radius and list distance are defined via the list configuration matrix
- The minimum list distance is determined by the worst-case list configuration
- An upper bound on the list error probability for worst-case lists is derived
- A new generalized tangential union bound on the list decoding error probability is derived