List Decoding: Geometrical Aspects and Performance Bounds


List Decoding: Geometrical Aspects and Performance Bounds
Maja Lončar, Department of Information Technology, Lund University, Sweden
Summer Academy: Progress in Mathematics for Communication Systems, Bremen, Germany, July 2007

Outline
- Maximum-likelihood decoding
- List decoding
- List configuration matrix
- List distance
- List error probability for a worst-case list
- New bound for the list decoding error probability
- Summary

Maximum-Likelihood Decoding

Let $S = \{s_0, s_1, \ldots, s_{M-1}\}$ be a set of $M$ distinct signal vectors used to communicate over the AWGN channel. Let $s \in S$ denote the transmit signal. Then the received signal is $r = s + n$.

Optimum decision strategy at the receiver: minimize the error probability

$$\hat{s}_{\mathrm{MAP}} = \arg\min_{\hat{s} \in S} \Pr(\hat{s} \neq s) = \arg\max_{\hat{s} \in S} \Pr(\hat{s} = s) = \arg\max_{\hat{s} \in S} \sum_r \Pr(\hat{s} = s \mid r)\, p(r)$$

that is, for every $r$ decide in favour of the most probable signal $s \in S$:

$$\hat{s}_{\mathrm{MAP}} = \arg\max_{s \in S} \{p(s \mid r)\}$$

This is the maximum a posteriori probability (MAP) decoder. From Bayes' formula we have

$$\hat{s}_{\mathrm{MAP}} = \arg\max_{s \in S} \left\{ \frac{p(r \mid s)\, p(s)}{p(r)} \right\} = \arg\max_{s \in S} \{ p(r \mid s)\, p(s) \}$$

Maximum-Likelihood Decoding

If the signals $s \in S$ are a priori equiprobable, $p(s) = 1/M$, the MAP decoder reduces to

$$\hat{s}_{\mathrm{ML}} = \arg\max_{s \in S} \{p(r \mid s)\}$$

This is the maximum-likelihood (ML) decoder. For the AWGN channel

$$p(r \mid s) \propto e^{-\|r - s\|^2 / N_0} \quad\Longrightarrow\quad \hat{s}_{\mathrm{ML}} = \arg\min_{s \in S} \{\|r - s\|^2\}$$

The ML decoder is the minimum-Euclidean-distance decoder: it chooses the signal point with the smallest $d_E(r, s) = \|r - s\|$.

A decoding error occurs if $d_E(r, s_i) \le d_E(r, s_0)$ for some $i$, where $s_0$ is the transmitted signal. Let $\varepsilon_{i|s_0}$ denote this event. Union bound on the ML error probability:

$$P_{e|s_0} \le \sum_{i=1}^{M-1} \Pr(\varepsilon_{i|s_0})$$

[Figure: received vector $r$ and signal points $s_0, s_1, s_2, s_3$ with the distances $d_E(r, s_i)$.]
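The minimum-distance rule translates directly into a nearest-neighbour search. A minimal numpy sketch; the 4-point signal set and the noise level are illustrative assumptions, not taken from the slides:

```python
import numpy as np

def ml_decode(r, signals):
    """ML decision for the AWGN channel: index of the signal closest to r."""
    d = np.linalg.norm(signals - r, axis=1)  # d_E(r, s_i) for every s_i
    return int(np.argmin(d))

# Illustrative 4-point signal set; transmit s_0 and add white Gaussian noise
rng = np.random.default_rng(1)
S = np.array([[0.0, 0.0], [0.0, 2.0], [1.0, 3.0], [3.0, 1.0]])
r = S[0] + 0.5 * rng.standard_normal(2)
print("ML decision:", ml_decode(r, S))  # 0, unless the noise is large
```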

List Decoding

Generalization of ML decoding to the $L \ge 1$ most likely codewords: the list decoder finds a list of the $L$ best codewords. For the Gaussian channel, these are the $L$ codewords $s_i$ closest to the received vector $r$.

A list decoding error occurs if the transmitted signal $s_0$ is not on the list, that is, if

$$d_E(r, s_i) \le d_E(r, s_0), \quad i = 1, 2, \ldots, L$$

or, equivalently, if the projection of the noise $n$ along each direction $(s_i - s_0)$ is larger than $d_E(s_i, s_0)/2$; scaling by $d_{E0i} = d_E(s_i, s_0)$,

$$\underbrace{\langle n,\, s_i - s_0 \rangle}_{t_i} \ge \underbrace{d_E^2(s_i, s_0)/2}_{d_{E0i}^2/2}, \quad i = 1, 2, \ldots, L$$

that is, with $t = (t_1\ t_2\ \ldots\ t_L)$ and $w = (d_{E01}^2\ d_{E02}^2\ \ldots\ d_{E0L}^2)$,

$$(t_1\ t_2\ \ldots\ t_L) \ge (d_{E01}^2\ d_{E02}^2\ \ldots\ d_{E0L}^2)/2 \quad\Longleftrightarrow\quad t \ge w/2$$

[Figure: received vector $r$ and signal points $s_0, s_1, s_2, s_3$ with the distances $d_E(r, s_i)$.]
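The list decoder is the same search, keeping the $L$ smallest distances. A short sketch continuing the setup above (the function names are ours):

```python
import numpy as np

def list_decode(r, signals, L):
    """List decoder for the Gaussian channel: indices of the L signals closest to r."""
    d = np.linalg.norm(signals - r, axis=1)
    return np.argsort(d)[:L]  # L smallest d_E(r, s_i)

def list_error(r, signals, L, tx=0):
    """List error event: the transmitted signal (index tx) is not on the list."""
    return tx not in list_decode(r, signals, L)
```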

List Configuration Matrix

The components of $t$, $t_i = \langle n, s_i - s_0 \rangle$, are Gaussian distributed with covariance matrix

$$E[t^T t] = \sigma^2 W$$

where $W$ is the Gram matrix of the vectors $(s_i - s_0)$, $i = 1, 2, \ldots, L$, with elements

$$w_{ij} = \langle s_i - s_0,\, s_j - s_0 \rangle = (d_{E0i}^2 + d_{E0j}^2 - d_{Eij}^2)/2$$

$$W = \begin{pmatrix}
d_{E01}^2 & (d_{E01}^2 + d_{E02}^2 - d_{E12}^2)/2 & \ldots & (d_{E01}^2 + d_{E0L}^2 - d_{E1L}^2)/2 \\
(d_{E01}^2 + d_{E02}^2 - d_{E12}^2)/2 & d_{E02}^2 & \ldots & (d_{E02}^2 + d_{E0L}^2 - d_{E2L}^2)/2 \\
\vdots & \vdots & \ddots & \vdots \\
(d_{E01}^2 + d_{E0L}^2 - d_{E1L}^2)/2 & (d_{E02}^2 + d_{E0L}^2 - d_{E2L}^2)/2 & \ldots & d_{E0L}^2
\end{pmatrix}$$

$W$ is the list configuration matrix. It determines the list error probability for a given list:

$$P_{eL}(W) = \Pr(t \ge w/2)$$

Union-type bound for the list decoding error probability, with $N(W)$ the number of lists having configuration matrix $W$:

$$P_{eL} \le \sum_W N(W)\, P_{eL}(W)$$
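The Gram-matrix definition of $W$ and the identity $w_{ij} = (d_{E0i}^2 + d_{E0j}^2 - d_{Eij}^2)/2$ are easy to check numerically; a sketch using the $L = 2$ example of the next slides:

```python
import numpy as np

def list_configuration_matrix(s0, others):
    """Gram matrix W of the difference vectors s_i - s_0."""
    D = np.asarray(others, dtype=float) - np.asarray(s0, dtype=float)
    return D @ D.T

s0, s1, s2 = np.array([0, 0]), np.array([0, 2]), np.array([1, 3])
W = list_configuration_matrix(s0, [s1, s2])
d2 = lambda a, b: float(np.sum((a - b) ** 2))      # squared Euclidean distance
w12 = (d2(s0, s1) + d2(s0, s2) - d2(s1, s2)) / 2   # distance-based formula
print(W)                 # [[ 4.  6.]
                         #  [ 6. 10.]]
print(W[0, 1] == w12)    # True
```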

List Distance

The list radius $R_L$ for a given signal set $S_L = \{s_0, s_1, \ldots, s_L\}$ and a given reference (transmit) signal $s_0$ is the distance from $s_0$ to the closest point of the list-error region. It is the radius of the smallest sphere $\mathcal{S}$ that contains (encompasses) all the signal points of $S_L$ (points on or inside the sphere). The Euclidean list distance for a given signal set $S_L$ is $d_{EL} = 2R_L$.

Theorem 1: The radius $R_L$ of the sphere $\mathcal{S}$ that passes through all the points of $S_L$ (all points on the sphere) is given by

$$R_L(W) = \frac{1}{2}\sqrt{w W^{-1} w^T}$$

Theorem 2: The list radius $R_L$ of the smallest sphere $\mathcal{S}$ that passes through $s_0$ and encompasses the points $s_i$, $i = 1, 2, \ldots, L$, is given by

$$R_L(W) = \max_{I:\; w_I \mathrm{adj}(W_I) \ge 0} \left\{ \frac{1}{2}\sqrt{w_I W_I^{-1} w_I^T} \right\}$$

where $w_I$ and $W_I$ are the restrictions of $w$ and $W$ to the index subset $I$.

List Distance

Example: $L = 2$
$s_0 = (0\ 0)$, $s_1 = (0\ 2)$, $s_2 = (1\ 3)$

$$W = \begin{pmatrix} 4 & 6 \\ 6 & 10 \end{pmatrix}$$

Circumsphere (Theorem 1): $R_L = \frac{1}{2}\sqrt{w W^{-1} w^T} = \sqrt{5}$

But $w\,\mathrm{adj}(W) = (-20\ \ 16)$ has a negative component, so the circumsphere is not the smallest encompassing sphere through $s_0$. Theorem 2 (with $I = \{2\}$) gives

$$R_L = \sqrt{10}/2, \qquad d_{EL} = 2R_L = \sqrt{10}$$

[Figure: the three points, the list-error region, and the sphere of radius $R_L$.]

List Distance

Example: $L = 2$, same points with a different reference signal:
$s_1 = (0\ 0)$, $s_0 = (0\ 2)$, $s_2 = (1\ 3)$

$$W = \begin{pmatrix} 4 & -2 \\ -2 & 2 \end{pmatrix}$$

$$R_L = \frac{1}{2}\sqrt{w W^{-1} w^T} = \sqrt{5}$$

Here $w\,\mathrm{adj}(W) = (12\ \ 16) \ge 0$, so the circumsphere is the smallest encompassing sphere:

$$R_L = \sqrt{5}, \qquad d_{EL} = 2R_L = \sqrt{20}$$

[Figure: the three points and the sphere of radius $R_L$ through all of them.]
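Theorems 1 and 2 can be checked numerically by brute force over index subsets (fine for small $L$; note that $w$ is simply the diagonal of $W$, since $w_i = d_{E0i}^2 = w_{ii}$). A sketch reproducing both examples:

```python
import numpy as np
from itertools import combinations

def list_radius(W):
    """List radius per Theorem 2: maximize (1/2)sqrt(w_I W_I^{-1} w_I^T)
    over index subsets I with w_I adj(W_I) >= 0 componentwise."""
    W = np.asarray(W, dtype=float)
    w = np.diag(W)
    best = 0.0
    for k in range(1, W.shape[0] + 1):
        for I in combinations(range(W.shape[0]), k):
            WI = W[np.ix_(I, I)]
            wI = w[list(I)]
            adj = np.linalg.det(WI) * np.linalg.inv(WI)  # adjugate of W_I
            if np.all(wI @ adj >= -1e-9):
                best = max(best, 0.5 * np.sqrt(wI @ np.linalg.inv(WI) @ wI))
    return best

print(list_radius([[4, 6], [6, 10]]))   # 1.581... = sqrt(10)/2  (first example)
print(list_radius([[4, -2], [-2, 2]]))  # 2.236... = sqrt(5)     (second example)
```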

Minimum List Distance

The minimum list radius $R_{L\min}$ for a list size $L$ is

$$R_{L\min} = \min_W \{R_L(W)\} = \min_W \max_{I:\; w_I \mathrm{adj}(W_I) \ge 0} \left\{ \frac{1}{2}\sqrt{w_I W_I^{-1} w_I^T} \right\}$$

The minimum Euclidean list distance for a list size $L$ is $d_{EL\min} = 2R_{L\min}$.

For binary linear codes and BPSK signaling, $d_{Eij}^2 = 4E_s d_{Hij}$. The minimum Hamming list distance of a code, for list size $L$, is $d_{HL\min} = d_{EL\min}^2/(4E_s)$.

The minimum list distance determines the performance of the list decoder at higher SNR, in the same way as the minimum distance determines the performance of the ML decoder.

Theorem 3: For any binary linear code, the minimum Hamming list distance satisfies

$$d_{HL\min} \ge \frac{2L}{L+1}\, d_{H\min}$$

For any binary linear code with odd $d_{H\min}$, we have

$$d_{HL\min} \ge \frac{2L}{L+1}\, d_{H\min} + \frac{L-1}{L+1}$$

Minimum List Distance

For even $d_{H\min}$: the worst-case list configuration, yielding the minimum list distance

$$d_{HL\min} = \frac{2L}{L+1}\, d_{H\min},$$

consists of $L$ codewords of weight $d_{H\min}$ at pairwise distances $d_{H\min}$:

$$W_H = \begin{pmatrix}
d_{H\min} & d_{H\min}/2 & \ldots & d_{H\min}/2 \\
d_{H\min}/2 & d_{H\min} & \ldots & d_{H\min}/2 \\
\vdots & \vdots & \ddots & \vdots \\
d_{H\min}/2 & d_{H\min}/2 & \ldots & d_{H\min}
\end{pmatrix}$$

The codewords form an $L$-dimensional simplex.

Minimum List Distance

Example: even $d_{H\min}$, $L = 2$:

$$R_{L\min} = \frac{1}{\sqrt{3}}\, d_{E\min}, \qquad d_{EL\min} = \frac{2}{\sqrt{3}}\, d_{E\min}$$

[Figure: $s_0$, $s_1$, $s_2$ at pairwise distance $d_{E\min}$ and the sphere of radius $R_{L\min}$.]
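Numerically, the simplex configuration indeed attains $2L\, d_{H\min}/(L+1)$. A sketch using $d_{HL}(W_H) = w W_H^{-1} w^T$, which follows from $d_{EL} = 2R_L$ and $d_{Eij}^2 = 4E_s d_{Hij}$; the values of $L$ and $d_{H\min}$ below are arbitrary test choices:

```python
import numpy as np

def d_HL(W):
    """Hamming list distance of a configuration W_H: w W_H^{-1} w^T, w = diag(W_H)."""
    W = np.asarray(W, dtype=float)
    w = np.diag(W)
    return w @ np.linalg.solve(W, w)

L, d = 4, 8                                  # test choices; d_Hmin even
W = (d / 2) * (np.eye(L) + np.ones((L, L)))  # simplex: d on diagonal, d/2 off
print(d_HL(W), 2 * L * d / (L + 1))          # both 12.8 = 2L d_Hmin/(L+1)
```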

Minimum List Distance

For odd $d_{H\min}$: the worst-case list configuration, yielding the minimum list distance

$$d_{HL\min} = \frac{2L}{L+1}\, d_{H\min} + \frac{L-1}{L+1},$$

consists of $(L+1)/2$ codewords of weight $d_{H\min}$ and $(L-1)/2$ codewords of weight $d_{H\min}+1$, at pairwise distances $d_{H\min}$ and $d_{H\min}+1$. For example, for $L = 5$:

$$W_H = \begin{pmatrix}
d_{H\min} & \frac{d_{H\min}-1}{2} & \frac{d_{H\min}-1}{2} & \frac{d_{H\min}+1}{2} & \frac{d_{H\min}+1}{2} \\
\frac{d_{H\min}-1}{2} & d_{H\min} & \frac{d_{H\min}-1}{2} & \frac{d_{H\min}+1}{2} & \frac{d_{H\min}+1}{2} \\
\frac{d_{H\min}-1}{2} & \frac{d_{H\min}-1}{2} & d_{H\min} & \frac{d_{H\min}+1}{2} & \frac{d_{H\min}+1}{2} \\
\frac{d_{H\min}+1}{2} & \frac{d_{H\min}+1}{2} & \frac{d_{H\min}+1}{2} & d_{H\min}+1 & \frac{d_{H\min}+1}{2} \\
\frac{d_{H\min}+1}{2} & \frac{d_{H\min}+1}{2} & \frac{d_{H\min}+1}{2} & \frac{d_{H\min}+1}{2} & d_{H\min}+1
\end{pmatrix}$$
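The odd-$d_{H\min}$ configuration can be checked the same way; a sketch for $L = 5$ with an arbitrary odd test value $d_{H\min} = 7$:

```python
import numpy as np

d, L = 7, 5                       # d_Hmin odd (test value), list size 5
W = np.full((L, L), (d + 1) / 2)  # (d+1)/2 between and within the weight-(d+1) group
W[:3, :3] = (d - 1) / 2           # (d-1)/2 within the weight-d group
np.fill_diagonal(W, [d, d, d, d + 1, d + 1])
w = np.diag(W).copy()
print(w @ np.linalg.solve(W, w))                 # 12.333... = 37/3
print(2 * L * d / (L + 1) + (L - 1) / (L + 1))   # 12.333..., Theorem 3 with equality
```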

List Error Probability for Worst-Case List

The worst-case list configuration determines the performance of the list decoder at high SNR. Therefore, we want to estimate the list error probability $\Pr(t \ge w/2)$ for the worst-case list configuration.

Problem: finding $\Pr(t \ge w/2)$ involves $L$-dimensional integration over the PDF of $t$.

Solution: consider instead the orthogonalized vector $q = tV$. The components of $q$ are uncorrelated, and its PDF breaks up into a product of $L$ one-dimensional PDFs. By estimating the integration limits for $q$ we obtain an upper bound on $\Pr(t \ge w/2)$.
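Before orthogonalizing, $\Pr(t \ge w/2)$ can also be estimated by brute force, sampling $t \sim \mathcal{N}(0, \sigma^2 W)$ directly. A Monte Carlo sketch (the sample count and $\sigma^2$ are arbitrary choices) that closed-form bounds can be compared against:

```python
import numpy as np

def pel_monte_carlo(W, sigma2, trials=200_000, seed=0):
    """Estimate P_eL(W) = Pr(t >= w/2) for t ~ N(0, sigma2*W), w = diag(W)."""
    W = np.asarray(W, dtype=float)
    rng = np.random.default_rng(seed)
    C = np.linalg.cholesky(sigma2 * W)
    t = rng.standard_normal((trials, W.shape[0])) @ C.T  # correlated samples of t
    return float(np.mean(np.all(t >= np.diag(W) / 2, axis=1)))

print(pel_monte_carlo([[4.0, 6.0], [6.0, 10.0]], sigma2=1.0))
```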

List Error Probability for Worst-Case List

For even $d_{H\min}$:

Lemma 1: The list-error probability $\Pr(t \ge w/2)$, for a worst-case list configuration, for a code with even $d_{H\min}$ is upper-bounded by

$$\Pr(t \ge w/2) \le \int_{\alpha_L/(2\sigma_1)}^{\infty} f(y) \prod_{l=2}^{L} \left( \int_{v_l(y)}^{u_l(y)} f(x)\, dx \right) dy$$

with equality for $L \le 2$. Here $f(x)$ and $f(y)$ denote Gaussian $\mathcal{N}(0,1)$ PDFs, and the integration intervals are determined by the eigenvalues of $W$.

For odd $d_{H\min}$:

Lemma 2: The list-error probability $\Pr(t \ge w/2)$, for a worst-case list configuration, for a code with odd $d_{H\min}$ is upper-bounded by

$$\Pr(t \ge w/2) \le \int_{\varphi(\alpha,\eta)/(2\sigma_L)}^{\infty} f(y_1) \int_{g(y_1)}^{h(y_1)} f(y_2)\, dy_2 \prod_{l=1}^{(L-1)/2} \int_{v_l(y_1)}^{u_l(y_1)} f(x)\, dx \prod_{l=(L+1)/2}^{L} \int_{w_l(y_1)}^{z_l(y_1)} f(x)\, dx\, dy_1$$

Generalized Tangential Bound on List Decoding Error Probability

We want to improve the union bound on the list-error probability $P_{eL}$ for binary codes. Tangential-bound approach: decompose the noise vector $n$ into

- one radial component $x$, along the transmitted signal $s_0$: $x = \langle n, s_0/\|s_0\| \rangle$
- $L$ components $y_l$ orthogonal to the radial component: $y_l = \langle n, s_l/\|s_l\| \rangle - (1 - 2d_{H0l}/N)x$, $l = 1, 2, \ldots, L$

The list decoding error probability $P_{eL} = \Pr(\varepsilon)$ is upper-bounded by splitting the error event into "few errors" ($x \le T$) and "many errors" ($x > T$):

$$P_{eL} = \Pr(\varepsilon, x \le T) + \Pr(\varepsilon, x > T) \le \int_{-\infty}^{T} \Pr(\varepsilon \mid x) f(x)\, dx + \Pr(x > T)$$

This yields

$$P_{eL} \le \min_T \left\{ \int_{-\infty}^{T} \min\left\{1,\; \sum_{K_y} N(K_y)\, P_{eL}(K_y, x)\right\} f(x)\, dx + Q(T) \right\}$$

Generalized Tangential Bound on List Decoding Error Probability

New generalized tangential union bound for the list decoding error probability:

$$P_{eL} \le \min_T \left\{ \int_{-\infty}^{T} \min\left\{1,\; N(K_y)\, P_{eL}(K_y, x) + \sum_W N(W)\, P_{eL}(d_{HL}(W), x)\right\} f(x)\, dx + Q(T) \right\}$$

The dominant term is estimated using the upper bound on the worst-case-list error probability. The remaining terms are upper-bounded using only the list distance $d_{HL}(W)$, which is equivalent to replacing the codeword set with list configuration matrix $W$ by an average codeword $\bar{s}$ at distance $d_{HL}(W)$ from the transmitted codeword.

[Figure: spheres around $s_0$ illustrating the replacement of a list by an average codeword at distance $d_{HL}(W)$.]
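The outer structure of the bound, a minimization over $T$ of a clipped conditional term plus $Q(T)$, can be evaluated by simple quadrature once the conditional term is available. A generic sketch, where g(x) stands in for the bracketed sum over $K_y$ and $W$; the example g and its constants are placeholders, not the slides' actual terms:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def tangential_bound(g, T_grid):
    """min over T of  int_{-inf}^{T} min(1, g(x)) f(x) dx + Q(T),
    with f the N(0,1) PDF and Q(T) = norm.sf(T)."""
    def bound(T):
        head, _ = quad(lambda x: min(1.0, g(x)) * norm.pdf(x), -np.inf, T)
        return head + norm.sf(T)
    return min(bound(T) for T in T_grid)

# Placeholder conditional term: one union-bound-style Q-function, made-up numbers
g = lambda x: 50.0 * norm.sf(4.0 - 0.5 * x)
print(tangential_bound(g, np.linspace(0.0, 6.0, 25)))
```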

Generalized Tangential Bound on List Decoding Error Probability

Comparison of bounds for the (24, 12, 8) Golay code with list size L = 2:

[Figure: $P_{eL}$ versus $E_b/N_0$ [dB] from 0 to 6, with $P_{eL}$ from $10^0$ down to $10^{-6}$; curves: simulations, union bound, tangential bound, new bound.]

Summary
- List configuration matrix describes the list geometry and list error probability
- List radius and list distance are defined via the list configuration matrix
- Minimum list distance is determined by the worst-case list configuration
- Upper bound on the list error probability for the worst-case lists is derived
- New generalized tangential union bound for the list decoding error probability is derived