An improved bound on the list error probability and list distance properties. Bocharova, Irina; Kudryashov, Boris; Johannesson, Rolf; Loncar, Maja


An improved bound on the list error probability and list distance properties. Bocharova, Irina; Kudryashov, Boris; Johannesson, Rolf; Loncar, Maja. Published in: IEEE Transactions on Information Theory. DOI: 009/TIT. Citation for published version (APA): Bocharova, I., Kudryashov, B., Johannesson, R., & Loncar, M. (2008). An improved bound on the list error probability and list distance properties. IEEE Transactions on Information Theory.

An Improved Bound on the List Error Probability and List Distance Properties. Irina E. Bocharova, Rolf Johannesson, Fellow, IEEE, Boris D. Kudryashov, and Maja Lončar, Student Member, IEEE. Abstract: List decoding of binary block codes for the additive white Gaussian noise channel is considered. The output of a list decoder is a list of the L most likely codewords, that is, the L signal points closest to the received signal in the Euclidean-metric sense. A decoding error occurs when the transmitted codeword is not on this list. It is shown that the list error probability is fully described by the so-called list configuration matrix, which is the Gram matrix obtained from the signal vectors forming the list. The worst-case list configuration matrix determines the minimum list distance of the code, which is a generalization of the minimum distance to the case of list decoding. Some properties of the list configuration matrix are studied and their connections to the list distance are established. These results are further exploited to obtain a new upper bound on the list error probability, which is tighter than the previously known bounds. This bound is derived by combining the techniques for obtaining the tangential union bound with an improved bound on the error probability for a given list. The results are illustrated by examples. Index Terms: List configuration matrix, list decoding, list distance, list error probability, tangential union bound. I. INTRODUCTION. The optimal decoding method that minimizes the sequence error probability at the receiver is maximum a posteriori probability (MAP) sequence decoding, which reduces to maximum-likelihood (ML) decoding when all the sequences (codewords) are a priori equiprobable. When signalling over the additive white Gaussian noise (AWGN) channel, ML decoding is equivalent to finding the codeword with the smallest Euclidean distance from the received sequence. List decoding, introduced in [1], is a generalization of ML decoding: a list decoder is
not restricted to find a single estimate of the transmitted codeword but delivers a list of most likely codewords, closest to the received word in terms of a given metric. Decoding is considered successful if the transmitted codeword is included in the list. List decoding has found applications in concatenated coding schemes, often used in combination with automatic-repeat-request (ARQ) strategies. (This research was supported in part by the Royal Swedish Academy of Sciences in cooperation with the Russian Academy of Sciences and in part by the Swedish Research Council. The material in this paper was presented in part at the IEEE International Symposium on Information Theory, Seattle, USA, July 2006. I. E. Bocharova and B. D. Kudryashov are with the Department of Information Systems, St. Petersburg University of Aerospace Instrumentation, St. Petersburg, Russia (irina@it.lth.se; boris@it.lth.se). R. Johannesson and M. Lončar are with the Department of Electrical and Information Technology, Lund University, Lund, Sweden (rolf@eit.lth.se; maja@eit.lth.se).) The outer error detection code, such as the cyclic redundancy check (CRC) code, is combined with an inner error-correcting code. At the receiver, rather than using the ML decoder to decode the inner code, a list decoder may be employed to find a list of the most probable sequences, which are subsequently checked by the outer CRC decoder. Only if none of the sequences on the list satisfies the CRC parity constraints, a retransmission is requested via a feedback channel. This scenario was investigated in [2], where the list-Viterbi algorithm was developed for list decoding of convolutional codes. It was shown in [2] that already moderate list sizes provide significantly lower error probability than decoding with list size equal to one. Similar applications of list decoding for speech recognition, where the outer CRC code is replaced by a language processor, were investigated in [3], where the search for the list of sequences
is performed with the tree-trellis algorithm (TTA), cf. also [4]. Since the introduction of turbo codes [5] more than a decade ago, iterative decoding and detection algorithms have received much attention. Iterative (turbo) schemes bypass the prohibitively complex optimal decoding of the overall concatenated code by employing simpler constituent decoders as separate entities which iteratively exchange soft information on decoded symbols. Constituent soft-input soft-output (SISO) decoders can be realized with the BCJR algorithm [6]; however, its complexity becomes prohibitively high for codes with a large trellis state space. This is typically the case when the constituent codes are block codes, as in product codes. In this context, list-based SISO decoders have recently been proposed as a low-complexity alternative to BCJR decoding, cf. [7], [8], and the references therein. These decoding methods use a list of candidate codewords and their metrics to compute approximate symbol reliabilities. In [8], the list is obtained by the bidirectional efficient algorithm for searching code trees (BEAST), and it was demonstrated that a list of only a few most probable codewords suffices for accurate estimation of symbol reliabilities. More generally, the turbo receiver principle is applicable to many communication systems that can be represented as concatenated coding schemes, where the inner code is, for example, realized by a modulator followed by the intersymbol interference (ISI) channel, or by a space-time code for multiple-antenna transmission. List-based inner SISO detectors that form the symbol reliability estimates from a list of signal vectors were proposed, e.g., in [9] for MIMO transmission and in [10] for ISI equalization. Although different applications of list decoding have been considered in a large number of papers, only a few papers were devoted to estimating the list error probability. Since

exact expressions for the error probability are most often not analytically tractable, tight bounds are useful tools for estimating the performance of the system and for identifying the parameters that dominate its behavior. The earliest results regarding list decoding were obtained for code ensembles, using random coding arguments: bounds on the asymptotic rates of binary block codes used with list decoding over the binary symmetric channel were investigated in [11], [12], and later also in [13] (for more references to related works, see [13]). The asymptotic behavior of the list error probability was analyzed in [14] and [15], where bounds on the error exponents were obtained. More recently, asymptotic bounds on the code size and the error exponents for list decoding in Euclidean space were derived in [16]. When estimating the error rate performance of ML decoding (that is, list decoding with the list of size one) for a specific code used for communicating over the AWGN channel, the most commonly used upper bound is obtained by applying the Bonferroni-type inequality, which states that the probability of a union of events is not larger than the sum of the probabilities of the individual events. This yields the well-known union bound, which upper-bounds the error probability by the sum of pairwise error probabilities. This bound is simple to compute and requires only the knowledge of the code spectrum; however, it is tight only at high signal-to-noise ratio (SNR) levels, while at low and moderate SNRs it becomes too loose due to the fact that in the sum of pairwise error probabilities, the same error event may be counted many times. There have been several improvements during the past two decades that yield much tighter bounds than the union bound. These include the tangential bound (TB) [17], the sphere bound (SB) [18], [19], and the tangential-sphere bound (TSB) [20]. These bounds are based on the well-known bounding principle introduced by Fano [21] for random codes and adapted by Gallager [22] for
specific codes, where the received signal space is partitioned into two disjoint regions, R and its complement R^c, of few and many errors, respectively. The error probability Pr(ε) is thus split into the sum of two error probabilities, for the received signal r residing inside and outside R, that is, Pr(ε) = Pr(ε, r ∈ R) + Pr(ε, r ∈ R^c). The first term, referring to the region of few errors, can be upper-bounded using a union bound, while the second term is bounded simply by Pr(r ∈ R^c). We call this principle the Fano-Gallager bounding; it is also referred to as the Gallager-Fano bound [23] or Gallager's first bounding principle [24]. The TB for equi-energy signals (which all lie on a hypersphere) was derived by splitting the noise vector into radial and tangential components, which lie along and perpendicular to the transmitted signal, respectively. The few-error region R in the TB is a half-space where the magnitude of the radial noise is not larger than a certain threshold. The SB is obtained by considering a spherical region R, and, finally, in the TSB, which is tighter than the previous bounds, both approaches are combined and the region R is a circular cone with the axis passing through the transmitted signal point. (The probability of a union of events E_i, i = 1, 2, ..., M, is equal to Pr(∪_{i=1}^M E_i) = Σ_{i=1}^M Pr(E_i) − Σ_{i1<i2≤M} Pr(E_i1 ∩ E_i2) + ... + (−1)^{M+1} Pr(E_1 ∩ E_2 ∩ ... ∩ E_M). Truncating the right-hand side expression after the first term yields an upper bound referred to as the Bonferroni inequality of the first order, since it depends only on the probabilities of elementary events.) A detailed treatment and comparisons of various Fano-Gallager-type bounds can be found in [23] and [24]. Recently, two new bounds that improve upon the TSB have been proposed: the so-called added-hyperplane (AHP) bound [25] and the improved TSB (ITSB) [26]. Both bounds are obtained by upper-bounding the probability Pr(ε, r ∈ R) using a tighter, second-order Bonferroni-type inequality instead of the union bound used in the TSB. Generalization of
the bounds for ML decoding to list decoding is not straightforward. A list error event is defined with respect to a list of L codewords, which implies that the pairwise error events considered in ML decoding translate to (L+1)-wise list-error events. Geometrical properties of list configurations were investigated in [27], and used to derive a union bound on the list error probability. The notions of the Euclidean and Hamming list distances were introduced and it was shown that these distances are generalizations of the Euclidean and Hamming distances of the code. In this paper, we build upon the work of [27] and investigate the properties of the so-called list distance and its relations to the list configurations. Moreover, using the tangential-bound approach from [17], we improve the union bound of [27]. Similarly as in [8], we first derive a tighter bound on the error probability for a given list, and then obtain a new upper bound on the list error probability by combining this tighter bound with a modified tangential bound. II. GEOMETRICAL ASPECTS OF LIST DECODING. In this section the notions of the list distance and the list configuration matrix are introduced and their properties and relations are established. The results presented in the first two subsections have mostly appeared in [27]; however, we present them here in an extended form, supported by examples and a more detailed discussion. In the last subsection, the relation of the list distance and the average radius introduced in [] is discussed. This section serves as a basis for the derivation of the list-error probability bounds presented in Sections III and IV. A. List Decoding. Let S = {s_i}, i = 0, 1, ..., M − 1, be an arbitrary constellation of |S| = M equiprobable signal points s_i = (s_i^1 s_i^2 ... s_i^N) used to communicate over an additive white Gaussian noise (The probability of a union of events E_i, i = 1, 2, ..., M, can be expressed as Pr(∪_{i=1}^M E_i) = Pr(E_1) + Pr(E_2 ∩ E_1^c) + Pr(E_3 ∩ E_1^c ∩ E_2^c) + ... + Pr(E_M ∩ E_1^c ∩ E_2^c ∩ ... ∩ E_{M−1}^c), where E_i^c denotes the complement of E_i.)
From here follows an upper bound used in [25]: Pr(∪_{i=1}^M E_i) ≤ Pr(E_1) + Pr(E_2 ∩ E_{j_2}^c) + Pr(E_3 ∩ E_{j_3}^c) + ... + Pr(E_M ∩ E_{j_M}^c), where the tightness of the bound is determined by the ordering of the events and the choices of indices j_2 ∈ {1}, j_3 ∈ {1, 2}, ..., j_M ∈ {1, 2, ..., M − 1}. This bound is a Bonferroni inequality of the second order, since it involves pairwise joint event probabilities.
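The gap between the two truncations can be seen on a small discrete toy example; the probability space and events below are illustrative assumptions, not data from the paper.

```python
from fractions import Fraction

# Uniform probability space over 100 outcomes, three overlapping events.
omega = 100
E = [set(range(0, 40)), set(range(30, 70)), set(range(60, 100))]
pr = lambda A: Fraction(len(A), omega)

exact = pr(E[0] | E[1] | E[2])                 # exact union probability

# First-order Bonferroni (union bound): Pr(U E_i) <= sum_i Pr(E_i).
first_order = sum(pr(e) for e in E)

# Second-order Bonferroni with the simple index choice j_i = i - 1:
# Pr(U E_i) <= Pr(E_1) + sum_{i >= 2} Pr(E_i intersect E_{i-1}^c).
second_order = pr(E[0]) + sum(pr(E[i] - E[i - 1]) for i in range(1, len(E)))

assert exact <= second_order <= first_order
```

For these events the second-order bound is tight (it equals the exact probability 1), while the first-order union bound overshoots to 6/5.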

(AWGN) channel. Assume that the N-tuple s_0 was transmitted. The discrete-time received signal is r = s_0 + n (1) where the noise vector n consists of independent zero-mean Gaussian random variables with variance N_0/2. We say that an error for a given list L = {s_1, s_2, ..., s_L}, of size L < M, occurs if s_0 ∉ L, which implies that s_0 is farther from the received signal than all signals on the list, that is, d_E^2(r, s_l) ≤ d_E^2(r, s_0), l = 1, 2, ..., L (2) where d_E^2(r, s_l) = ‖r − s_l‖^2 is the squared Euclidean distance between the received vector r and the vector s_l from the list. This is slightly pessimistic, since it implies that when (2) is fulfilled with equality, we always include the erroneous N-tuple in the list. By substituting (1) into (2) we obtain ‖n + s_0 − s_l‖^2 ≤ ‖n‖^2, l = 1, 2, ..., L, which is equivalent to ⟨n, s_l − s_0⟩ ≥ d_E^2(s_0, s_l)/2, l = 1, 2, ..., L (3) where ⟨a, b⟩ = ab^T denotes the inner product of the row vectors a and b. Now let t_l denote the inner product t_l = ⟨n, s_l − s_0⟩, l = 1, 2, ..., L. Then the vector t = (t_1 t_2 ... t_L) is a Gaussian random vector with zero mean and covariance matrix E[t^T t] = (N_0/2)Γ. The entries of the L × L matrix Γ = {γ_ij}, i, j = 1, 2, ..., L, are γ_ij = ⟨s_i − s_0, s_j − s_0⟩ = (d_E0i^2 + d_E0j^2 − d_Eij^2)/2 (4) where d_Eij^2 = d_E^2(s_i, s_j) = ‖s_i − s_j‖^2. Thus, Γ is the Gram matrix of the vectors s_l − s_0, l = 1, 2, ..., L. We call Γ the list configuration matrix. Let γ denote the vector of the main-diagonal elements of the list configuration matrix Γ, that is, γ = (d_E01^2 d_E02^2 ... d_E0L^2). From (3) it follows that the list error probability for any list with given configuration matrix Γ is given by P_eL(Γ) = Pr(t ≥ γ/2). (5) Consider a binary (N, K, d_Hmin) block code C = {v_i}, i = 0, 1, ..., 2^K − 1, of length N, dimension K, and minimum distance d_Hmin. Since the distance spectrum is a property of a linear code, we will hereinafter assume code linearity, although this condition is not necessary for the results presented in this section. When the code C is used with binary phase shift keying (BPSK) to communicate over an AWGN channel, the binary code symbols v_i^j ∈ {0, 1}, j = 1, 2, ..., N, are
mapped onto the symbols s_i^j = (1 − 2v_i^j)√(E_s), where E_s is the symbol energy, equal to E_s = E_b R, where R = K/N is the code rate and E_b is the energy per bit. (Hereinafter, the relation a ≥ b between two vectors of the same length L should be interpreted element-wise, that is, a ≥ b holds if and only if a_i ≥ b_i, i = 1, 2, ..., L.) All the signal points s_i, i = 0, 1, ..., 2^K − 1, have the same energy, ‖s_i‖^2 = NE_s, that is, they lie on a hypersphere of radius √(NE_s) in the N-dimensional Euclidean space. The squared Euclidean distance between two signal points is proportional to the Hamming distance between the corresponding codewords, that is, d_E^2(s_i, s_j) = 4E_s d_H(v_i, v_j). (6) The minimum squared Euclidean distance of the code is d_Emin^2 = min_{s_i ≠ s_j} {d_E^2(s_i, s_j)} = 4E_s d_Hmin. (7) Then the list configuration matrix can be written as Γ = 4E_s Γ_H, where Γ_H is the normalized list configuration matrix whose entries are γ_Hij = (d_H0i + d_H0j − d_Hij)/2 (8) where d_Hij = d_H(v_i, v_j). Examples of Γ_H for the (7, 4, 3) Hamming code, the (8, 4, 4) extended Hamming code, and the (24, 12, 8) extended Golay code are given in Tables I, II, and III, respectively. Without loss of generality, we assume that the reference signal s_0 corresponds to the all-zero codeword v_0 = 0. Each row in Tables I-III corresponds to a distinct value of the Hamming list distance d_HL, which will be explained in the next subsection. Several configuration matrices can yield the same d_HL. For each list size L, list configurations are listed in the order of increasing d_HL. The last column in the tables shows the number of lists N(Γ) with the same list configuration matrix Γ. Note that the ordering of the codewords on the list is irrelevant. Hence, for a given list configuration, N(Γ) is the number of combinations with matrices Γ that are equal up to a permutation of the main-diagonal and the corresponding off-diagonal entries. For list size L = 1, the normalized list configuration matrix Γ_H reduces to the codeword's Hamming weight d_H0i, and the values of N(Γ) yield the distance
spectrum of the code. For list size L = 2, consider, for example, the (7, 4, 3) Hamming code from Table I. There are six possible weight combinations to form a list of two codewords, corresponding to six list configuration matrices Γ_H. Consider, for example, lists with two minimum-weight codewords, d_Hmin = 3. There are N(Γ) = (7 choose 2) = 21 such lists. All the minimum-weight codewords of the Hamming code have the pairwise distance d_Hij = 4. Hence, the corresponding normalized list configuration matrix is Γ_H = [3 1; 1 3]. Next, consider lists-of-two that contain one codeword of weight 3 and one of weight 4. There are in total 7 · 7 = 49 such pairs, out of which N(Γ) = 42 pairs have pairwise distance 3, and hence their configuration matrix is Γ_H = [3 2; 2 4]. The remaining 7 pairs are at the distance 7 and their configuration matrix is Γ_H = [3 0; 0 4]. In [27] the following union-type bound on the list error probability for a given list size L was derived: P_eL ≤ Σ_Γ N(Γ) P_eL(Γ). (9)
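The counting in this example can be checked by enumerating the code. The sketch below generates the (7, 4, 3) Hamming code from one assumed systematic generator matrix (any equivalent generator gives the same weight and pair counts) and tallies the lists of size two.

```python
from itertools import combinations, product

# One systematic generator matrix of the (7,4,3) Hamming code (an assumed,
# standard choice; any equivalent code has the same weight and pair counts).
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1],
     [0, 0, 0, 1, 1, 0, 1]]

def encode(msg):
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

codewords = [encode(m) for m in product([0, 1], repeat=4)]
weights = [sum(c) for c in codewords]
assert weights.count(3) == 7 and weights.count(4) == 7   # 1 + 7z^3 + 7z^4 + z^7

w3 = [c for c in codewords if sum(c) == 3]
w4 = [c for c in codewords if sum(c) == 4]
dist = lambda a, b: sum(x != y for x, y in zip(a, b))

# All 21 pairs of weight-3 codewords are at mutual distance 4.
pairs33 = list(combinations(w3, 2))
assert len(pairs33) == 21 and all(dist(a, b) == 4 for a, b in pairs33)

# Of the 49 weight-3/weight-4 pairs, 7 are complementary (distance 7)
# and the remaining 42 are at distance 3.
d34 = [dist(a, b) for a in w3 for b in w4]
assert len(d34) == 49 and d34.count(7) == 7 and d34.count(3) == 42
```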

TABLE I. LIST CONFIGURATIONS FOR THE (7, 4, 3) HAMMING CODE (columns: L; Γ_H = Γ/(4E_s); d_HL; N(Γ)). TABLE II. LIST CONFIGURATIONS FOR THE (8, 4, 4) EXTENDED HAMMING CODE (columns: L; Γ_H = Γ/(4E_s); d_HL; N(Γ)). where N(Γ) is the number of lists of size L which have the same list configuration matrix Γ. It follows from (5) and (9) that the list error probability can be fully described in terms of the properties of the Gram matrix Γ. For binary codes, this matrix determines the so-called minimum Hamming list distance of the code [27], d_HLmin, which plays the same role for list decoding as the minimum distance for maximum-likelihood decoding. In the next subsection, the list distance is defined and illustrated by examples. B. List Radius and List Distance. Consider first maximum-likelihood decoding, that is, list decoding with list size L = 1. The largest contribution to the error probability is obtained when the received point r is exactly between the two closest signal points, that is, signal points at the minimum Euclidean distance d_Emin. Thus, r is the center of this constellation of L + 1 = 2 signal points. Next, we generalize this approach to arbitrary list size L.
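For a given configuration, the probability Pr(t ≥ γ/2) in (5) can be estimated by simulation, drawing t directly from its covariance (N_0/2)Γ. The sketch below does this for the two-minimum-weight-codeword configuration Γ_H = [[3, 1], [1, 3]] of the Hamming-code example; the convention Γ = 4E_sΓ_H and the values E_s = 1, N_0 = 2 are assumptions made for the illustration.

```python
import math
import random

# Monte Carlo estimate of the list error probability (5), P_eL = Pr(t >= gamma/2),
# for Gamma = 4*Es*[[3, 1], [1, 3]].  Es = 1 and N0 = 2 are assumed values.
Es, N0 = 1.0, 2.0
Gamma = [[4 * Es * 3, 4 * Es * 1], [4 * Es * 1, 4 * Es * 3]]
gam = [Gamma[0][0], Gamma[1][1]]              # main diagonal of Gamma

# Cholesky factor of the covariance (N0/2)*Gamma, written out for the 2x2 case.
c = N0 / 2
l11 = math.sqrt(c * Gamma[0][0])
l21 = c * Gamma[0][1] / l11
l22 = math.sqrt(c * Gamma[1][1] - l21 * l21)

random.seed(1)
trials = 200_000
hits = 0
for _ in range(trials):
    z1, z2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    t1, t2 = l11 * z1, l21 * z1 + l22 * z2    # t ~ N(0, (N0/2)*Gamma)
    if t1 >= gam[0] / 2 and t2 >= gam[1] / 2:
        hits += 1

p_el = hits / trials                          # estimate of Pr(t >= gamma/2)
assert 0.0 < p_el < 0.05
```

At this assumed SNR the estimate is on the order of 10^-3; summing such terms weighted by N(Γ) reproduces the union-type bound (9).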

TABLE III. LIST CONFIGURATIONS FOR THE (24, 12, 8) EXTENDED GOLAY CODE (columns: L; Γ_H = Γ/(4E_s); d_HL; N(Γ)). Let S be an arbitrary constellation of |S| = M signal points in the Euclidean space, and let S_L be an arbitrary subset S_L = {s_0, s_1, ..., s_L} ⊆ S of L + 1 signal points. Then, the minimum list radius of the constellation S, for a list size L, is defined as R_Lmin = min_{S_L} min_r max_{0 ≤ k ≤ L} {d_E(s_k, r)}. (10) For a given signal subset S_L, the list radius is the radius of the smallest sphere S' that contains (encompasses) the points of S_L, that is, the points lie on or inside the sphere, and the minimizing r is the center of this sphere. Minimization over all possible subsets S_L ⊆ S yields the smallest list radius for a given list size L. Thus, if the noise n is such that the received signal point r is closer than R_Lmin to the transmitted signal point, it is guaranteed that the transmitted signal point will be among the L points closest to the received signal and the list decoder will not make an error. Clearly, the list radius is the distance from the transmitted signal point to the closest point of the list error decision region. Like the minimum distance, the minimum list radius is also a constellation property. Let s_0 ∈ S_L be the reference transmitted signal, and let Γ be the list configuration matrix corresponding to the signal subset S_L. Assume that the vectors s_l − s_0, l = 1, 2, ..., L, are linearly independent; then the matrix Γ has full rank. The following theorem from [27] specifies the center and the radius of the circumsphere S of the set S_L, that is, the sphere such that all the points of S_L lie on the sphere. We also present the proof, in an extended form, as some of its steps will prove useful later on. Theorem 1: Let S_L = {s_0, s_1, ..., s_L} be a set of L + 1 signal points such that the vectors s_l − s_0, l = 1, 2, ..., L, are linearly independent. Let Γ be the corresponding Gram matrix of the vectors s_l − s_0, and let γ be the row vector of its main-diagonal elements. Then the radius R_L of the
circumsphere S of S_L is given by R_L^2(Γ) = (1/4) γ Γ^{-1} γ^T (11) and the center ρ of the sphere S is given by ρ − s_0 = (1/2) γ Γ^{-1} S, S = [s_1 − s_0; s_2 − s_0; ...; s_L − s_0] (12) where S is the matrix whose rows are the vectors s_l − s_0. Proof: Since all the points of S_L lie on the sphere S, its radius R_L satisfies R_L^2(Γ) = ‖ρ − s_0‖^2 = ‖ρ − s_l‖^2, l = 1, 2, ..., L. (13) From here it follows that 2⟨ρ − s_0, s_l − s_0⟩ = ‖s_l − s_0‖^2 (14) which can be rewritten in vector form as (ρ − s_0) S^T = γ/2 (15) where S is given by (12). Note that SS^T = Γ. Now let ζ = (ζ_1 ζ_2 ... ζ_L) be a vector of coefficients of the decomposition of the vector ρ − s_0 in the L-dimensional basis consisting of the linearly independent vectors s_l − s_0. Then we can write ρ − s_0 = ζ_1(s_1 − s_0) + ζ_2(s_2 − s_0) + ... + ζ_L(s_L − s_0) or, equivalently, ρ − s_0 = ζS. (16) Substituting (16) into (13) yields R_L^2(Γ) = ‖ρ − s_0‖^2 = ζSS^Tζ^T = ζΓζ^T. (17) From (15) and (16) it follows that ζΓ = γ/2, which yields ζ = (1/2) γ Γ^{-1}. (18) Substituting (18) into (16) and (17) yields (12) and (11), respectively, which completes the proof. Clearly, if the vectors s_l − s_0 are linearly independent, the sphere S is L-dimensional. If, however, some of the vectors are linearly dependent, the Gram matrix Γ is singular, that is, det(Γ) = 0, and the radius is R_L = ∞, since the signal points s_l lie in a reduced subspace (cf. Examples 3 and 4 below).
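Theorem 1 can be verified numerically; the right triangle below is an assumed toy configuration, not one of the paper's examples.

```python
# Check (11)-(12) on an assumed configuration: s0=(0,0), s1=(2,0), s2=(0,2).
s0, s1, s2 = (0.0, 0.0), (2.0, 0.0), (0.0, 2.0)
S = [[s1[0] - s0[0], s1[1] - s0[1]],
     [s2[0] - s0[0], s2[1] - s0[1]]]           # rows s_l - s_0, cf. (12)

dot = lambda a, b: sum(x * y for x, y in zip(a, b))
G = [[dot(u, v) for v in S] for u in S]        # Gram matrix Gamma, cf. (4)
gam = [G[0][0], G[1][1]]                       # main diagonal gamma

det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
Ginv = [[G[1][1] / det, -G[0][1] / det],
        [-G[1][0] / det, G[0][0] / det]]

# zeta = (1/2) gamma Gamma^-1 (18); R_L^2 = zeta Gamma zeta^T = (1/2) zeta . gamma
zeta = [0.5 * sum(gam[k] * Ginv[k][j] for k in range(2)) for j in range(2)]
R2 = 0.5 * dot(zeta, gam)                      # equals (1/4) gamma Gamma^-1 gamma^T
rho = tuple(s0[i] + sum(zeta[j] * S[j][i] for j in range(2)) for i in range(2))

d2 = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
# All three points lie on the sphere of radius R_L centered at rho.
assert all(abs(d2(rho, p) - R2) < 1e-9 for p in (s0, s1, s2))
```

For this triangle the center lands on the midpoint of the hypotenuse, ρ = (1, 1), with R_L^2 = 2, as elementary geometry predicts.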

For a given signal set S_L, the circumsphere S may, in general, not be the smallest sphere that encompasses the points of S_L, which is the sphere that determines the list radius. Let S' denote the smallest encompassing sphere of the set S_L, such that the reference point s_0 lies on the sphere, and the remaining points s_l, l = 1, 2, ..., L, lie either on or inside S' (more precisely, at least one more point s_l ∈ S_L, other than s_0, will lie on such a sphere). Assume that the vectors s_l − s_0 are linearly independent. Then it was shown in [27] that the radius R'_L of the sphere S' is given by R'_L^2(Γ) = (1/4) max_{I: γ_I Γ_I^{-1} ≥ 0} {γ_I Γ_I^{-1} γ_I^T} (19) where the maximization is performed over all signal subsets I ⊆ S_L that contain the reference point s_0, such that their corresponding configuration matrix Γ_I and its main-diagonal vector γ_I fulfill the condition γ_I Γ_I^{-1} ≥ 0. Note that Γ_I is a main submatrix of the configuration matrix Γ, obtained by deleting those rows and columns that correspond to the signal points not included in the chosen subset I. Theorem 1 and formula (19) imply the following: the list radius R'_L is the largest radius of the circumspheres of all the signal subsets I such that γ_I Γ_I^{-1} ≥ 0. Let I_max denote the signal subset that yields the maximum in (19) and thus determines the list radius. Then all the points from I_max lie on the sphere S' and the remaining points, from S_L \ I_max, lie inside. The center θ of the sphere S' is given by (cf. (12)) θ − s_0 = (1/2) γ_Imax Γ_Imax^{-1} S_Imax (20) where S_Imax is the matrix whose rows are s_l − s_0, s_l ∈ I_max. Since I_max ⊆ S_L, we have R'_L(Γ) ≤ R_L(Γ), with equality if and only if the list configuration matrix Γ fulfills γ Γ^{-1} ≥ 0. (21) Hence, when determining the list radius of a signal set S_L, the first step is to check whether condition (21) is satisfied. If so, then the circumsphere S is the smallest encompassing sphere and R'_L^2(Γ) = R_L^2(Γ) = (1/4) γ Γ^{-1} γ^T. Otherwise, when at least one component of γ Γ^{-1} is negative, the sphere S' and its radius R'_L(Γ) are determined by a reduced signal set I_max ⊂ S_L.
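Expression (19) lends itself to a direct brute-force sketch: enumerate the subsets I containing s_0, keep those with γ_I Γ_I^{-1} ≥ 0, and take the largest quadratic form. The planar configuration below is an assumed example in which the full-set condition fails and the maximum is attained by a single-point subset.

```python
from itertools import combinations

s0 = (0.0, 0.0)
pts = [(1.0, 1.0), (3.0, 1.0)]                    # assumed example points
diff = [tuple(p[i] - s0[i] for i in range(2)) for p in pts]
dot = lambda a, b: sum(x * y for x, y in zip(a, b))

def solve(A, b):
    # Solve A x = b by Gaussian elimination with partial pivoting (A is small).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

best = 0.0
for size in range(1, len(diff) + 1):
    for idx in combinations(range(len(diff)), size):
        Gi = [[dot(diff[i], diff[j]) for j in idx] for i in idx]
        gi = [Gi[k][k] for k in range(size)]
        x = solve(Gi, gi)                          # gamma_I Gamma_I^{-1}
        if all(v >= 0 for v in x):                 # admissibility in (19)
            best = max(best, dot(x, gi))           # gamma_I Gamma_I^{-1} gamma_I^T

R_prime_sq = best / 4                              # squared list radius (19)
assert abs(best - 10.0) < 1e-9 and abs(R_prime_sq - 2.5) < 1e-9
```

Here γΓ^{-1} = (−5, 3) for the full set, so the full set is rejected, and the maximum 10 comes from the subset containing only the farther point.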
In some cases, the negative components of γ Γ^{-1} indicate which points should be removed from S_L to obtain the set I_max (see Examples 2, 3, and 5). However, in general, there is no one-to-one correspondence between the negative components of γ Γ^{-1} and the set S_L \ I_max (see Example 6). Note that the condition (21) is equivalent to ζ ≥ 0 (cf. (18)), that is, all the coefficients of the decomposition of ρ − s_0 in the basis set {s_l − s_0}, l = 1, 2, ..., L, should be nonnegative. The set of points in space described by the linear combination ζS, where ζ ≥ 0, constitutes an L-dimensional unbounded pyramid, whose vertex is s_0 and whose semi-infinite edges run along s_l − s_0. If the center ρ of the circumsphere S lies in this pyramid, then S is the smallest sphere that determines the list radius. Otherwise, there exists a smaller sphere whose center is inside this pyramid, and it is found by (19). When the vectors s_l − s_0, l = 1, 2, ..., L, are linearly independent, the inverse of the list configuration matrix is given by Γ^{-1} = adj(Γ)/det(Γ), where adj(Γ) is the adjoint matrix of Γ. Furthermore, det(Γ) > 0. Hence, condition (21) is equivalent to γ adj(Γ) ≥ 0. In fact, this condition is more general, since it is also applicable for configurations where some of the vectors s_l − s_0 are linearly dependent. Hence, the expression (19) is easily generalised to hold for any signal set S_L as R'_L^2(Γ) = (1/4) max_{I: γ_I adj(Γ_I) ≥ 0} {γ_I Γ_I^{-1} γ_I^T}. (22) Note that for a singular list configuration matrix Γ, the list radius is not necessarily infinite. For a given list configuration S_L, a list error with respect to the transmitted signal s_0 occurs if the received signal point r falls in the error decision region D, which is the intersection of all pairwise error decision regions D_l, l = 1, 2, ..., L, between the signal points s_l and s_0, that is, D = ∩_{l=1}^L D_l. The point of the region D that is closest to the signal point s_0 is the center θ of the sphere S'. If the list radius is R'_L(Γ) = ∞, the pairwise error decision regions do not intersect, ∩_{l=1}^L D_l = ∅. For such a list
configuration, the probability of a list error for a given transmitted signal s_0 is zero, since there is no point in space that is simultaneously closer to all L points s_l, l = 1, 2, ..., L, than to the point s_0 (see Example 4 and Tables I and II for L = 3). The minimum list radius (10) for list size L of a signal constellation S is obtained as R'_Lmin = min_Γ {R'_L(Γ)} where the minimization is performed over all possible list configuration matrices for a list size L, that is, over all possible signal subsets S_L ⊆ S. The Euclidean list distance for a signal subset S_L with a list configuration matrix Γ is defined as d_EL(Γ) = 2 R'_L(Γ). (23) Thus, from (22) it follows that the squared Euclidean list distance is d_EL^2(Γ) = max_{I: γ_I adj(Γ_I) ≥ 0} {γ_I Γ_I^{-1} γ_I^T}. (24) The minimum Euclidean list distance of the signal constellation S is then d_ELmin = 2 R'_Lmin = min_Γ {d_EL(Γ)}.

When the signal constellation S is a set of bipolar signals corresponding to the binary linear code C, we can also define the Hamming list distance of the code C, for a list size L, as d_HL(Γ_H) = d_EL^2(Γ)/(4E_s) = max_{I: γ_HI adj(Γ_HI) ≥ 0} {γ_HI Γ_HI^{-1} γ_HI^T}. (25) The minimum Hamming list distance of the code is then d_HLmin = d_ELmin^2/(4E_s) = R'_Lmin^2/E_s. The examples of Hamming list distances are shown in Tables I-III. In general, the Hamming list distance is not an integer. The following examples illustrate the list distance and the list radius for a few signal configurations. Example 1: Consider the set of L + 1 = 3 signal points S = {s_0, s_1, s_2} corresponding to the minimal weight codewords that have minimal pairwise distances d = d_Emin, as illustrated in Figure 2. If s_0 is the transmitted signal point, then a list error for list size L = 2 occurs if the received signal point r falls in the region D marked in the figure, where both signals s_1 and s_2 are closer than s_0. If, however, r is outside region D, then s_0 will always be one of the two signal points closest to r, and hence s_0 will be included in the list. The smallest sphere that encompasses the three signal points is the circumsphere, that is, S' = S, and its center ρ is the center of the equilateral triangle. This is the point of the region D which is closest to s_0. The Euclidean list distance is d_EL = 2R_L = 2d/√3, and it is invariant with respect to enumeration of the signal points. Fig. 1. Configuration of three non-bipolar signals for which the smallest sphere encompassing the points does not coincide with the sphere on which all the points are lying, and thus, R'_L < R_L. Fig. 2. List configuration and the list error region D for a list of size L = 2 with signals at equal pairwise distances. Region D and point s illustrate the bound. Example 2: Consider now the set S = {s_0, s_1, s_2} of non-bipolar signals shown in Figure 1, where the signal coordinates are s_0 = (0 0), s_1 = (1 1), and s_2 = (3 1). The squared Euclidean distances are d_E01^2 = 2, d_E02^2 = 3^2 + 1^2 = 10, and d_E12^2 = 4. The corresponding Gram matrix is, according to (4), Γ = [2 4; 4 10]. Thus we have γ adj(Γ) = (2 10) [10 −4; −4 2] = (−20 12). (26) The squared radius R_L^2 of the circumsphere S is given by (11): R_L^2 = (1/4) γ Γ^{-1} γ^T = 20/4 = 5 and the coordinates of the center ρ are, according to (12), ρ = (1/2) γ Γ^{-1} S = (2 −1). However, the point of the list error region D which is nearest to s_0 is not ρ but θ, or, equivalently, the sphere S is not the smallest sphere which encompasses the three signal points. This is indicated by the negative sign of some of the elements of γ adj(Γ), as found in (26). Hence, in order to find the list radius and the list distance, we need to reduce the size of the list and find the subset of signal points for which the distance γ_I Γ_I^{-1} γ_I^T is maximized. In our case, the signal point s_2 has larger distance from s_0 and thus the signal set is reduced to I_max = {s_0, s_2}, for which Γ_Imax = (10), γ_Imax = (10). Fig. 3. The same configuration as in Fig. 1, but with changed reference signal point s_0: in this case the two spheres coincide and the list error decision region D has a completely different shape compared to the previous case. The

squared list distance is thus d_EL^2 = d_E02^2 = 10, and the squared list radius is R'_L^2 = d_EL^2/4 = 10/4 = 5/2, which is smaller than R_L^2 = 5. The center of the sphere S' is given by θ = (1/2) γ_Imax Γ_Imax^{-1} S_Imax = (3/2 1/2). Note that, unlike in Example 1, the list distance for the configuration in Figure 1 is not symmetric with respect to the signal points, that is, it changes if we change the reference signal point. For example, if we exchange signal points s_1 and s_0, we obtain the configuration as illustrated in Figure 3. The shape of the error decision region D is changed and, in this case, the two spheres coincide. Thus, the list distance is determined by the radius of the circumsphere, that is, d_EL = 2R_L = √10. Example 3: Consider the signal constellation shown in Figure 4, with the signals s_0 = (0 0), s_1 = (1 0), and s_2 = (2 0). The intersection of the pairwise error regions is D = D_1 ∩ D_2 = D_2, as shown in the figure. The squared Euclidean distances are d_E01^2 = 1, d_E02^2 = 4, and d_E12^2 = 1, and the corresponding Gram matrix is Γ = [1 2; 2 4]. The matrix Γ is singular due to linear dependency of the signals. Since det(Γ) = 0 it follows from (11) that R_L = ∞, i.e., the "circle" on which the signal points are lying is a straight line. Let us now determine the list radius R'_L. We have that γ adj(Γ) = (1 4) [4 −2; −2 1] = (−4 2), which indicates that in order to obtain R'_L, we need to reduce the list size, as in the previous example. The signal point s_2 has larger distance from s_0 and thus the signal set is reduced to I_max = {s_0, s_2}, for which Γ_Imax = (4), γ_Imax = (4). The squared list distance is thus d_EL^2 = d_E02^2 = 4, Fig. 4. List configuration with linearly dependent signal points for which R_L = ∞, but the list radius R'_L is finite. Fig. 5. List configuration with linearly dependent signal points for which R_L = R'_L = ∞. The list error probability is 0. and the list radius is R'_L = d_EL/2 = 1. The center of the sphere S' is the point s_1, which is formally obtained by θ = (1/2) γ_Imax Γ_Imax^{-1} S_Imax = (1/2) · 4 · (1/4) · (2 0) = (1 0) = s_1. Example 4: Consider the signal
constellation shown in Figure 5, with the signals s 0 = 0, s =, and s = This configuration corresponds to the one from the previous example, with changed reference point The pairwise decision regions, indicated by the two dashed lines, do not intersect, that is, D = D D =, which implies that the probability of list error is zero because the received signal point can never be closer to both s and s than to s 0 This corresponds to an infinite list distance as we will now formally verify The list configuration matrix is Γ = The matrix Γ is singular, thus detγ = 0 and R L = Now we consider the vector γ adjγ: γ adjγ = = 0 0 Since the elements of γ adjγ are positive, we conclude that the submatrix Γ I which maximizes the quadratic form γγ γ T is the matrix Γ itself, and therefore d EL = R L = R L = Example 5: Consider the set of L + = bipolar signal points corresponding to the codewords v 0 = v = 0 v = 0 v 3 = The pairwise Hamming distances are d H0 = d H0 = 5, d H03 =, d H =, d H3 = d H3 = 3, which yields the list configuration matrix Γ = E s 5 5 The L = 3 dimensional circumsphere has the squared radius R L = γγ γ T = E s However, γ adjγ = which implies that the list distance d EL is determined by a reduced signal set In our case, we need to remove the sequence v 3 in order to maximize the quadratic form and, thus, for I max = v 0, v, v } we obtain d EL = γ Imax Γ I max γ T I max = 556 E s and the squared list radius is R L = d EL / = 389 E s A sphere of the radius R L contains the signal points from I max on its surface, while v 3 lies inside

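As a quick numerical cross-check of the circumsphere quantities used in these examples, the following sketch (our own helper, not code from the paper) computes the circumsphere center θ = (1/2) γ Γ^{-1} S and squared radius R̃²_L = (1/4) γ Γ^{-1} γ^T from the Gram matrix of the difference vectors:

```python
import numpy as np

def circumsphere(signals):
    """Center and squared radius of the sphere through all points.

    With s0 = signals[0] shifted to the origin, the center is
    theta = (1/2) gamma Gamma^{-1} S and the squared radius is
    R^2 = (1/4) gamma Gamma^{-1} gamma^T, where S stacks the
    difference vectors s_i - s0 and Gamma = S S^T is their Gram
    matrix.  Requires linearly independent difference vectors.
    """
    s0 = np.asarray(signals[0], dtype=float)
    S = np.array([np.asarray(s, dtype=float) - s0 for s in signals[1:]])
    Gamma = S @ S.T
    gamma = np.diag(Gamma)
    coeff = gamma @ np.linalg.inv(Gamma)
    theta = 0.5 * coeff @ S          # center, relative to s0
    R2 = 0.25 * coeff @ gamma        # squared circumradius
    return s0 + theta, float(R2)
```

For the three points of Example 2 this yields the center (1, 2) and R̃²_L = 5, and one can verify that all three points are equidistant from the returned center.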
Example 6: Consider a set of L + 1 = 4 points in the three-dimensional space, S_L = {s_0, s_1, s_2, s_3}, where the reference point is the origin, s_0 = (0, 0, 0), and the remaining points are s_1, s_2, and s_3, as shown in Figure 6. The circumsphere S̃ of the set S_L has the radius R̃_L = 3. Since γ Γ^{-1} has a negative element, the list radius is determined by a reduced signal set. We find that the smallest sphere S encompassing the signal points contains only the signals I_max = {s_0, s_1} on its surface, while the points S_L \ I_max = {s_2, s_3} lie inside. Note that only the third element of γ Γ^{-1} is negative; however, both s_2 and s_3 are inside S. The sphere S is illustrated in Figure 6; the points s_0 and s_1 are visible on the intersection of the two spheres.

Fig. 6. The circumsphere S̃ and the smallest encompassing sphere S for the signal set from Example 6.

C. Properties of the List Configuration Matrix

Hereinafter, we consider sets of linearly independent bipolar code signals s_l, l = 1, 2, …, L, for which the matrix Γ fulfills the condition γ Γ^{-1} ≥ 0. The following theorem establishes a connection between the Hamming list distance d_HL(Γ_H) and some properties of the matrix Γ_H.

Theorem 2: Let Γ_H = {γ_Hij}, i, j = 1, 2, …, L, be the normalized Gram matrix, with entries given by (18), of linearly independent bipolar signal vectors s_i ≠ s_0 of a binary block code with minimum Hamming distance d_Hmin. Then the quadratic form d_HL(Γ_H) = γ_H Γ_H^{-1} γ_H^T has the following properties:

1) All γ_Hij are integers, and γ_Hii ≥ γ_Hij ≥ 0.
2) Γ_H is positive definite.
3) If λ_max is the maximal eigenvalue of Γ_H, then the Hamming list distance satisfies
d_HL(Γ_H) ≥ (γ_H γ_H^T)² / (γ_H Γ_H γ_H^T) ≥ γ_H γ_H^T / λ_max.
4) For any binary code with minimum Hamming distance d_Hmin, the minimum Hamming list distance is
d_HLmin ≥ (2L/(L + 1)) d_Hmin,
where equality is achieved, for even d_Hmin, with a matrix Γ_H whose main-diagonal and off-diagonal elements are γ_Hii = d_Hmin and γ_Hij = d_Hmin/2, respectively, if such a matrix exists.
5) For any binary code with odd d_Hmin, the minimum Hamming list distance is
d_HLmin ≥ (2L d_Hmin + L − 1)/(L + 1),
where the equality is achieved for odd list size L and a matrix Γ_H with main-diagonal elements
γ_Hii = d_Hmin for 1 ≤ i ≤ (L + 1)/2, and γ_Hii = d_Hmin + 1 for (L + 1)/2 < i ≤ L,
and off-diagonal elements
γ_Hij = (d_Hmin − 1)/2 if γ_Hii = γ_Hjj = d_Hmin, and γ_Hij = (d_Hmin + 1)/2 otherwise,
if such a matrix exists.

Remark: From Statement 4 of Theorem 2 it follows that the ratio between the minimum Hamming list distance d_HLmin and the minimum distance of the code d_Hmin, when d_Hmin is even, is

d_HLmin / d_Hmin ≥ 2L/(L + 1).

The ratio on the right-hand side of the above inequality was derived in [] using simplex geometry and defined as the asymptotic list decoding gain over ML decoding.

Proof: 1) The entries of the matrix Γ_H are given by (18). Since the Hamming distance satisfies the triangle inequality, we immediately obtain that

γ_Hij = (d_H0i + d_H0j − d_Hij)/2 ≥ 0.

Furthermore, we have

2(γ_Hij − γ_Hii) = (d_H0i + d_H0j − d_Hij) − 2d_H0i = d_H0j − d_H0i − d_Hij ≤ 0,

which yields γ_Hii ≥ γ_Hij ≥ 0. In order to prove that the entries of the matrix Γ_H are integers, it suffices to verify that d_H0i + d_H0j − d_Hij is always an even number. This

follows directly from the fact that when two codewords have weights d_H0i and d_H0j of the same parity (both odd or both even), their pairwise distance d_Hij is always even, while for codeword weights of opposite parity the pairwise distance is an odd number.

2) The matrix Γ_H is a Gram matrix of linearly independent vectors s_i − s_0, normalized by 4E_s. Hence, Γ_H is positive definite (cf. [9]).

3) According to the Kantorovich inequality [9], for every positive definite symmetric matrix Γ_H and any nonzero row vector x ≥ 0 we have

(4 λ_min λ_max / (λ_min + λ_max)²) x Γ_H^{-1} x^T ≤ (x x^T)² / (x Γ_H x^T) ≤ x Γ_H^{-1} x^T, (27)

where λ_min and λ_max are the smallest and the largest eigenvalues of Γ_H, respectively. By applying the right side of the Kantorovich inequality (27) with x = γ_H we obtain

d_HL(Γ_H) = γ_H Γ_H^{-1} γ_H^T ≥ (γ_H γ_H^T)² / (γ_H Γ_H γ_H^T).

Furthermore, according to [9], γ_H Γ_H γ_H^T ≤ γ_H γ_H^T λ_max. Thus we obtain

d_HL(Γ_H) ≥ (γ_H γ_H^T)² / (γ_H Γ_H γ_H^T) ≥ γ_H γ_H^T / λ_max.

4) The matrix Γ_H can be decomposed as Γ_H = D V D, where D is the diagonal matrix with entries √γ_Hii and V = {γ_Hij / √(γ_Hii γ_Hjj)}, i, j = 1, 2, …, L. Then the Hamming list distance is

d_HL(Γ_H) = γ_H Γ_H^{-1} γ_H^T = γ_H D^{-1} V^{-1} D^{-1} γ_H^T = v V^{-1} v^T,

where v = γ_H D^{-1} = (√γ_H11, √γ_H22, …, √γ_HLL). Since V is positive definite and v > 0, we apply the Kantorovich inequality (27) and obtain

d_HL(Γ_H) = v V^{-1} v^T ≥ (v v^T)² / (v V v^T).

Since

v v^T = Σ_{i=1}^{L} γ_Hii = tr Γ_H

and

v V v^T = Σ_{i=1}^{L} Σ_{j=1}^{L} γ_Hij = Σ_i Σ_j (d_H0i + d_H0j − d_Hij)/2 = L tr Γ_H − (1/2) Σ_i Σ_j d_Hij,

we obtain

d_HL(Γ_H) ≥ (tr Γ_H)² / (L tr Γ_H − (1/2) Σ_i Σ_j d_Hij). (28)

Since the pairwise distance between any two codewords satisfies d_Hij ≥ d_Hmin for i ≠ j, and d_Hij = 0 for i = j, we have

Σ_i Σ_j d_Hij ≥ L(L − 1) d_Hmin,

which, combined with (28), yields

d_HL(Γ_H) ≥ (tr Γ_H)² / (L tr Γ_H − (1/2) L(L − 1) d_Hmin). (29)

By taking the derivative of the right-hand side of (29) with respect to tr Γ_H we find that its minimum is achieved for tr Γ_H = (L − 1) d_Hmin. On the other hand, all the diagonal entries of Γ_H satisfy γ_Hii ≥ d_Hmin, which implies that tr Γ_H ≥ L d_Hmin must hold. Clearly, in this range of tr Γ_H, (29) is a monotonically increasing function of tr Γ_H; hence, its minimum value is obtained for tr Γ_H = L d_Hmin, and we obtain the following bound:

d_HL(Γ_H) ≥ (2L/(L + 1)) d_Hmin. (30)

Equality in (30) is achieved for a code with even minimum Hamming distance d_Hmin and a list configuration matrix of the form

Γ_H = ( d_Hmin d_Hmin/2 … d_Hmin/2 ; d_Hmin/2 d_Hmin … d_Hmin/2 ; … ; d_Hmin/2 d_Hmin/2 … d_Hmin ). (31)

We call this matrix the worst-case list configuration matrix; it corresponds to the list of L codewords with minimum weights and at minimal possible pairwise distances, which are, for even d_Hmin, all equal to d_Hmin (cf. Tables II and III). It is easy to verify that matrix (31) satisfies (30) with equality.

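The worst-case constructions of Theorem 2 can be checked numerically. The sketch below (our own illustration; it assumes the worst-case matrices exist for the chosen parameters) builds the even-d_Hmin matrix (31) and its odd-d_Hmin counterpart and evaluates the quadratic form γ_H Γ_H^{-1} γ_H^T:

```python
import numpy as np

def list_distance(gram_h):
    """Hamming list distance d_HL = gamma Gamma_H^{-1} gamma^T."""
    gamma = np.diag(gram_h)
    return float(gamma @ np.linalg.solve(gram_h, gamma))

def worst_case_even(L, d):
    """Worst-case matrix (31): d on the diagonal, d/2 elsewhere."""
    return np.full((L, L), d / 2.0) + np.eye(L) * d / 2.0

def worst_case_odd(L, d):
    """Worst-case matrix for odd d and odd L: m = (L+1)//2 diagonal
    entries d and L - m entries d + 1; off-diagonal (d-1)/2 when both
    diagonal entries equal d, and (d+1)/2 otherwise."""
    m = (L + 1) // 2
    diag = [d] * m + [d + 1] * (L - m)
    G = np.empty((L, L))
    for i in range(L):
        for j in range(L):
            if i == j:
                G[i, j] = diag[i]
            elif i < m and j < m:
                G[i, j] = (d - 1) / 2.0
            else:
                G[i, j] = (d + 1) / 2.0
    return G
```

For even d_Hmin the quadratic form evaluates to 2L d_Hmin/(L + 1), and for odd d_Hmin with odd L to (2L d_Hmin + L − 1)/(L + 1), matching the minima in Statements 4 and 5.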
5) If the Hamming weights of two codewords have the same parity (both odd or both even), the Hamming distance between them is always even. Hence, if a code has odd minimum distance d_Hmin, the pairwise distances between codewords of weight d_Hmin must be even, that is, d_Hij ≥ d_Hmin + 1. Then, if d_Hmin is odd, we conclude that the worst-case list configuration matrix (31) cannot be constructed and the bound (30) is never tight. To obtain the worst-case Γ_H and the corresponding minimum list distance when d_Hmin is odd, assume that in a list of L codewords there are m > 0 codewords of odd weight and L − m codewords of even weight. The pairwise distances between the codewords of same-parity weights are d_Hij ≥ d_Hmin + 1, while for opposite-parity pairs d_Hij ≥ d_Hmin, which yields

Σ_i Σ_j d_Hij ≥ L(L − 1) d_Hmin + m(m − 1) + (L − m)(L − m − 1). (32)

Following the same procedure as in the previous case, we insert (32) into (28) and, by taking the derivative of the right-hand side of (28) with respect to tr Γ_H, we conclude that the bound (28) is a monotonically increasing function of tr Γ_H for

tr Γ_H ≥ (1/L) Σ_i Σ_j d_Hij ≥ [L(L − 1) d_Hmin + m(m − 1) + (L − m)(L − m − 1)] / L. (33)

On the other hand, since the m odd-weight codewords from the list have weights d_H0i ≥ d_Hmin, while the remaining L − m even-weight codewords have d_H0i ≥ d_Hmin + 1, tr Γ_H has to fulfill

tr Γ_H ≥ m d_Hmin + (L − m)(d_Hmin + 1) = L d_Hmin + L − m. (34)

The right-hand side of (34) is always larger than the right-hand side of (33); hence, we conclude that the right-hand side of (34) minimizes the bound (28) on the list distance. Thus we obtain

d_HL(Γ_H) ≥ 2(L d_Hmin + L − m)² / (L(L + 1)(d_Hmin + 1) − 2m²), (35)

which holds for all list configuration matrices Γ_H. By taking the derivative of (35) with respect to m we can conclude that, assuming odd L, the minimum of the bound (35) is achieved for m = (L + 1)/2, which yields

d_HL(Γ_H) ≥ (2L d_Hmin + L − 1)/(L + 1). (36)

Equality is achieved for the worst-case list configuration matrix with m = (L + 1)/2 diagonal elements equal to γ_Hii = d_Hmin and L − m elements γ_Hii = d_Hmin + 1, and with the off-diagonal elements equal to γ_Hij = (d_Hmin − 1)/2 if γ_Hii = γ_Hjj = d_Hmin, and γ_Hij = (d_Hmin + 1)/2 otherwise. If the list size L is even, the worst-case matrix is obtained in the same way, with m = ⌈(L + 1)/2⌉; however, in this case, the lower bound (35) on the list distance is not tight.

D. Center of Mass and Average Radius of a List

For a given set S_L = {s_0, s_1, …, s_L} of L + 1 signal points, the center of mass is located in the point s̄ given by

s̄ = (1/(L + 1)) Σ_{j=0}^{L} s_j. (37)

The average squared radius R²_Lav, introduced in [], of the signal set S_L is the average squared Euclidean distance of the signal points s_i ∈ S_L from the center of mass s̄, that is,

R²_Lav = (1/(L + 1)) Σ_{i=0}^{L} |s_i − s̄|² = (1/(L + 1)) Σ_{i=0}^{L} d²_E(s_i, s̄). (38)

The average squared distance between the signal points from a set S_L and a given reference point is often referred to as the moment of inertia of S_L, cf. [6]. Clearly, the moment of inertia is smallest when the reference point is the center of mass s̄, and then it equals the average squared radius R²_Lav. From the above definitions it follows that the average radius is never larger than the list radius for the given list S_L with the configuration matrix Γ, that is,

R_L(Γ) ≥ R_Lav. (39)

By substituting (37) into (38) we obtain that the average squared radius can also be written as

R²_Lav = (1/(2(L + 1)²)) Σ_{i=0}^{L} Σ_{j=0}^{L} d²_E(s_i, s_j) ≥ (1/(2(L + 1)²)) (L + 1)L d²_Emin = (L/(2(L + 1))) d²_Emin,

which, combined with (39), yields

R²_L(Γ) ≥ R²_Lav ≥ (L/(2(L + 1))) d²_Emin (40)

with equality when all L + 1 points have minimum pairwise distances d_Emin, that is, when they form an L-dimensional

regular simplex. In this case, the center of mass of S_L coincides with the center of the circumsphere of S_L, that is, the minimum average radius is equal to the minimum list radius,

R²_Lav,min = R²_L,min = (L/(2(L + 1))) d²_Emin.

When the signal vectors s_i ∈ S_L are bipolar sequences of a binary block code C with minimum Hamming distance d_Hmin, then (40) yields the bound

d_HLmin ≥ d_HL(Γ) ≥ (2L/(L + 1)) d_Hmin,

which coincides with Statement 4 of Theorem 2. Statement 5 of Theorem 2 can be proved similarly, via the average radius, taking into account the parity of the pairwise distances. The average radius and the moment of inertia of a list were used in [] and [6] (cf. also [30]) for deriving asymptotic bounds on the code rates and list error performance.

III. UPPER BOUND ON THE LIST ERROR PROBABILITY FOR A GIVEN LIST

Using the properties of the list configuration matrix Γ we can upper-bound the list error probability P_el(Γ) = Pr(t ≥ γ/2). In [7] the following Chernoff-type bound was proved:

P_el(Γ) = Pr(t ≥ γ/2) ≤ exp(−d²_EL(Γ)/(4N_0)).

It also immediately follows that, for a given list configuration Γ, the probability of list error is not larger than the probability that the noise component along ρ − s_0 is larger than the radius R_L, that is,

P_el(Γ) ≤ Pr(ν ≥ R_L(Γ)),

where ν = ⟨n, (ρ − s_0)/|ρ − s_0|⟩ is the noise component along ρ − s_0. The above inequality is met with equality for L = 1. Since ν is a zero-mean Gaussian random variable with variance N_0/2, we obtain

P_el(Γ) ≤ Q(R_L(Γ)/√(N_0/2)) = Q(√(2 d_HL(Γ) E_s/N_0)), (44)

where Q(x) = (1/√(2π)) ∫_x^∞ exp(−y²/2) dy. It is easy to see that the bound (44) is tighter than the Chernoff-type bound. Figure 2 illustrates the worst-case list configuration for L = 2. In this case, the bound (44) corresponds to the probability that the received signal falls into the decision region D̄, which is the upper half-plane containing the sphere center ρ. Note that if s̄ denotes the virtual average signal point (the average of the set {s_1, s_2}), as shown in Figure 2, then the half-plane D̄ corresponds to the error region for the pairwise error event between s_0 and s̄. Now we will derive a new upper bound on P_el(Γ) for the worst-case list configuration, which is tighter than the bound (44). We follow an approach similar to the one described in [8]. First we orthogonalize the noise components and then estimate the variances and integration limits for the system of the transformed noise components. The derivations are based on the following two lemmas, which correspond to the worst-case matrix Γ for even and odd minimum distance, respectively. Hereinafter, we use the notations 1_m and 0_n to denote vectors containing m ones and n zeros, respectively; thus, for example, the vector (a a a b 0 0) can be written as (a 1_3, b, 0_2) = a(1_3, b/a, 0_2).

Lemma 1: Let K be an L × L matrix with the following structure:

K = β{k_ij}, k_ij = 1 for i = j, k_ij = κ for i ≠ j, i, j = 1, 2, …, L,

where β and κ are arbitrary constants. Then its eigenvalues are

λ_1 = β(1 + κ(L − 1)),
λ_l = β(1 − κ), l = 2, 3, …, L,

with the corresponding eigenvectors

x_1 = 1_L,
x_l = (1_{l−1}, −(l − 1), 0_{L−l}), l = 2, 3, …, L.

Lemma 2: Let K be an L × L matrix of the following structure:

K = ( A B ; B^T C ), (48)

where A is an m × m matrix of the form

A = {a_ij}, a_ij = a_0 for i = j, a_ij = a for i ≠ j, i, j = 1, 2, …, m, (49)

C is an n × n matrix, with n = L − m, of the form

C = {c_ij}, c_ij = c_0 for i = j, c_ij = c for i ≠ j, i, j = 1, 2, …, n, (50)

and B is an m × n matrix whose elements are all equal to b:

B = {b_ij}, b_ij = b, i = 1, 2, …, m, j = 1, 2, …, n, (51)

where a_0, a, c_0, c, and b are arbitrary constant values. Then the eigenvalues of K are

ξ_l = a_0 − a, l = 1, 2, …, m − 1, (52)
ξ_l = c_0 − c, l = m, m + 1, …, L − 2, (53)
ξ_{L−1} = (1/2)[λ_A + λ_C − √((λ_A − λ_C)² + 4b²mn)], (54)
ξ_L = (1/2)[λ_A + λ_C + √((λ_A − λ_C)² + 4b²mn)], (55)

where λ_A and λ_C are the dominant eigenvalues of the matrices A and C, respectively, that is,

λ_A = a_0 + a(m − 1), (56)
λ_C = c_0 + c(n − 1). (57)

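Lemma 1 is straightforward to confirm numerically; the following sketch (helper names are ours) builds the equicorrelated matrix and checks the claimed eigenvalues and eigenvectors:

```python
import numpy as np

def lemma1_matrix(L, beta, kappa):
    """K with beta on the diagonal and beta*kappa off the diagonal."""
    return beta * ((1.0 - kappa) * np.eye(L) + kappa * np.ones((L, L)))

def lemma1_eigs(L, beta, kappa):
    """Eigenvalues claimed by Lemma 1: beta(1 + kappa(L-1)) once,
    beta(1 - kappa) with multiplicity L - 1."""
    return [beta * (1.0 + kappa * (L - 1))] + [beta * (1.0 - kappa)] * (L - 1)

def lemma1_vector(L, l):
    """Eigenvector x_l = (1_{l-1}, -(l-1), 0_{L-l}) for l = 2, ..., L."""
    x = np.zeros(L)
    x[: l - 1] = 1.0
    x[l - 1] = -(l - 1.0)
    return x
```

For example, with β = 2, κ = 1/2, and L = 5 the spectrum is {6, 1, 1, 1, 1}, and K x_l = β(1 − κ) x_l for l = 2, …, L.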
Furthermore, the corresponding eigenvectors of the matrix K are

x_l = (1_l, −l, 0_{L−l−1}), l = 1, 2, …, m − 1,
x_l = (0_m, 1_{l−m}, −(l − m), 0_{L−l−1}), l = m, m + 1, …, L − 2,
x_{L−1} = (bn 1_m, (ξ_{L−1} − λ_A) 1_n),
x_L = (bn 1_m, (ξ_L − λ_A) 1_n).

The proofs of Lemmas 1 and 2 are given in the Appendix. Now we are ready to state the following two theorems, which we use to obtain upper bounds on the list error probability P_el(Γ), for the worst-case list configuration, for even and odd minimum distance, respectively.

Theorem 3: Let t be a Gaussian random vector of length L with zero mean and covariance matrix K as in Lemma 1, and let α = α 1_L be a vector of L constant values α. Then the probability Pr(t ≥ α) can be upper-bounded by

Pr(t ≥ α) ≤ ∫_{αL/σ_1}^∞ f(y) Π_{l=2}^{L} [ ∫_{v_l(y)}^{u_l(y)} f(x) dx ] dy, (58)

with equality for L = 2. The integration limits are given by

u_l(y) = (y σ_1 − αL)/σ_l, v_l(y) = −(l − 1) u_l(y),

where σ_1² = Lλ_1, with λ_1 given in Lemma 1, and σ_l² = λ_l l(l − 1), l = 2, 3, …, L, with λ_l given in Lemma 1. Hereinafter, f(x) and f(y) denote the Gaussian N(0, 1) probability density function.

Theorem 4: Let t be a Gaussian random vector of length L with zero mean and covariance matrix K given by (48) from Lemma 2, and let α = (α 1_m, η 1_n) be a vector containing m constant values α and n = L − m constant values η. Then the probability Pr(t ≥ α) can be upper-bounded by

Pr(t ≥ α) ≤ ∫_{φ(α,η)/σ_L}^∞ f(y) [ ∫_{g(y)}^{h(y)} f(x) dx ] Π_{l=1}^{m−1} [ ∫_{v_l(y)}^{u_l(y)} f(x) dx ] Π_{l=m}^{L−2} [ ∫_{w_l(y)}^{z_l(y)} f(x) dx ] dy. (59)

The expressions for the integration limits in the above formula are

φ(α, η) = n(bmα + η(ξ_L − λ_A)),
g(y) = [y σ_L (ξ_{L−1} − λ_A) + bmnα(ξ_L − ξ_{L−1})] / [(ξ_L − λ_A) σ_{L−1}],
h(y) = [y σ_L − nη(ξ_L − ξ_{L−1})] / σ_{L−1},
u_l(y) = (y σ_L − φ(α, η)) / (bn σ_l),
v_l(y) = −l u_l(y),
z_l(y) = (y σ_L − φ(α, η)) / ((ξ_L − λ_A) σ_l),
w_l(y) = −(l − m) z_l(y),

where the values of σ_l, l = 1, 2, …, L, are defined as follows:

σ_l² = (a_0 − a) l(l + 1) for 1 ≤ l ≤ m − 1,
σ_l² = (c_0 − c)(l − m)(l − m + 1) for m ≤ l ≤ L − 2,
σ_{L−1}² = ξ_{L−1}(b²n²m + n(ξ_{L−1} − λ_A)²),
σ_L² = ξ_L(b²n²m + n(ξ_L − λ_A)²),

and where λ_A, λ_C, ξ_{L−1}, and ξ_L are given by (56), (57), (54), and (55), respectively. Proofs of Theorems 3 and 4 are given in the Appendix.

Consider now, for example, the worst-case list configuration for a code with even minimum Hamming distance d_Hmin. The corresponding matrix Γ is specified in Statement 4 of Theorem 2, and the Hamming list distance is d_HLmin = (2L/(L + 1)) d_Hmin. The list error probability for the given list is

P_el(Γ) = Pr(t ≥ γ/2) = Pr(t ≥ 2E_s d_Hmin 1_L).

To upper-bound the probability Pr(t ≥ 2E_s d_Hmin 1_L) we apply Theorem 3 with α = 2E_s d_Hmin. The integration limits in the bound (58) from Theorem 3 depend on the eigenvalues of the covariance matrix of t, that is, K = ΓN_0/2. We determine them by applying Lemma 1 with β = 2E_s d_Hmin N_0 and κ = 1/2. Thus we obtain that the largest eigenvalue of K is λ_1 = E_s d_Hmin (L + 1) N_0, while the other L − 1 eigenvalues are equal to λ_l = E_s d_Hmin N_0. By substituting these values into (58) we obtain the following bound on the error probability P_el(Γ):

P_el(Γ) ≤ ∫_{√(2 d_HLmin E_s/N_0)}^∞ f(y) Π_{l=2}^{L} [ ∫_{v_l(y)}^{u_l(y)} f(x) dx ] dy
= ∫_{√(2 d_HLmin E_s/N_0)}^∞ f(y) Π_{l=2}^{L} [ Q(v_l(y)) − Q(u_l(y)) ] dy, (60)

where

u_l(y) = √(L(L + 1)/(l(l − 1))) (y − √(2 d_HLmin E_s/N_0)),
v_l(y) = −(l − 1) u_l(y), l = 2, 3, …, L.

For L = 2, bound (60) holds with equality, and it is illustrated in Figure 2 as the probability that the received signal is in the

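To get a feel for how the bounds compare, here is a small self-contained sketch (our own illustration, not code from the paper). It evaluates the Q-function bound and the Chernoff-type bound for given d_HL and E_s/N_0, and estimates the actual list error rate by Monte Carlo for the equilateral L = 2 configuration of Figure 2, for which R²_L = d²_E/3:

```python
import math
import random

def q_func(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def list_error_bounds(d_hl, es_n0):
    """Q-function bound Q(sqrt(2 d_HL Es/N0)) and the Chernoff-type
    bound exp(-d_HL Es/N0), for a given Hamming list distance d_HL."""
    arg = 2.0 * d_hl * es_n0
    return q_func(math.sqrt(arg)), math.exp(-arg / 2.0)

def mc_list_error_l2(d, n0, trials=200_000, seed=1):
    """Monte-Carlo list error rate for the L = 2 worst case: three
    signals at equal pairwise Euclidean distance d, s0 transmitted,
    and a list error when the received point is closer to both s1
    and s2 than to s0 (AWGN, variance N0/2 per dimension)."""
    rng = random.Random(seed)
    s1 = (d, 0.0)
    s2 = (d / 2.0, d * math.sqrt(3.0) / 2.0)
    sigma = math.sqrt(n0 / 2.0)
    errors = 0
    for _ in range(trials):
        x, y = rng.gauss(0.0, sigma), rng.gauss(0.0, sigma)
        r2_0 = x * x + y * y
        if (x - s1[0]) ** 2 + (y - s1[1]) ** 2 <= r2_0 and \
           (x - s2[0]) ** 2 + (y - s2[1]) ** 2 <= r2_0:
            errors += 1
    return errors / trials
```

Since Q(x) ≤ exp(−x²/2), the Q-function bound is always at least as tight as the Chernoff-type bound, and the simulated L = 2 error rate stays below the half-plane bound Q(R_L/√(N_0/2)).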

More information

Graphs with given diameter maximizing the spectral radius van Dam, Edwin

Graphs with given diameter maximizing the spectral radius van Dam, Edwin Tilburg University Graphs with given diameter maximizing the spectral radius van Dam, Edwin Published in: Linear Algebra and its Applications Publication date: 2007 Link to publication Citation for published

More information

The Hamming Codes and Delsarte s Linear Programming Bound

The Hamming Codes and Delsarte s Linear Programming Bound The Hamming Codes and Delsarte s Linear Programming Bound by Sky McKinley Under the Astute Tutelage of Professor John S. Caughman, IV A thesis submitted in partial fulfillment of the requirements for the

More information

Lower Bounds on the Graphical Complexity of Finite-Length LDPC Codes

Lower Bounds on the Graphical Complexity of Finite-Length LDPC Codes Lower Bounds on the Graphical Complexity of Finite-Length LDPC Codes Igal Sason Department of Electrical Engineering Technion - Israel Institute of Technology Haifa 32000, Israel 2009 IEEE International

More information

Chapter 7: Channel coding:convolutional codes

Chapter 7: Channel coding:convolutional codes Chapter 7: : Convolutional codes University of Limoges meghdadi@ensil.unilim.fr Reference : Digital communications by John Proakis; Wireless communication by Andreas Goldsmith Encoder representation Communication

More information

Optimum Soft Decision Decoding of Linear Block Codes

Optimum Soft Decision Decoding of Linear Block Codes Optimum Soft Decision Decoding of Linear Block Codes {m i } Channel encoder C=(C n-1,,c 0 ) BPSK S(t) (n,k,d) linear modulator block code Optimal receiver AWGN Assume that [n,k,d] linear block code C is

More information

Arrangements, matroids and codes

Arrangements, matroids and codes Arrangements, matroids and codes first lecture Ruud Pellikaan joint work with Relinde Jurrius ACAGM summer school Leuven Belgium, 18 July 2011 References 2/43 1. Codes, arrangements and matroids by Relinde

More information

Trellis-based Detection Techniques

Trellis-based Detection Techniques Chapter 2 Trellis-based Detection Techniques 2.1 Introduction In this chapter, we provide the reader with a brief introduction to the main detection techniques which will be relevant for the low-density

More information

Root systems and optimal block designs

Root systems and optimal block designs Root systems and optimal block designs Peter J. Cameron School of Mathematical Sciences Queen Mary, University of London Mile End Road London E1 4NS, UK p.j.cameron@qmul.ac.uk Abstract Motivated by a question

More information

Maximum Achievable Diversity for MIMO-OFDM Systems with Arbitrary. Spatial Correlation

Maximum Achievable Diversity for MIMO-OFDM Systems with Arbitrary. Spatial Correlation Maximum Achievable Diversity for MIMO-OFDM Systems with Arbitrary Spatial Correlation Ahmed K Sadek, Weifeng Su, and K J Ray Liu Department of Electrical and Computer Engineering, and Institute for Systems

More information

Decoding the Tail-Biting Convolutional Codes with Pre-Decoding Circular Shift

Decoding the Tail-Biting Convolutional Codes with Pre-Decoding Circular Shift Decoding the Tail-Biting Convolutional Codes with Pre-Decoding Circular Shift Ching-Yao Su Directed by: Prof. Po-Ning Chen Department of Communications Engineering, National Chiao-Tung University July

More information

Support weight enumerators and coset weight distributions of isodual codes

Support weight enumerators and coset weight distributions of isodual codes Support weight enumerators and coset weight distributions of isodual codes Olgica Milenkovic Department of Electrical and Computer Engineering University of Colorado, Boulder March 31, 2003 Abstract In

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

PSK bit mappings with good minimax error probability

PSK bit mappings with good minimax error probability PSK bit mappings with good minimax error probability Erik Agrell Department of Signals and Systems Chalmers University of Technology 4196 Göteborg, Sweden Email: agrell@chalmers.se Erik G. Ström Department

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

16.36 Communication Systems Engineering

16.36 Communication Systems Engineering MIT OpenCourseWare http://ocw.mit.edu 16.36 Communication Systems Engineering Spring 2009 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 16.36: Communication

More information

Irredundant Families of Subcubes

Irredundant Families of Subcubes Irredundant Families of Subcubes David Ellis January 2010 Abstract We consider the problem of finding the maximum possible size of a family of -dimensional subcubes of the n-cube {0, 1} n, none of which

More information

4488 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 10, OCTOBER /$ IEEE

4488 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 10, OCTOBER /$ IEEE 4488 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 10, OCTOBER 2008 List Decoding of Biorthogonal Codes the Hadamard Transform With Linear Complexity Ilya Dumer, Fellow, IEEE, Grigory Kabatiansky,

More information

Lecture 4 : Introduction to Low-density Parity-check Codes

Lecture 4 : Introduction to Low-density Parity-check Codes Lecture 4 : Introduction to Low-density Parity-check Codes LDPC codes are a class of linear block codes with implementable decoders, which provide near-capacity performance. History: 1. LDPC codes were

More information

Bounds on Mutual Information for Simple Codes Using Information Combining

Bounds on Mutual Information for Simple Codes Using Information Combining ACCEPTED FOR PUBLICATION IN ANNALS OF TELECOMM., SPECIAL ISSUE 3RD INT. SYMP. TURBO CODES, 003. FINAL VERSION, AUGUST 004. Bounds on Mutual Information for Simple Codes Using Information Combining Ingmar

More information

Efficient Computation of the Pareto Boundary for the Two-User MISO Interference Channel with Multi-User Decoding Capable Receivers

Efficient Computation of the Pareto Boundary for the Two-User MISO Interference Channel with Multi-User Decoding Capable Receivers Efficient Computation of the Pareto Boundary for the Two-User MISO Interference Channel with Multi-User Decoding Capable Receivers Johannes Lindblom, Eleftherios Karipidis and Erik G. Larsson Linköping

More information

One Lesson of Information Theory

One Lesson of Information Theory Institut für One Lesson of Information Theory Prof. Dr.-Ing. Volker Kühn Institute of Communications Engineering University of Rostock, Germany Email: volker.kuehn@uni-rostock.de http://www.int.uni-rostock.de/

More information

Orthogonal Arrays & Codes

Orthogonal Arrays & Codes Orthogonal Arrays & Codes Orthogonal Arrays - Redux An orthogonal array of strength t, a t-(v,k,λ)-oa, is a λv t x k array of v symbols, such that in any t columns of the array every one of the possible

More information

October 25, 2013 INNER PRODUCT SPACES

October 25, 2013 INNER PRODUCT SPACES October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal

More information

GEORGIA INSTITUTE OF TECHNOLOGY SCHOOL OF ELECTRICAL AND COMPUTER ENGINEERING Final Examination - Fall 2015 EE 4601: Communication Systems

GEORGIA INSTITUTE OF TECHNOLOGY SCHOOL OF ELECTRICAL AND COMPUTER ENGINEERING Final Examination - Fall 2015 EE 4601: Communication Systems GEORGIA INSTITUTE OF TECHNOLOGY SCHOOL OF ELECTRICAL AND COMPUTER ENGINEERING Final Examination - Fall 2015 EE 4601: Communication Systems Aids Allowed: 2 8 1/2 X11 crib sheets, calculator DATE: Tuesday

More information

Linear Algebra I. Ronald van Luijk, 2015

Linear Algebra I. Ronald van Luijk, 2015 Linear Algebra I Ronald van Luijk, 2015 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents Dependencies among sections 3 Chapter 1. Euclidean space: lines and hyperplanes 5 1.1. Definition

More information

Introduction to Low-Density Parity Check Codes. Brian Kurkoski

Introduction to Low-Density Parity Check Codes. Brian Kurkoski Introduction to Low-Density Parity Check Codes Brian Kurkoski kurkoski@ice.uec.ac.jp Outline: Low Density Parity Check Codes Review block codes History Low Density Parity Check Codes Gallager s LDPC code

More information

Full-State Feedback Design for a Multi-Input System

Full-State Feedback Design for a Multi-Input System Full-State Feedback Design for a Multi-Input System A. Introduction The open-loop system is described by the following state space model. x(t) = Ax(t)+Bu(t), y(t) =Cx(t)+Du(t) () 4 8.5 A =, B =.5.5, C

More information

Lattices and Lattice Codes

Lattices and Lattice Codes Lattices and Lattice Codes Trivandrum School on Communication, Coding & Networking January 27 30, 2017 Lakshmi Prasad Natarajan Dept. of Electrical Engineering Indian Institute of Technology Hyderabad

More information

Tilburg University. Strongly Regular Graphs with Maximal Energy Haemers, W. H. Publication date: Link to publication

Tilburg University. Strongly Regular Graphs with Maximal Energy Haemers, W. H. Publication date: Link to publication Tilburg University Strongly Regular Graphs with Maximal Energy Haemers, W. H. Publication date: 2007 Link to publication Citation for published version (APA): Haemers, W. H. (2007). Strongly Regular Graphs

More information

A Simple Example Binary Hypothesis Testing Optimal Receiver Frontend M-ary Signal Sets Message Sequences. possible signals has been transmitted.

A Simple Example Binary Hypothesis Testing Optimal Receiver Frontend M-ary Signal Sets Message Sequences. possible signals has been transmitted. Introduction I We have focused on the problem of deciding which of two possible signals has been transmitted. I Binary Signal Sets I We will generalize the design of optimum (MPE) receivers to signal sets

More information

Shannon meets Wiener II: On MMSE estimation in successive decoding schemes

Shannon meets Wiener II: On MMSE estimation in successive decoding schemes Shannon meets Wiener II: On MMSE estimation in successive decoding schemes G. David Forney, Jr. MIT Cambridge, MA 0239 USA forneyd@comcast.net Abstract We continue to discuss why MMSE estimation arises

More information

UC Riverside UC Riverside Previously Published Works

UC Riverside UC Riverside Previously Published Works UC Riverside UC Riverside Previously Published Works Title Soft-decision decoding of Reed-Muller codes: A simplied algorithm Permalink https://escholarship.org/uc/item/5v71z6zr Journal IEEE Transactions

More information

The E8 Lattice and Error Correction in Multi-Level Flash Memory

The E8 Lattice and Error Correction in Multi-Level Flash Memory The E8 Lattice and Error Correction in Multi-Level Flash Memory Brian M Kurkoski University of Electro-Communications Tokyo, Japan kurkoski@iceuecacjp Abstract A construction using the E8 lattice and Reed-Solomon

More information

Transmuted distributions and extrema of random number of variables

Transmuted distributions and extrema of random number of variables Transmuted distributions and extrema of random number of variables Kozubowski, Tomasz J.; Podgórski, Krzysztof Published: 2016-01-01 Link to publication Citation for published version (APA): Kozubowski,

More information

On Two Probabilistic Decoding Algorithms for Binary Linear Codes

On Two Probabilistic Decoding Algorithms for Binary Linear Codes On Two Probabilistic Decoding Algorithms for Binary Linear Codes Miodrag Živković Abstract A generalization of Sullivan inequality on the ratio of the probability of a linear code to that of any of its

More information

Vector spaces. EE 387, Notes 8, Handout #12

Vector spaces. EE 387, Notes 8, Handout #12 Vector spaces EE 387, Notes 8, Handout #12 A vector space V of vectors over a field F of scalars is a set with a binary operator + on V and a scalar-vector product satisfying these axioms: 1. (V, +) is

More information

only nite eigenvalues. This is an extension of earlier results from [2]. Then we concentrate on the Riccati equation appearing in H 2 and linear quadr

only nite eigenvalues. This is an extension of earlier results from [2]. Then we concentrate on the Riccati equation appearing in H 2 and linear quadr The discrete algebraic Riccati equation and linear matrix inequality nton. Stoorvogel y Department of Mathematics and Computing Science Eindhoven Univ. of Technology P.O. ox 53, 56 M Eindhoven The Netherlands

More information

Error Floors of LDPC Coded BICM

Error Floors of LDPC Coded BICM Electrical and Computer Engineering Conference Papers, Posters and Presentations Electrical and Computer Engineering 2007 Error Floors of LDPC Coded BICM Aditya Ramamoorthy Iowa State University, adityar@iastate.edu

More information

Decoding of LDPC codes with binary vector messages and scalable complexity

Decoding of LDPC codes with binary vector messages and scalable complexity Downloaded from vbn.aau.dk on: marts 7, 019 Aalborg Universitet Decoding of LDPC codes with binary vector messages and scalable complexity Lechner, Gottfried; Land, Ingmar; Rasmussen, Lars Published in:

More information

Lecture 7 MIMO Communica2ons

Lecture 7 MIMO Communica2ons Wireless Communications Lecture 7 MIMO Communica2ons Prof. Chun-Hung Liu Dept. of Electrical and Computer Engineering National Chiao Tung University Fall 2014 1 Outline MIMO Communications (Chapter 10

More information

Performance Analysis and Interleaver Structure Optimization for Short-Frame BICM-OFDM Systems

Performance Analysis and Interleaver Structure Optimization for Short-Frame BICM-OFDM Systems 1 Performance Analysis and Interleaver Structure Optimization for Short-Frame BICM-OFDM Systems Yuta Hori, Student Member, IEEE, and Hideki Ochiai, Member, IEEE Abstract Bit-interleaved coded modulation

More information

Lecture 4: Proof of Shannon s theorem and an explicit code

Lecture 4: Proof of Shannon s theorem and an explicit code CSE 533: Error-Correcting Codes (Autumn 006 Lecture 4: Proof of Shannon s theorem and an explicit code October 11, 006 Lecturer: Venkatesan Guruswami Scribe: Atri Rudra 1 Overview Last lecture we stated

More information

Published in: Proceedings of the 21st Symposium on Mathematical Theory of Networks and Systems

Published in: Proceedings of the 21st Symposium on Mathematical Theory of Networks and Systems Aalborg Universitet Affine variety codes are better than their reputation Geil, Hans Olav; Martin, Stefano Published in: Proceedings of the 21st Symposium on Mathematical Theory of Networks and Systems

More information

Computing Probability of Symbol Error

Computing Probability of Symbol Error Computing Probability of Symbol Error I When decision boundaries intersect at right angles, then it is possible to compute the error probability exactly in closed form. I The result will be in terms of

More information

Using Laplacian Eigenvalues and Eigenvectors in the Analysis of Frequency Assignment Problems

Using Laplacian Eigenvalues and Eigenvectors in the Analysis of Frequency Assignment Problems Using Laplacian Eigenvalues and Eigenvectors in the Analysis of Frequency Assignment Problems Jan van den Heuvel and Snežana Pejić Department of Mathematics London School of Economics Houghton Street,

More information

CALCULUS ON MANIFOLDS. 1. Riemannian manifolds Recall that for any smooth manifold M, dim M = n, the union T M =

CALCULUS ON MANIFOLDS. 1. Riemannian manifolds Recall that for any smooth manifold M, dim M = n, the union T M = CALCULUS ON MANIFOLDS 1. Riemannian manifolds Recall that for any smooth manifold M, dim M = n, the union T M = a M T am, called the tangent bundle, is itself a smooth manifold, dim T M = 2n. Example 1.

More information

Reed-Solomon codes. Chapter Linear codes over finite fields

Reed-Solomon codes. Chapter Linear codes over finite fields Chapter 8 Reed-Solomon codes In the previous chapter we discussed the properties of finite fields, and showed that there exists an essentially unique finite field F q with q = p m elements for any prime

More information

Soft-Decision Decoding Using Punctured Codes

Soft-Decision Decoding Using Punctured Codes IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 47, NO 1, JANUARY 2001 59 Soft-Decision Decoding Using Punctured Codes Ilya Dumer, Member, IEEE Abstract Let a -ary linear ( )-code be used over a memoryless

More information

Appendix B Information theory from first principles

Appendix B Information theory from first principles Appendix B Information theory from first principles This appendix discusses the information theory behind the capacity expressions used in the book. Section 8.3.4 is the only part of the book that supposes

More information

Characterization of Convex and Concave Resource Allocation Problems in Interference Coupled Wireless Systems

Characterization of Convex and Concave Resource Allocation Problems in Interference Coupled Wireless Systems 2382 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 59, NO 5, MAY 2011 Characterization of Convex and Concave Resource Allocation Problems in Interference Coupled Wireless Systems Holger Boche, Fellow, IEEE,

More information

A lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo

A lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo A lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo A. E. Brouwer & W. H. Haemers 2008-02-28 Abstract We show that if µ j is the j-th largest Laplacian eigenvalue, and d

More information

Modulation & Coding for the Gaussian Channel

Modulation & Coding for the Gaussian Channel Modulation & Coding for the Gaussian Channel Trivandrum School on Communication, Coding & Networking January 27 30, 2017 Lakshmi Prasad Natarajan Dept. of Electrical Engineering Indian Institute of Technology

More information

IN THE last several years, there has been considerable

IN THE last several years, there has been considerable IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 48, NO. 8, AUGUST 2002 2291 Unitary Signal Constellations Differential Space Time Modulation With Two Transmit Antennas: Parametric Codes, Optimal Designs,

More information

IN this paper, we will introduce a new class of codes,

IN this paper, we will introduce a new class of codes, IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 44, NO 5, SEPTEMBER 1998 1861 Subspace Subcodes of Reed Solomon Codes Masayuki Hattori, Member, IEEE, Robert J McEliece, Fellow, IEEE, and Gustave Solomon,

More information

1 Basic Combinatorics

1 Basic Combinatorics 1 Basic Combinatorics 1.1 Sets and sequences Sets. A set is an unordered collection of distinct objects. The objects are called elements of the set. We use braces to denote a set, for example, the set

More information