Soft-Decision Majority Decoding of Reed–Muller Codes


IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 46, NO. 1, JANUARY 2000

Soft-Decision Majority Decoding of Reed–Muller Codes

Ilya Dumer, Member, IEEE, and Rafail Krichevskiy

Abstract—We present a new soft-decision majority decoding algorithm for Reed–Muller codes RM$(r,m)$. First, the reliabilities of the $2^m$ transmitted symbols are recalculated into the reliabilities of the $2^{m-r}$ parity checks that represent each information bit. In turn, information bits are obtained by a weighted majority that gives more weight to more reliable parity checks. It is proven that for long low-rate codes RM$(r,m)$, our soft-decision algorithm outperforms its conventional hard-decision counterpart by $10\log_{10}(\pi/2) \approx 2$ dB at any given output error probability. For fixed code rate and $m \to \infty$, our algorithm increases almost $2^{m/4}$ times the correcting capability of soft-decision bounded-distance decoding.

Index Terms—Gaussian channel, majority decoding, parity checks, reliabilities.

I. INTRODUCTION

Reed–Muller (RM) codes RM$(r,m)$ [10] have length $n = 2^m$, dimension $k = \sum_{i=0}^{r}\binom{m}{i}$, and code distance $d = 2^{m-r}$. Their majority decoding was introduced in the seminal paper [13] and followed by numerous developments (see [1], [5], [7], [11], [12], [14], [17], and references therein). The algorithm provides bounded-distance decoding with complexity order of $nk$ or less. It is also known [8] that majority decoding corrects many error patterns beyond the weight $\lfloor (d-1)/2 \rfloor$. Namely, for fixed order $r$ it does so for most error vectors of Hamming weight $n(1-\varepsilon)/2$, where the residual term $\varepsilon$ has vanishing order $(m/d)^{1/2}$ as $m \to \infty$. For RM codes of fixed rate $R$, we correct most error patterns of weight up to $(d \ln d)/4$.

A number of efficient decoding schemes were designed for RM codes in the past decade. The newly developed recursive algorithms provide bounded-distance decoding with low complexity order of $n\min(r, m-r)$ on both hard-decision [9] and soft-decision channels [16]. Simulation results [16] also showed that recursive soft-decision algorithms can significantly surpass bounded-distance decoding. Another efficient algorithm [15] gives a slightly higher complexity order of $n^2$ for codes RM$(2,m)$ while correcting most error patterns of higher weight $n(1-\varepsilon)/2$, where $\varepsilon$ has order of $(m/n)^{1/4}$ as $m \to \infty$. Finally, algorithm [3] considers multistage maximum-likelihood decoding by using an efficient trellis structure. Yet, its complexity is an exponent in $n$.

In this correspondence, we develop majority decoding for RM codes used over additive white Gaussian noise (AWGN) channels. Consider a channel with white Gaussian noise $N(0, \sigma^2)$ and the probability density function

    g(u) = e^{-u^2/2\sigma^2} / (\sigma\sqrt{2\pi}).   (1)

The two symbols 0 and 1 are transmitted as $+1$ and $-1$. These two take arbitrary real values $u$ at the receiver end with probability densities $g(u-1)$ and $g(u+1)$, respectively. In hard-decision decoding, the transmitted symbols $\pm 1$ are interchanged with transition error probability

    p(\sigma) = Q(1/\sigma) := \int_{1/\sigma}^{\infty} e^{-u^2/2}\, du / \sqrt{2\pi}.   (2)

By contrast, in soft-decision decoding, the received signals $u$ are not rounded up to $\pm 1$. We wish to process further the likelihoods $p(0\,|\,u)$ and $p(1\,|\,u)$ of the transmitted symbols while keeping the complexity $O(nk)$ of majority schemes. More specifically, the following questions arise. Can these likelihoods improve the performance of majority decoding? How much can we reduce the possible S/N ratios? How many more hard-decision errors can we correct? The idea of our algorithm is as follows.
Each information symbol of order $r$ can be found from $2^{m-r}$ independent parity checks defined over disjoint subsets of $2^r$ code symbols. The simple majority of these checks is taken in hard-decision decoding. By contrast, in soft-decision decoding we use a weighted majority. First, we recalculate the initial reliabilities of the $2^r$ transmitted symbols into the reliability of the corresponding parity check. Second, the majority voting scheme accumulates all $2^{m-r}$ parity checks and gives more weight to the more reliable ones.

To estimate performance of a given code RM$(r,m)$, we fix an output bit-error rate $\varepsilon < 1/2$. Then we compare the maximum noise powers $\sigma_h^2(\varepsilon)$ and $\sigma_s^2(\varepsilon)$ that support transmission at this rate $\varepsilon$ in hard- and soft-decision decoding, respectively. Our main theoretical result is that $\sigma_s^2(\varepsilon)/\sigma_h^2(\varepsilon) \to \pi/2$ for any sequence RM$(r,m)$ with $m \to \infty$ and fixed order $r$, regardless of the error rate $\varepsilon$. In other words, soft-decision decoding gains $10\log_{10}(\pi/2) \approx 2.0$ dB over the conventional majority scheme for all long low-rate RM codes at any output error rate $\varepsilon$.

For fixed code rate $R$, the situation is different.¹ It turns out that both soft-decision decoding and its conventional counterpart require similar signal-to-noise ratios. However, even in this case we increase $4/\pi$ times the transition error probability supported by our decoding. The improvement is even more significant when compared with soft-decision bounded-distance decoding. We show that our algorithm corrects most error patterns of Euclidean weight $\sqrt{n/(m\ln 2)}$, that is, about $2^{r/2}$ times more than the decoding capacity $\sqrt{d}$ guaranteed by bounded-distance decoding and by the recursive algorithm of [16].

From the practical standpoint, we develop an analytical technique that gives explicit a posteriori probabilities for parity checks given the reliabilities of transmitted symbols. This allows us to obtain tight numerical bounds on the output error rate for any RM$(r,m)$ code. When these bounds were checked for some codes against simulation results, both turned out to be almost identical.

The material of this correspondence is organized as follows. In Section II, we consider the conventional majority algorithm. Here we revise the proof of [8] using a probabilistic approach. In Section III, we generalize this technique for soft-decision majority decoding. Then we study decoding performance for codes RM$(r,m)$ in Section IV. In Section V, we first discuss the waterfall-like behavior of the output error probability. Then we show how the algorithm performs on channels of varying quality. Finally, we discuss complexity issues and performance for short codes.

¹ Given any rate $0 < R < 1$, we have the asymptotic equality $r/m \to 1/2$ as $m \to \infty$.

Manuscript received September 8, 1998; revised July 5, 1999. This work was supported by the NSF under Grant NCR-973844. The authors are with the College of Engineering, University of California, Riverside, CA 92521 USA (e-mail: {dumer}{rafail}@ee.ucr.edu). Communicated by I. F. Blake, Associate Editor for Coding Theory.
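As a quick illustration of the channel model in (1)–(2) and of what a soft decision preserves, the following minimal Python sketch checks the hard-decision transition probability $p(\sigma) = Q(1/\sigma)$ by Monte Carlo and evaluates the symbol posterior implied by (10) below for one received value. The numeric parameters are illustrative choices, not values from the paper.

```python
import math, random

def Q(x):
    # Gaussian tail function Q(x) = P(N(0,1) > x), used throughout the paper.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def hard_decision_error_rate(sigma, trials=200_000, seed=1):
    # Symbol 0 is sent as +1; a hard decision errs when the received value u < 0.
    rng = random.Random(seed)
    errors = sum(1 for _ in range(trials) if 1.0 + rng.gauss(0.0, sigma) < 0.0)
    return errors / trials

sigma = 0.8
print("Q(1/sigma)    :", Q(1.0 / sigma))            # transition probability (2)
print("Monte Carlo p :", hard_decision_error_rate(sigma))

# A soft decision keeps the posterior odds instead of rounding u to +/-1:
u = 0.3                                              # an example received value
llr = 2.0 * u / sigma**2                             # log[ p(0|u) / p(1|u) ] for the model (1)
print("posterior of 0:", 1.0 / (1.0 + math.exp(-llr)))
```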

II. BACKGROUND: MAJORITY DECODING

First, we briefly introduce Reed–Muller codes RM$(r,m)$ and their majority decoding. Represent each position $x = 0, \ldots, 2^m - 1$ as a point $(x_1, \ldots, x_m)$ on the binary cube $E_2^m$. Then we consider a Boolean polynomial

    f(x) = \sum a_{i_1,\ldots,i_s}\, x_{i_1}\cdots x_{i_s}

of degree $r$ or less. Here the sum includes monomials of degree $s \le r$ defined over indices $1 \le i_1 < \cdots < i_s \le m$. When positions $x$ run through $E_2^m$, we obtain a binary vector $f(x)$ of length $2^m$. The Reed–Muller code RM$(r,m)$ is the set of vectors $\{f(x)\}$ obtained from the polynomials of degree $r$ or less. The binary coefficients $a_{i_1,\ldots,i_s}$ give $\binom{m}{s}$ information bits of order $s$.

Majority decoding first finds all coefficients $a_{i_1,\ldots,i_r}$ of the highest order $r$. These are found in a similar way; therefore, without loss of generality we consider one coefficient, say, $a := a_{1,\ldots,r}$. Then define an $r$-face $X_j \subset E_2^m$. This is an affine flat $\{(x_1,\ldots,x_m)\}$ of size $2^r$ spun over the first $r$ variables $x_1,\ldots,x_r$, while the whole subvector $j = (x_{r+1},\ldots,x_m)$ is fixed in one of $2^{m-r}$ different ways. Similarly to positions $x$, we write $j = 0,\ldots,2^{m-r}-1$ and consider all $2^{m-r}$ disjoint faces $X_j$. From the definition of the codes RM$(r,m)$, one can readily verify that

    a = \sum_{x \in X_j} f(x).   (3)

Now we see that $a$ can be obtained from $2^{m-r}$ disjoint faces $X_j$. The decoder defines $a$ by taking the majority of the $2^{m-r}$ parity checks (3). Obviously, (3) or something akin to it can be applied to any information symbol. In turn, this implies that we can correct any error pattern of weight $2^{m-r-1}-1$ or less. After all coefficients of order $r$ are found, majority decoding proceeds with order $r-1$, in which case $2^{m-r}-1$ errors can be corrected.

For each symbol $a$, we need $n$ operations to calculate all $2^{m-r}$ parity checks (3) and find their majority. Thus decoding complexity has order $nk$ even if each new calculation (3) does not use the previous results. Also, majority decoding does not need any computational overhead to retrieve the information content from a corrected codeword. This compares favorably with most decoding algorithms.

Below we consider long codes RM$(r,m)$ with growing distance $d = 2^{m-r}$ used over a binary symmetric channel (BSC) with transition error probability $p$. Let $q = 1-p$ and $y = 1-2p$. Given $y$, we define the parameter

    \lambda_h = 2^{(m-r)/2}\, y^{2^r} \big/ \sqrt{1 - y^{2^{r+1}}}.   (4)

We suppose that $\lambda_h = o(d^{1/6})$ as $d \to \infty$. Later we will see that such a restriction affects neither the asymptotic results nor the applications for short lengths. The following lemma uses general probabilistic arguments and serves as a benchmark for further comparison.

Lemma 1: For long codes RM$(r,m)$ with $(m-r) \to \infty$, majority decoding retrieves any information bit of order $r$ with asymptotic bit error rate

    Q(\lambda_h) = \int_{\lambda_h}^{\infty} e^{-u^2/2}\, du / \sqrt{2\pi}.

Proof: For any information bit $a$, the output error probability does not depend on the transmitted codeword. Below we assume that $f = 0$ is transmitted, and its corrupted version $u$ is received. Let $f_x$ and $u_x$ denote the corresponding symbols in position $x$. Given $j$, let $P$ and $Q = 1-P$ be the probabilities that the $j$th check $\sum_{x \in X_j} u_x$ gives the values $\hat a_j = 1$ and $\hat a_j = 0$, respectively. Obviously, $\hat a_j = 1$ if the check includes an odd number $s$ of perturbed symbols. Hence

    P = \sum_{s\ \mathrm{odd}} \binom{2^r}{s} p^s q^{2^r - s}, \qquad Q = \sum_{s\ \mathrm{even}} \binom{2^r}{s} p^s q^{2^r - s}.   (5)

Similarly to [4], we expand the expression $(q-p)^{2^r}$ and see that $Q - P = (q-p)^{2^r}$. Then

    P = (1 - y^{2^r})/2, \qquad Q = (1 + y^{2^r})/2.   (6)

Now for any parity check $j = 1, \ldots, d$ we use the random variable

    \xi_j = +1 \text{ if } \hat a_j = 0, \qquad \xi_j = -1 \text{ if } \hat a_j = 1.   (7)

Then $\xi_j$ has expectation $E\xi_j = Q - P = y^{2^r}$ and variance $D\xi_j = 1 - y^{2^{r+1}}$. Majority voting makes the wrong decision $a = 1$ if $\sum_j \xi_j < 0$. Note that different $\xi_j$ are obtained from disjoint subsets $X_j$ and are, therefore, independent. So the sum of $d \to \infty$ variables $\xi_j$ is normally distributed with expectation $2^{m-r} E\xi_j$ and variance $2^{m-r} D\xi_j$. To get $\sum_j \xi_j < 0$, we need $2^{(m-r)/2} E\xi_j/\sqrt{D\xi_j}$ standard deviations, which equals (4). According to [2, p. 193], even the large deviation $P(\sum_j \xi_j < 0)$ converges to $Q(\lambda_h)$ whenever $\lambda_h = o(d^{1/6})$ as $d \to \infty$.

Remarks: 1) In the sequel, we fix the output bit error rate $\varepsilon > 0$. In this case, our parameter $\lambda_h$ is also fixed as $Q^{-1}(\varepsilon)$. Thus the above limitation $\lambda_h = o(d^{1/6})$ does not affect our setting.
2) Long codes RM$(r,m)$ were used above only to get an increasing number $d \to \infty$ of independent random variables $\xi_j$. We can remove this restriction and consider the random variable $\sum_j \xi_j$ with the binomial distribution instead of the Gaussian one. In this case, exact estimates can be obtained by numerical (say, computer) calculations.
3) The probability to get at least one incorrect symbol of order $r$ is upper-bounded by the union bound $\binom{m}{r} Q(\lambda_h)$. One can easily verify that this tends to $0$ for any $r$ if $\lambda_h \ge \sqrt{m \ln 4}$.
4) Also, it can be shown that the information bits of the lower orders $0, \ldots, r-1$ have smaller probabilities of incorrect decoding.

The following lemma revisits the results obtained in [8] and shows that majority decoding outperforms the guaranteed correcting capability $d/2$ about $2^r$ times for codes RM$(r,m)$ of fixed order $r$, and about $(\ln d)/2$ times for codes of fixed rate $0 < R < 1$. The proof differs from that in [8] and is given in Appendix I. Let $c$ be any constant exceeding $\ln 2$.

Lemma 2: For $m \to \infty$, majority decoding of long codes RM$(r,m)$ corrects virtually all error patterns of weight

    t \le n\bigl(1 - (cm/d)^{1/2}\bigr)/2,  if $r = \mathrm{const}$   (8)
    t \le (\ln d - \ln 2m)\, d/4,  if $0 < R < 1$.   (9)
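A minimal sketch of the hard-decision majority rule of this section for one top-order coefficient follows. The bit-index convention (an $r$-face fixes the last $m-r$ coordinates) is an illustrative assumption; the argument above only requires the $2^{m-r}$ faces to be disjoint.

```python
# Hard-decision majority vote for the coefficient a_{1..r} of RM(r, m), following (3):
# XOR the received bits over each r-face X_j and take the majority of the 2^(m-r) checks.

def majority_estimate_top_coefficient(received_bits, r, m):
    d = 1 << (m - r)                 # number of disjoint r-faces = 2^(m-r)
    votes = 0
    for j in range(d):               # j fixes the last m-r coordinates
        check = 0
        for i in range(1 << r):      # x runs over the 2^r points of the face X_j
            x = (j << r) | i
            check ^= received_bits[x]
        votes += 1 if check == 0 else -1
    return 0 if votes >= 0 else 1    # majority of the parity checks

# Example: RM(1, 4), all-zero codeword with 3 of the 16 bits flipped (weight < d/2 = 4).
m, r = 4, 1
word = [0] * (1 << m)
for pos in (2, 7, 11):
    word[pos] ^= 1
print(majority_estimate_top_coefficient(word, r, m))   # -> 0
```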

III. SOFT-DECISION MAJORITY DECODING

As before, let $f \in E_2^n$ be the transmitted codeword and $u \in \mathbb{R}^n$ be the received vector. In hard-decision decoding, each symbol $u_x = \pm 1$ is incorrect with the same probability $p = Q(1/\sigma)$. As a result, each parity check gives the same probability $P$ of incorrect decoding. By contrast, in soft-decision decoding, the a posteriori probabilities $p := p(1\,|\,u)$ and $q := p(0\,|\,u)$ of the transmitted symbol $f_x$ depend on the value $u \in \mathbb{R}$ of the received symbol. In decoding, we wish to relate the likelihoods of the corrupted symbols $u_x$ to the likelihoods of the information bits.

Our first step is to find the a posteriori probabilities $Q_j = Q_j(\hat a_j = 0 \mid u)$ and $P_j = P_j(\hat a_j = 1 \mid u)$ of the parity check $\sum_{x \in X_j} f(x)$. Applying the Bayes rule, we see that $p/q = g(u+1)/g(u-1)$. Then

    p = e^{-b/2}/(e^{b/2} + e^{-b/2}), \qquad q = e^{b/2}/(e^{b/2} + e^{-b/2})   (10)

where $b = 2u/\sigma^2$. Also, let

    y := q - p = (e^{b/2} - e^{-b/2})/(e^{b/2} + e^{-b/2}) = \tanh(u/\sigma^2).²   (11)

Given the received vector $u$ with symbols $u_x$, we now find the quantities $y_x$ for any position $x$. Next, we show that the probabilities $Q_j$ and $P_j$ depend on

    Y_j = \prod_{x \in X_j} y_x.

Lemma 3: A parity check $\hat a_j$ has the a posteriori probabilities

    Q_j = (1 + Y_j)/2, \qquad P_j = (1 - Y_j)/2.   (12)

Proof: The proof is similar to that of Lemma 1. Again, we suppose that the zero codeword is sent as the sequence of $+1$'s. Then $P_j$ is the sum of a posteriori probabilities $P(f\,|\,u)$ taken over all vectors $f$ with an odd number of ones on the subset $X_j$. In turn, this implies that $P_j$ equals the sum of all monomials $p_{x_1}\cdots p_{x_s} q_{x_{s+1}}\cdots q_{x_{2^r}}$ that include an odd number $s$ of probabilities $p_x$ on positions $x \in X_j$. Similarly, $Q_j$ is the sum of all monomials with an even number $s$ of probabilities $p_x$. On the other hand, the expansion of $\prod_x (q_x - p_x)$ adds the latter monomials (with any even number of probabilities $p_x$) and subtracts the former ones (with any odd number of probabilities $p_x$). We can also use the expansion of $\prod_x (q_x + p_x)$, which gives the sum of all monomials. Therefore, we have the equalities

    Q_j + P_j = \prod_{x \in X_j} (q_x + p_x) = 1, \qquad Q_j - P_j = \prod_{x \in X_j} (q_x - p_x) = Y_j.

These two equalities prove our lemma.

Now that we have defined the a posteriori probabilities $Q_j$ and $P_j$ for each parity check $j$, we wish to assign different weights to different checks $j$. Obviously, the reliability of our estimate $\hat a_j$ grows with $|Q_j - P_j|$. Therefore, in the sequel we use a simple empirical rule and assign the weight

    W_j = Y_j   (13)

to the parity check $\hat a_j$. In Section V we justify such a weighting in more detail. Finally, we find the total weight $W = \sum_{j=1}^{d} Y_j$ of all estimates $\hat a_j$, and make the majority decision

    a = 0 \text{ if } W \ge 0, \qquad a = 1 \text{ if } W < 0.   (14)

Given the received signals $u_x$, the soft-decision decoding is now done as follows.

Algorithm:
1) Select any information symbol $a$ of the highest order $r$.
2) For each received signal $u_x$, find $y_x = \tanh(u_x/\sigma^2)$.
3) For each parity check $\hat a_j$, find $Y_j = \prod_{x \in X_j} y_x$.
4) Find the overall weight $W = \sum_j Y_j$, and make the majority decision (14) on the symbol $a$.
5) After all information symbols of order $r$ are determined, proceed with the symbols of order $r-1$, and so on.

² That is, $y$ is the hyperbolic tangent of $b/2 = u/\sigma^2$.
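The following sketch assembles Steps 1)–4) for a single top-order symbol, assuming the reliability $y_x = \tanh(u_x/\sigma^2)$ of (11) and the same illustrative face indexing as in the hard-decision sketch above.

```python
import math, random

# Weighted majority vote of Section III for one top-order coefficient of RM(r, m):
# y_x per (11), Y_j per Lemma 3, decision by the sign of W = sum_j Y_j as in (14).

def soft_majority_estimate(received, r, m, sigma):
    y = [math.tanh(u / sigma**2) for u in received]   # Step 2: symbol reliabilities
    d = 1 << (m - r)
    W = 0.0
    for j in range(d):                                # Steps 3-4: accumulate check weights
        Yj = 1.0
        for i in range(1 << r):
            Yj *= y[(j << r) | i]
        W += Yj
    return 0 if W >= 0.0 else 1                       # weighted majority decision (14)

# Example: RM(1, 4); the all-zero codeword is sent as +1's over an AWGN channel.
rng = random.Random(7)
m, r, sigma = 4, 1, 1.0
received = [1.0 + rng.gauss(0.0, sigma) for _ in range(1 << m)]
print(soft_majority_estimate(received, r, m, sigma))  # most likely 0
```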
IV. DECODING PERFORMANCE

A. Preliminary Lemmas

Given a code RM$(r,m)$, we suppose that $f = 0$ is transmitted as a sequence of $+1$'s. Then our output $u$ has probability density function (pdf) $g(u-1)$. In decoding, we proceed with the variable $y = \tanh(u/\sigma^2)$. In the sequel, we will find its first two moments

    Ey = \int \tanh(u/\sigma^2)\, g(u-1)\, du, \qquad Ey^2 = \int \tanh^2(u/\sigma^2)\, g(u-1)\, du.   (15)

Similarly to Lemma 1, we also need the parameter

    \lambda_s = 2^{(m-r)/2} (Ey)^{2^r} \big/ \sqrt{(Ey^2)^{2^r} - (Ey)^{2^{r+1}}} = 2^{(m-r)/2} \bigl[\, (Ey^2/(Ey)^2)^{2^r} - 1 \,\bigr]^{-1/2}.   (16)

Following the remarks to Lemma 1, we suppose that $\lambda_s$ is fixed.

Lemma 4: For long codes RM$(r,m)$ with $(m-r) \to \infty$, soft-decision majority decoding retrieves any information bit of order $r$ with asymptotic bit-error rate

    Q(\lambda_s) = (1/\sqrt{2\pi}) \int_{\lambda_s}^{\infty} e^{-u^2/2}\, du.

Proof: We need to find the probability of incorrect decoding $P(\sum_j Y_j < 0)$ of any information symbol $a$. Recall that each $Y_j$ is the product of $2^r$ independent and identically distributed variables $y_x$. Also, different $Y_j$ are taken over disjoint subsets $X_j$. Therefore, all $d$ variables $Y_j$ are also independent and identically distributed. Hence, their sum $\sum_j Y_j$ is asymptotically normal if $d \to \infty$. Now we can use only the mean $EY_j = (Ey)^{2^r}$ and the variance $DY_j = (Ey^2)^{2^r} - (Ey)^{2^{r+1}}$. To get $\sum_j Y_j < 0$, we need $\lambda_s = 2^{(m-r)/2} EY_j/\sqrt{DY_j}$ standard deviations. For fixed $\lambda_s$, we can use the normal distribution and estimate $P(\sum_j Y_j < 0)$ as $Q(\lambda_s)$.

Now the problem is reduced to finding the first two moments $Ey$ and $Ey^2$. Theorems 1 and 2 below show that we need to consider long RM$(r,m)$ codes only when $\sigma^2 \to \infty$ or $\sigma^2 \to 0$. Namely, the case $\sigma^2 \to \infty$ furnishes long codes RM$(r,m)$ of fixed order $r$, while $\sigma^2 \to 0$ applies to codes of fixed code rate. We study these two asymptotic settings in the following lemmas. The proofs are given in Appendices II and III.

Lemma 5: For $\sigma \to \infty$, the first two moments of the random variable $y = \tanh(u/\sigma^2)$ satisfy the relation

    Ey \approx Ey^2 \approx 1/\sigma^2.   (17)

Lemma 6: For $\sigma \to 0$, the first two moments of the random variable $y = \tanh(u/\sigma^2)$ satisfy the relation

    Ey \approx Ey^2 \approx 1 - \pi Q(1/\sigma) \approx 1 - \sigma\sqrt{\pi/2}\, e^{-1/2\sigma^2}.   (18)
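The moments (15) and the parameter (16) can also be evaluated numerically for any concrete $\sigma$, which is the route Section V-E recommends for short codes. A rough quadrature sketch follows; the grid sizes and the example code parameters are arbitrary illustrative choices.

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def moments(sigma, half_width=12.0, steps=200_000):
    # Brute-force trapezoid evaluation of (15) for y = tanh(u/sigma^2), u ~ N(1, sigma^2).
    lo, hi = 1.0 - half_width * sigma, 1.0 + half_width * sigma
    h = (hi - lo) / steps
    Ey = Ey2 = 0.0
    for k in range(steps + 1):
        u = lo + k * h
        w = (0.5 if k in (0, steps) else 1.0) * h
        g = math.exp(-(u - 1.0) ** 2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
        y = math.tanh(u / sigma**2)
        Ey += w * g * y
        Ey2 += w * g * y * y
    return Ey, Ey2

r, m, sigma = 1, 7, 1.2
Ey, Ey2 = moments(sigma)
lam_s = 2 ** ((m - r) / 2) / math.sqrt((Ey2 / Ey**2) ** (2**r) - 1.0)   # parameter (16)
print("Ey, Ey^2     :", Ey, Ey2)
print("predicted BER:", Q(lam_s))                                      # Lemma 4 estimate
```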

26 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 46, NO., JANUARY 2 two diffeent settings. We use notations c, c fo constants <c;c <. Theoe : Fo any output bit-eo pobability ", soft-decision decoding of codes RM (; ) of fixed-ode and!inceases =2 ties "-sustainable noise powe ove had-decision decoding 2 s= 2 h! =2: Poof: We stat with had-decision decoding of long codes RM (; ). Fist, we see fo (4) that h = Q (") does not gow with only if y!. Then Q(= h ) = 2y tends to =2 and h!as!. Now we can use appoxiation p Q(= h )==2= h 2 + O(h 2 ): This shows that y 2= h p2. Again, we use (4) fo fixed and find h 2 ()=2 h 2 (=2) 2 : (9) Now we poceed with soft-decision decoding, and let s!. Then we take Ey and Ey 2 fo Lea 5 and substitute these two into (6). Hee we also use that is fixed. Then s 2 ()=2 s 2 : (2) To obtain the sae eo ate " = Q(), we take h = s = in (9) and (2). Thus we find "-sustainable noise powes 2 h 2 (2 = 2 ) =2 2 s (2 = 2 ) =2 : (2) Now we see that 2 s= 2 h =2. This gives log (=2) 2dB enegy gain, and poves Theoe. Theoe 2: Fo any output bit-eo pobability ", soft-decision decoding of codes RM (; ) of fixed code ate <R<and! inceases 4= ties "-sustainable tansition eo pobability ove had-decision decoding p s =p h! 4=: (22) Poof: Fist, note that long codes RM (; ) give fixed code ate <R<, only if =2 as!. Second, h is liited in (4) by any c only if y 2! as!. This allows us to siplify (4) and use the foula h 2 ()=2 y 2. Then we take logaiths of both sides and see that ln y ()(ln 2)=2 + (23) fo fixed h. Then y! and p h! since ln y! in (23) as!. Now we see that Q(= h )! and h!. Also, ln y =ln(2p h ) 2p h, and p h ( )(ln 2)=2 +2 : (24) We poceed with soft-decision decoding, and let s!. The sae aguents applied to (6) show that s <c only if (Ey 2 =(Ey) 2 ) 2! as!. Then we siplify (6) and see that s 2 ()=2 (Ey) 2 =(Ey 2 ) 2 : (25) Taking logaiths of both sides, and using the asyptotic (8) fo Lea 6, we find that p s 4( )(ln 2)=(2 +2 ): (26) This poves ou theoe. Note, howeve, that this ipoveent (22) coes along with a negligible incease in sustainable noise powe. In paticula, p h = Q(= h ) h e =2 = p 2; fo h! : We cobine this with (24) and see that 2 h =(2( +2) ln 2) =( ln 2) (27) fo had-decision decoding of long codes with!. Siilaly, we take p s se =2 = p 2 fo Lea 6 and get the sae asyptotic expession fo soft decision 2 s =( ln 2): (28) In the following coollay we find the Euclidean weights of the eo pattens coectable by ou algoith. The code has iniu Euclidean distance 2p d when used ove the input alphabet 6. We pshow that fo fixed we exceed the bounded distance decoding weight d oe than 2 =2 ties. Fo fixed code ate R, we have a siila incease and outpefo bounded distance decoding p 2 =2 = ln 2 ties. Coollay 9: Fo!, soft-decision ajoity decoding of codes RM (; ) coects vitually all eo pattens of Euclidean weight: p n(d=2) =2 ; if = const (29) n=( ln 2); if <R<: (3) Poof: Ou poof is siila to that of Lea 2. The codewod f is tansitted and the noise vecto e is added on the AWGN channel. Then the eceived vecto f + e 2R n is located at the squaed Euclidean distance 2 = x ex 2 fo f. The added noise e x has distibution N (; 2 ) fo each position x. Then the squaed su 2 = 2 has 2 -distibution with n degees of feedo. It is well known [2] that 2 tends to the noal distibution N (n; 2n) as n!. Then the pobability P (n < 2 = p 2 <n+ n ln n) tends to =2. On the othe hand, the output block eo pobability is uppe-bounded by Q() and tends to fo p 2, accoding p to the poof of Lea 2. 
In the following corollary we find the Euclidean weights of the error patterns correctable by our algorithm. The code has minimum Euclidean distance $2\sqrt{d}$ when used over the input alphabet $\pm 1$. We show that for fixed $r$ we exceed the bounded-distance decoding weight $\sqrt{d}$ more than $2^{r/2}$ times. For fixed code rate $R$, we have a similar increase and outperform bounded-distance decoding $2^{r/2}/\sqrt{m\ln 2}$ times.

Corollary 9: For $m \to \infty$, soft-decision majority decoding of codes RM$(r,m)$ corrects virtually all error patterns of Euclidean weight

    \rho \le \sqrt{n}\,(d/2m)^{2^{-r-1}},  if $r = \mathrm{const}$   (29)
    \rho \le \sqrt{n/(m\ln 2)},  if $0 < R < 1$.   (30)

Proof: Our proof is similar to that of Lemma 2. The codeword $f$ is transmitted and the noise vector $e$ is added on the AWGN channel. Then the received vector $f + e \in \mathbb{R}^n$ is located at the squared Euclidean distance $\rho^2 = \sum_x e_x^2$ from $f$. The added noise $e_x$ has distribution $N(0, \sigma^2)$ in each position $x$. Then the squared sum $\rho^2/\sigma^2$ has the $\chi^2$-distribution with $n$ degrees of freedom. It is well known [2] that this $\chi^2$ tends to the normal distribution $N(n, 2n)$ as $n \to \infty$. Then the probability $P(n < \rho^2/\sigma^2 < n + \sqrt{2n\ln n})$ tends to $1/2$. On the other hand, the output block error probability is upper-bounded by $\binom{m}{r} Q(\lambda)$ and tends to $0$ for $\lambda \ge \sqrt{2m}$, according to the proof of Lemma 2. In turn, (20) shows that we achieve $\lambda \ge \sqrt{2m}$ if we take $\sigma^2 \approx (d/2m)^{2^{-r}}$ for fixed $r$. Similarly, (25) and (28) give the corresponding threshold $\sigma^2 \approx 1/(m\ln 2)$ for fixed rate $R$, as $m \to \infty$. Now we see that most error patterns of squared weight $n\sigma^2 < \rho^2 < n\sigma^2 + \sigma^2\sqrt{2n\ln n}$ are corrected for the above choice of $\sigma$. Therefore, we can correct most error patterns of Euclidean weight $\rho \approx \sigma\sqrt{n}$. This leads to bounds (29) and (30).

V. DISCUSSION OF THE RESULTS AND THEIR APPLICATIONS

A. Waterfall Decline of the Output Error Probability

For fixed order $r$, the noise power $\sigma_s^2$ in (21) depends on $\lambda$ only marginally. In turn, $\lambda = Q^{-1}(\varepsilon)$ is a slowly changing function for $\varepsilon \to 0$. Therefore, the output error rate $\varepsilon$ exhibits a steep waterfall decline if the noise power $\sigma^2$ falls below the threshold $\sigma_s^2$. For example, $\varepsilon$ rapidly decreases from $10^{-4}$ to $10^{-10}$ when $\lambda$ changes from 3.7 to 6.3. Even for codes of the third order, such a rapid decline comes when the noise power is only reduced $(6.3/3.7)^{1/4} \approx 1.14$ times.

For fixed rate $R$, we achieve an even steeper decline in the output bit-error probability $\varepsilon$. According to (28), the asymptotic noise powers $\sigma_h^2 \approx \sigma_s^2 \approx 1/(m\ln 2)$ do not depend on $\lambda$ and $R$. This implies that all long codes RM$(r,m)$ of fixed code rate $R$ sustain approximately the same noise level, regardless of the rate $R$ and the output bit-error probability $\varepsilon$. This phenomenon is due to the asymptotically bad properties of both the codes RM$(r,m)$ and their majority decoding. First, RM codes of order $r \approx m/2$ have distance that grows only as $\sqrt{n}$, so that $\ln d/\ln n \to 1/2$. Second, majority decoding outperforms bounded-distance decoding only $(\ln d)/2$ times, and cannot correct error patterns of a linearly increasing weight. Therefore, despite all the improvements obtained, we can achieve any output error rate $\varepsilon < 1/2$ only if $\sigma^2 \to 0$ as $m \to \infty$. Then the transition error probability $Q(1/\sigma)$ rapidly declines as $\exp\{-1/2\sigma^2\}$. In turn, such a decline outweighs all other factors in (25) that depend on the parameters $m$ and $r$.
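The waterfall figures quoted in the first paragraph of this subsection can be reproduced directly; the snippet below is only a check of that arithmetic.

```python
import math

Q = lambda x: 0.5 * math.erfc(x / math.sqrt(2.0))
print(Q(3.7))                      # ~1.1e-4
print(Q(6.3))                      # ~1.5e-10
print((6.3 / 3.7) ** 0.25)         # ~1.14, the noise-power change for a third-order code per (21)
```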

B. Noise Sensitivity

Our next issue is whether our algorithm can be applied to AWGN channels with varying noise power. Note that we used the known noise power $\sigma^2$ when calculating the parameters $y_x$ in Step 2 of our algorithm. Now suppose that we have a varying noise power $\varsigma^2$ while using the same estimate (11) permanently fixed at $\sigma^2$. We wish to find whether our algorithm is sensitive to the noise power $\varsigma^2$. Since the estimates $y_x$ are fixed, the algorithm is kept unchanged. Then the only difference appears in calculating the moments $Ey$ and $Ey^2$ given in (15). Namely, we need to change the former distribution $g(u-1)$ for the actual distribution

    g(\varsigma;\, u-1) = \exp\{-(u-1)^2/2\varsigma^2\}/(\varsigma\sqrt{2\pi}).

In this way, we need to find the moments

    Ey = \int \tanh(u/\sigma^2)\, g(\varsigma;\, u-1)\, du, \qquad Ey^2 = \int \tanh^2(u/\sigma^2)\, g(\varsigma;\, u-1)\, du.   (31)

These two moments can be calculated in the same way it was done for the former moments (15) in Lemmas 5 and 6. We start with codes RM$(r,m)$ of fixed order $r$. Direct calculations show that the new moments (31) are $Ey \approx 1/\sigma^2$ and $Ey^2 \approx \varsigma^2/\sigma^4$ for $\sigma, \varsigma \to \infty$. In turn, the former parameter $\lambda_s$ of (20) is replaced by

    \lambda \approx 2^{(m-r)/2}\,\varsigma^{-2^r} = \lambda_s\,(\sigma/\varsigma)^{2^r}.

Note that $\lambda > \lambda_s$ for $\varsigma < \sigma$. This implies that we decrease the output error rate $Q(\lambda)$ on channels of better quality even though this quality is unknown to the decoder. However, for $\varsigma \ge \sigma$, we see that $\lambda \approx 2^{(m-r)/2}\varsigma^{-2^r}$. This coincides with the result obtained for the channels with a known noise. Again, the output error rate increases, which is also consistent with our previous discussion.

Similar results are also valid for codes RM$(r,m)$ of fixed rate $R$. In this case, we study the above moments (31) for $\varsigma \to 0$ and find that $Ey \approx Ey^2 \approx 1 - \pi Q(1/\varsigma)\,\varsigma^2/\sigma^2$. Then we substitute these moments into (16) and see that the output bit-error probability $Q(\lambda)$ depends on the channel noise $\varsigma$ in the same way it did for codes of fixed order.

C. Maximum-Likelihood Voting

In the first steps of our algorithm, we find the a posteriori probabilities $Q_j$ and $P_j$ assigned to our estimate $\hat a_j$ of the parity check $j$. Then our voting scheme sums all $2^{m-r}$ weights $Y_j = Q_j - P_j$. More generally, we can regard any voting scheme as the repetition code $(a, \ldots, a)$ of length $d$ converted by a channel noise onto some sequence $(\hat a_1, \ldots, \hat a_d)$. Note that the output sequence is given in each position $j$ by the probabilities $Q_j$ and $P_j$. Also, different positions give independent estimates, and we can speak of a memoryless channel. Then the best voting scheme can be obtained by applying maximum-likelihood decoding to this repetition code. Such a decoding makes a decision $a = 0$ if $\sum_j \ln(Q_j/P_j) \ge 0$. In other words, we change our majority weight (13) $Y_j$ for the maximum-likelihood weight

    \tilde W = \sum_j Z_j, \qquad Z_j := \ln\frac{1+Y_j}{1-Y_j}.   (32)

Our next question is whether the latter weight $Z_j$ can outperform our former weight $Y_j$. As above, the variables $Z_j$ are independent and equally distributed. Therefore, their sum $\tilde W$ is normally distributed if $m \to \infty$. In turn, this implies that we need to find the first two moments $EZ$ and $EZ^2$ of the random variable $Z = Z_j$. Note that $Z$ can be represented as the infinite series $Z = 2(Y + Y^3/3 + Y^5/5 + \cdots)$, where $Y$ is the product of $2^r$ i.i.d. variables $y = \tanh(u/\sigma^2)$. From (4) we see that in hard-decision decoding, the similar parameter $y^{2^r} \approx 2^{-(m-r)/2}\lambda_h$ rapidly tends to $0$ for any $\lambda_h$ as $m \to \infty$. For soft-decision decoding, a more detailed analysis also shows that $EZ \approx 2EY$ and $EZ^2 \approx 4EY^2 \to 0$ for any given $\varepsilon$ as $m \to \infty$.
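A small check of the claim that the maximum-likelihood weight (32) reduces to the linear weight for small $|Y_j|$; the sample values below are arbitrary.

```python
import math

# Compare Z = ln((1+Y)/(1-Y)) of (32) with 2Y; for small |Y| they coincide up to O(Y^3).
for Y in (0.5, 0.1, 0.01, 0.001):
    Z = math.log((1.0 + Y) / (1.0 - Y))
    print(f"Y = {Y:7.3f}   Z = {Z:9.6f}   Z / (2Y) = {Z / (2 * Y):.4f}")
```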
This implies that the maximum-likelihood weighting $\sum_j Z_j$ is asymptotically equivalent to the linear weighting $\sum_j Y_j$ as $m \to \infty$. This equivalent performance is due to the fact that $|Y_j| \ll 1$ almost surely, in which case $Z_j \approx 2Y_j$.

D. Complexity Issues

Recall that conventional majority decoding has complexity upper-bounded by $nk$ binary operations. Similarly, in soft-decision decoding we obtain any information symbol $a$ by using all $n$ symbols $u_x$ only once. Therefore, complexity is also upper-bounded by the same order $O(nk)$. This order, however, includes operations over real numbers (below called real operations). First, we need $5n$ operations (exponentiations, summations, and divisions) to find the $n$ quantities $y_x = \tanh(u_x/\sigma^2)$. Then we need $n$ real operations for each $a$, while calculating the products $Y_j = \prod y_x$ and their sum $\sum_j Y_j$. So, our complexity includes $kn + 5n$ real operations.

Below we show that this complexity order $nk$ can be reduced for both majority algorithms by multiple reuse of the estimates $Y_j$. We start with $r = 1$, in which case each 1-face $X_j^{(1)}$ includes two positions, say $x$ and $x'$. Then we find the estimate $Y_j^{(1)} = y_x y_{x'}$. Now note that for $r = 2$, each 2-face $X_j^{(2)}$ is a union $X_l^{(1)} \cup X_p^{(1)}$ of two 1-faces taken for some indices $(l, p)$. Therefore, each $Y_j^{(2)}$ is the product $Y_l^{(1)} Y_p^{(1)}$ of two estimates obtained before. We proceed in the same fashion and find estimates $Y_j^{(i)}$ of order $i$ using the previous estimates obtained in step $i-1$. Each of $k_i = \binom{m}{i}$ information symbols of order $i$ requires $d_i = 2^{m-i}$ multiplications $Y_l^{(i-1)} Y_p^{(i-1)}$ (or additions mod 2 in hard-decision decoding) and $d_i - 1$ additions. So for codes RM$(r,m)$, the overall complexity has the order

    \psi(r, m) = 2\sum_{i=1}^{r} k_i d_i = 2\sum_{i=1}^{r} \binom{m}{i} 2^{m-i}.

Now note that the complexity of majority decoding is lower-bounded by the same order $\sum_{i=1}^{r} k_i d_i$, since any majority input includes $d_i$ estimates $Y_j^{(i)}$ for each of $k_i$ information symbols. Also, one can readily verify that

    \psi(r, m) \le \psi(m, m) \approx 2\cdot 3^m = 2 n^{\log_2 3}

which is below the former complexity $nk = Rn^2$ for any $R$.

We can also use the approximation $y_x \approx u_x/\sigma^2$, obtained below in (33) for $\sigma \to \infty$. In this case, we use the received signals $u_x$ instead of $y_x$. This allows us to remove the overhead of $5n$ operations. It is interesting to note that when the signals $u_x$ were used in computer simulation, the output performance degraded by at most 0.1 dB for all codes, even for small $\sigma$.
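A sketch of the reuse described above, again under the illustrative nested-face indexing used earlier: the order-$i$ products are built from pairs of order-$(i-1)$ products along one chain of faces, and the operation count $\psi(r,m)$ is compared with the naive $nk$ bound. Full decoding repeats such chains over all $\binom{m}{i}$ coordinate subsets; this sketch shows only one chain.

```python
import math

def face_products_by_reuse(y, m, r):
    # Order-0 "faces" are the symbols themselves; each pass merges pairs of lower-order products.
    level = list(y)
    for i in range(1, r + 1):
        level = [level[2 * j] * level[2 * j + 1] for j in range(len(level) // 2)]
        # 'level' now holds the 2^(m-i) products over disjoint i-faces of one chain
    return level

def psi(r, m):
    # complexity estimate 2 * sum_i C(m, i) 2^(m-i) real operations (Section V-D)
    return 2 * sum(math.comb(m, i) * 2 ** (m - i) for i in range(1, r + 1))

m, r = 10, 3
print(len(face_products_by_reuse([0.9] * (1 << m), m, r)))   # -> 2^(m-r) = 128
print(psi(r, m), "vs naive nk ~", (1 << m) * sum(math.comb(m, i) for i in range(r + 1)))
```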

Summarizing, we see that soft-decision majority decoding allows us to find exact orders of both the output bit-error rate and the decoding complexity. This compares favorably with the other algorithms known to date and discussed in Section I. Table I gives the summary.

TABLE I: DECODING ALGORITHMS WITH POLYNOMIAL COMPLEXITY

E. Short Codes

In the previous sections, we studied long RM codes given any output error probability $\varepsilon$. The results are readily generalized for short codes RM$(r,m)$. First, consider hard-decision decoding. In this case, we estimate any information bit by taking the majority of $d$ independent parity checks. Each of them is incorrect with probability $P$ given by (6). Therefore, for small $m$, we use the binomial distribution instead of the normal distribution $Q(\lambda_h)$ employed in Lemma 1. Then the output bit-error probability is

    \varepsilon_1 = \tfrac{1}{2}\binom{d}{d/2} P^{d/2} Q^{d/2} + \sum_{j > d/2} \binom{d}{j} P^{j} Q^{d-j}.

For soft-decision decoding, the output bit-error probability is $P(\sum_j Y_j < 0)$, where $Y_j = \prod y_x$ are $d$ independent random variables defined by (11). For $d \ge 16$, we use the normal distribution for the sum $\sum_j Y_j$, as was done above in the asymptotic setting of Lemma 4. However, we can also employ computer calculations and find the first two moments $Ey$ and $Ey^2$ given in (15) for any noise power $\sigma^2$. In this way, we change the asymptotic setting of Lemmas 5 and 6 for a simple numerical analysis valid for short codes.

For small distances $d$, more rigorous calculations can give the exact pdf $P_W$ of the sum $\sum_{j=1}^{d} Y_j$. Let $P_Y$ denote the pdf of each random variable $Y_j$. Then $P_W$ can be numerically calculated as a $2^{m-r}$-fold convolution of $P_Y$. In turn, conventional techniques [2] can be used to find $P_Y$ from the pdf $p_y$ of the random variable $y$. Finally, $p_y$ is readily obtained from the original pdf $g(u-1)$. Our approach allows us to numerically obtain exact output error rates for any code RM$(r,m)$. We note that computer simulation turned out to be very close to these numerical results for all the codes considered.

In [16], computer simulation results were presented for recursive decoding of codes RM(1, 5) and RM(4, 9). For comparison, we show in Table II simulation results for the code RM(1, 5) using hard- and soft-decision majority decoding, the recursive algorithm of [16], and maximum-likelihood decoding. Signal-to-noise ratios obtained for soft-decision majority decoding show a slight improvement over those in [16]. This slight improvement was also obtained for RM(4, 9).

TABLE II: DECODING PERFORMANCE FOR CODE RM(1, 5)
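The hard-decision formula at the start of this subsection is easy to evaluate directly; a sketch for RM(1, 5) follows, with an arbitrary noise level chosen for illustration.

```python
import math

def check_error_prob(p, r):
    # P = (1 - (1 - 2p)^(2^r)) / 2 from (6)
    return (1.0 - (1.0 - 2.0 * p) ** (2 ** r)) / 2.0

def hard_bit_error_rate(p, r, m):
    # Binomial tail of Section V-E: more than d/2 wrong checks, ties counted with weight 1/2.
    d = 1 << (m - r)
    P = check_error_prob(p, r)
    binom = lambda j: math.comb(d, j) * P**j * (1.0 - P) ** (d - j)
    return 0.5 * binom(d // 2) + sum(binom(j) for j in range(d // 2 + 1, d + 1))

sigma = 0.7
p = 0.5 * math.erfc((1.0 / sigma) / math.sqrt(2.0))   # transition probability Q(1/sigma)
print(hard_bit_error_rate(p, r=1, m=5))
```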
VI. CONCLUSIONS

In this paper, we introduced soft-decision majority decoding for Reed–Muller codes RM$(r,m)$. For any noise power $\sigma^2$ of the AWGN channel, we derived explicit analytical expressions for the output bit-error probability $\varepsilon$ as a function of $\sigma^2$. For long codes RM$(r,m)$ of fixed order $r$, the new algorithm outperforms conventional majority decoding by $10\log_{10}(\pi/2) \approx 2$ dB at any given output error probability $\varepsilon$. For long codes of fixed rate $R$, the power gain becomes negligible. However, we correct $4/\pi$ times more channel errors relative to hard-decision majority decoding. For codes of any rate $R$, we correct most error patterns of Euclidean weight at least $\sqrt{n/(m\ln 2)}$ and increase almost $2^{r/2}$ times the decoding capability $\sqrt{d}$ of bounded-distance decoding. The algorithm is also well suited for channels with varying noise power.

APPENDIX I
PROOF OF LEMMA 2

We give a brief outline (see also a combinatorial proof in [8]). First, for fixed $r$, we set $y = (cm/d)^{1/2}$ in (4). This gives $p = (1 - (cm/d)^{1/2})/2$ and $\lambda_h \to \sqrt{cm}$ as $m \to \infty$. The output block error probability is upper-bounded by $\binom{m}{r} Q(\lambda_h)$. This bound declines as $m^r e^{-cm/2}$, since $\binom{m}{r}$ is a polynomial term for any $r = \mathrm{const}$, while $Q(\lambda_h)$ declines as $e^{-\lambda_h^2/2}/\lambda_h\sqrt{2\pi}$ (see [2]). On the other hand, error patterns of weight $pn$ occur with a higher probability

    \binom{n}{pn} p^{pn} (1-p)^{(1-p)n} \ge 1/\sqrt{8np(1-p)}   [10]

that has exponential order $2^{-m/2}$. So for error patterns of weight $pn$, only the fraction $m^r (2e^{-c})^{m/2} \to 0$ can be left uncorrected.

Now consider RM$(r,m)$ of fixed rate $0 < R < 1$. Since $k = \sum_{i=0}^{r}\binom{m}{i}$, we easily verify that $r \approx m/2 - Q^{-1}(R)\sqrt{m}/2$. Here $Q^{-1}$ is the inverse Gaussian tail function. Then we set $y = (2m/d)^{1/2}$. In this case, $\lambda_h \to \sqrt{2m}$ and the union bound $\binom{m}{r} Q(\lambda_h)$ declines as $(e/2)^{-m}$. Direct recalculation of $p = (1-y)/2$ shows that

    p = (\ln d - \ln 2m + o(1))\, d/4n

where the term $o(1)$ is positive and vanishes as $m \to \infty$. Note that $p$ has exponential order $2^{-m/2}$. In turn, the errors of weight $pn$ occur with probability at least $1/\sqrt{8np(1-p)}$, which has exponential order $2^{-m/4} > (e/2)^{-m}$ for this $p$. Therefore, at most the fraction $(e/2)^{-m}\, 2^{m/4}$ of the errors is left uncorrected.

Remark: In the second part of the proof, we can take a smaller $y = (5cm/2d)^{1/2}$, where $c > \ln 2$ as above. This, however, does not change our order $t \approx d(\ln d)/4$ in (9).
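Before the formal proofs in the next two appendices, a quick numerical check of the two facts behind Lemma 6: the integral (41) equals $\pi/2$, and $Ep$ approaches $(\pi/2)Q(1/\sigma)$ as $\sigma$ decreases. Plain midpoint sums are used here as an illustrative stand-in for the exact calculations.

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Integral (41): int dv / (e^v + e^{-v}) over the real line, expected to equal pi/2.
h, I = 0.001, 0.0
for k in range(-40_000, 40_000):
    v = (k + 0.5) * h
    I += h / (math.exp(v) + math.exp(-v))
print(I, "vs pi/2 =", math.pi / 2)

# Ep = E[ 1/(1 + e^{2u/sigma^2}) ] for u ~ N(1, sigma^2), compared with (pi/2) Q(1/sigma).
sigma = 0.35
Ep, h = 0.0, sigma / 2_000
for k in range(-40_000, 40_000):
    u = 1.0 + (k + 0.5) * h
    g = math.exp(-((u - 1.0) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
    Ep += h * g / (1.0 + math.exp(2.0 * u / sigma**2))
print(Ep, "vs (pi/2) Q(1/sigma) =", (math.pi / 2) * Q(1.0 / sigma))
```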

APPENDIX II
PROOF OF LEMMA 5

We need to find the moments (15) of the random variable $y = \tanh(u/\sigma^2)$, where $u$ has the normal distribution $g(u-1)$. By taking three derivatives, we find the Maclaurin expansion of $y$ for $\sigma \to \infty$:

    y = u/\sigma^2 - u^3/(3\sigma^6) + R(u),   (33)

where $R(u)$ is the Lagrange remainder taken at an intermediate point $\theta u$, $0 < \theta < 1$. Below we find the means of $|R(u)|$, $|R^2(u)|$, $y$, and $y^2$ as $\sigma \to \infty$. It is easily verified that for some constants $a_0$ and $c_0$ the inequality

    |y'''(u)| < (c_0/\sigma^6)\, e^{a_0|u|/\sigma^2}   (34)

holds. From (33) and (34) we see that

    E|R(u)| = \int g(u-1)\,|R(u)|\, du < c_1 \sigma^{-7} \int e^{a_1 u/\sigma^2} e^{-(u-1)^2/2\sigma^2}\, |u|^3\, du   (35)

where $c_1$ and $a_1$ are some constants. The substitution $v = (u - a_1 - 1)/\sigma$ transforms (35) into

    E|R(u)| < c_2 \sigma^{-6} \int (\sigma v + a_2)^3 e^{-v^2/2}\, dv < c_3 \sigma^{-3}.   (36)

Similar calculations also show that $E|R^2(u)| = O(\sigma^{-3})$. Then we use the substitution $u = \sigma t + 1$ and find the means of $u$ and $u^2$:

    Eu = \int u\, g(u-1)\, du = \int (\sigma t + 1)\,\varphi(t)\, dt = 1   (37)
    Eu^2 = \int u^2 g(u-1)\, du = \int (\sigma^2 t^2 + 2\sigma t + 1)\,\varphi(t)\, dt = \sigma^2 + 1   (38)

where $\varphi(t)$ is the standard normal density. The above formulas give $Ey = 1/\sigma^2 + O(\sigma^{-3})$ and $Ey^2 = 1/\sigma^2 + O(\sigma^{-3})$.

APPENDIX III
PROOF OF LEMMA 6

We first find the mean

    Ep = \int (1 + e^{2u/\sigma^2})^{-1} g(u-1)\, du

of the probability $p$ given by (10) as $\sigma \to 0$. The substitution $u/\sigma^2 = v$ and simple calculations give

    Ep = \frac{\sigma e^{-1/2\sigma^2}}{\sqrt{2\pi}} \int \frac{e^{-\sigma^2 v^2/2}}{e^{v} + e^{-v}}\, dv.   (39)

The integral in (39) converges uniformly. So, to find its limit as $\sigma \to 0$ we take

    \int \frac{e^{-\sigma^2 v^2/2}}{e^{v} + e^{-v}}\, dv \to \int \frac{dv}{e^{v} + e^{-v}}, \qquad \sigma \to 0.   (40)

Then we make the substitution $e^{v} = t$ and find that

    \int_{-\infty}^{\infty} \frac{dv}{e^{v} + e^{-v}} = \int_{0}^{\infty} \frac{dt}{1 + t^2} = \pi/2.   (41)

In turn, this gives

    Ep \approx \sigma e^{-1/2\sigma^2} \sqrt{\pi/8} \approx \pi Q(1/\sigma)/2.   (42)

Similar calculations also apply to

    Ep^2 = \int (1 + e^{2u/\sigma^2})^{-2} g(u-1)\, du.

For $\sigma \to 0$, we find that

    Ep^2 \approx \pi Q(1/\sigma)/4.   (43)

Since $y = 1 - 2p$, we now find that $Ey \approx Ey^2 \approx 1 - \pi Q(1/\sigma)$.

ACKNOWLEDGMENT

The authors wish to thank C. Johnson and R. Redon for providing computer simulation of RM codes.

REFERENCES

[1] E. F. Assmus, J.-M. Goethals, and H. F. Mattson, Jr., "Generalized t-designs and majority decoding of linear codes," Inform. Contr., vol. 32, pp. 43–60, 1976.
[2] W. Feller, An Introduction to Probability Theory and Its Applications, vol. 1. New York: Wiley, 1964.
[3] G. D. Forney, "Coset codes—Part II: Binary lattices and related codes," IEEE Trans. Inform. Theory, vol. 34, pp. 1152–1187, Sept. 1988.
[4] R. G. Gallager, Low-Density Parity-Check Codes. Cambridge, MA: MIT Press, 1963.
[5] W. C. Gore, "The equivalence of L-step orthogonalization and a Reed decoding procedure," IEEE Trans. Inform. Theory, vol. IT-15, pp. 184–186, 1969.
[6] G. A. Kabatyanskii, "On decoding of Reed–Muller codes in semicontinuous channels," in Proc. 2nd Int. Workshop on Algebraic and Combinatorial Coding Theory, Leningrad, USSR, 1990, pp. 87–91.
[7] V. D. Kolesnik, "Probabilistic decoding of majority codes," Probl. Inform. Transm., vol. 7, no. 3, pp. 193–200, 1971.
[8] R. E. Krichevskiy, "On the number of Reed–Muller code correctable errors," Dokl. Sov. Acad. Sci., vol. 191, pp. 540–547, 1970.
[9] S. N. Litsyn, "On decoding complexity of low-rate Reed–Muller codes" (in Russian), in Proc. 9th All-Union Conf. Coding Theory and Information Transmission, Odessa, USSR, 1988, pp. 202–204.
[10] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes. Amsterdam, The Netherlands: North-Holland, 1977.
[11] J. L. Massey, Threshold Decoding. Cambridge, MA: MIT Press, 1963.
[12] A. H. Murad and T. E. Fuja, "Distributed decoding of cyclic block codes using a generalization of majority-logic decoding," IEEE Trans. Inform. Theory, vol. 39, pp. 1535–1545, 1993.
[13] I. S. Reed, "A class of multiple-error-correcting codes and the decoding scheme," IEEE Trans. Inform. Theory, vol. PGIT-4, pp. 38–49, 1954.
[14] L. D. Rudolph, "Threshold decoding of cyclic codes," IEEE Trans. Inform. Theory, vol. IT-15, pp. 590–592, 1969.
[15] V. Sidel'nikov and A. Pershakov, "Decoding of Reed–Muller codes with a large number of errors," Probl. Inform. Transm., vol. 28, pp. 80–94, 1992.
[16] G. Schnabl and M. Bossert, "Soft-decision decoding of Reed–Muller codes as generalized multiple concatenated codes," IEEE Trans. Inform. Theory, vol. 41, pp. 304–308, 1995.
[17] H. Vater, "Binary coding by integration of polynomials," IEEE Trans. Inform. Theory, vol. 40, pp. 417–424, 1994.