On a Problem of Massey
Jon Hamkins
Jet Propulsion Laboratory, California Institute of Technology
Pasadena, CA 91109, USA
Jon.Hamkins@jpl.nasa.gov

Abstract: In 1976, Massey introduced a method to compute the confidence interval for the frame error rate of a coded communications system based on the simulation of just a few frame errors [1]. He commented that his approach did not apply to bit error rate confidence intervals, because bit errors are not independent in a coded system. In this paper, we show how to overcome the limitation Massey recognized, and present a method to compute a confidence interval for the bit error rate of a coded communications system from a simulation of bit and frame error events. The proposed interval may be easily computed from the first and second sample moments of the number of bit errors per frame.

I. INTRODUCTION

In 1976, Massey had a contract with NASA [1] to help find a channel code appropriate for the International Ultraviolet Explorer (IUE), a joint space mission between NASA, the European Space Agency, and the UK Space Research Council. The mission did in fact use a convolutional code and went on to great success, returning over 100,000 images that formed a heavily used astronomy database and spawned nearly 4,000 peer-reviewed astronomy papers. IUE decided it wanted a constraint-length 24, rate 1/2 convolutional code for use with sequential decoding; see, e.g., [2] for a discussion of sequential decoding of convolutional codes. This mission was just prior to the era in which much shorter constraint-length codes decoded with the Viterbi algorithm began to become the norm. Indeed, it stands out that at the time Massey referred to constraint length 24 as "rather short" [1]! Massey proceeded to analyze and simulate virtually every binary (24, 1/2) convolutional code that had been proposed up to that time.
He considered ten codes in all, including ones designed by himself, Costello, Johannesson, Bahl, Jelinek, Bussgang, Lin, and Lyne; see the references of [1] for full details on the codes. With the large constraint length of 24 and short frame length of 256 bits, terminating the trellis reduced the code rate noticeably. Unfortunately, this was before the invention of the tail-biting method, which would have neatly avoided the problem (see, e.g., [3]), and so Massey also considered various partial termination schemes. Given the computers of the day, Massey's simulations were limited to decoding, for each code, 10,000 frames at one SNR. Four top candidate codes were each simulated an additional 40,000 frames. His simulations produced 10 to 15 frame errors for each code. With this hard-fought-for but meager amount of data, Massey used confidence intervals to make conclusions about the relative merit of the various codes. In Massey's words [1], "The very small number of frame errors, between 10 and 15 inclusive for the best and the worst codes, makes it essential to consider the statistical significance of the frame error probabilities."

(This work was performed at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.)

II. CONFIDENCE INTERVAL FOR THE FRAME ERROR RATE

For an excellent modern discussion of the computation of confidence intervals, including a computation based on using the exact (binomial) distribution for the number of simulated frame errors, as well as Gaussian and Poisson approximations, see [4]. In the following, suppose that X frame errors are observed in a simulation of N decoded frames on a memoryless channel, so that p̂ ≜ X/N is the observed frame error rate (FER). Let p denote the true FER.
A. Normal approximation

One simple, popular approach to confidence intervals is to note that, by the Central Limit Theorem (CLT), as N → ∞ the probability density function of p̂ approaches that of a Gaussian random variable with mean p and variance p(1 − p)/N [5]. The mean and variance may be estimated by p̂ and p̂(1 − p̂)/N, respectively, which leads to the (1 − α)-confidence interval

(p̂ − a, p̂ + a)    (1)

where

a = Φ⁻¹(1 − α/2) √( p̂(1 − p̂)/N )    (2)

and Φ(x) = (1/√(2π)) ∫₋∞ˣ e^(−t²/2) dt is the cumulative distribution function of a zero-mean, unit-variance Gaussian random variable. For a 95% confidence interval, Φ⁻¹(0.975) ≈ 1.96. Massey points out that when X is small, p̂(1 − p̂)/N is not a good estimate of the variance of p̂ [1]. In fact, no matter how large N is, if X < 4 the interval includes a negative range!
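As a concrete illustration, the interval (1)–(2) takes only a few lines of code. The sketch below is ours, not from the paper; the function name and arguments are illustrative, and Φ⁻¹ comes from Python's standard library.

```python
from statistics import NormalDist

def gaussian_fer_interval(x, n, alpha=0.05):
    """(1 - alpha) FER confidence interval (1)-(2) from x frame errors
    observed in n simulated frames, via the Gaussian approximation."""
    p_hat = x / n
    z = NormalDist().inv_cdf(1 - alpha / 2)   # Phi^{-1}(1 - alpha/2), ~1.96
    a = z * (p_hat * (1 - p_hat) / n) ** 0.5
    return p_hat - a, p_hat + a
```

With x = 3 the lower endpoint is negative no matter how large n is, which is exactly the objection raised above.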
B. Massey's approach: Poisson approximation

Massey introduced what is now a standard method to overcome some of the limitations of the Gaussian approximation above, by noting that when N ≫ 1 and p ≪ 1, X is approximately Poisson distributed, with mean λ and variance λ, where λ ≜ Np. Using the Poisson probability mass function f_X(i) = (λⁱ/i!) e^(−λ) leads to the (1 − α)-confidence interval (λ_L, λ_H) when X = x, given by the solution to

Σ_{i=x}^{∞} (λ_Lⁱ / i!) e^(−λ_L) = α/2    (3)

Σ_{i=0}^{x} (λ_Hⁱ / i!) e^(−λ_H) = α/2    (4)

The confidence interval for p̂ = x/N, then, is (λ_L/N, λ_H/N). This solution works adequately for x < 15 [1], and for x > 10 it is nearly identical to the Gaussian approximation discussed above [4].

III. CONFIDENCE INTERVALS FOR THE BIT ERROR RATE OF A CODED SYSTEM

We turn now to the main topic and novel contribution of this paper. We desire to determine the confidence interval for the bit error rate (BER) of a coded communications system, based on simulations of the decoder.

A. Massey's identification of the problem

Massey recognized that while the procedure in Section II-B is useful for computing FER confidence intervals, it cannot be used in the same way to compute BER confidence intervals [1]:

"It probably should be pointed out that, although 256 information bits are decoded in each frame so that there are 256 times as many bit decoding decisions as frame decoding decisions, one cannot assert greater statistical confidence in the observed decoding bit error probability than in the observed frame error probability. The reason of course is that the decodings of bits within a frame are highly dependent so that one has no more independent bit decoding decisions from which to infer probabilities than one has independent frame decoding decisions."

In other words, counting x bit errors in the total number of simulated bits and applying the formulas (3) and (4) would result in an inappropriately narrow confidence interval for the BER.
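Equations (3) and (4) have no closed-form solution, but λ_L and λ_H are easy to find numerically because each left-hand side is monotone in λ. The bisection sketch below is our illustration, using only the standard library; the function names and iteration count are our assumptions, and it computes the FER interval of Section II-B, not a BER interval.

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), accumulated term by term."""
    term = math.exp(-lam)
    total = term
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def massey_poisson_interval(x, n, alpha=0.05):
    """Solve (3) and (4) for (lambda_L, lambda_H) by bisection and
    return the (1 - alpha) FER confidence interval (lambda_L/n, lambda_H/n)
    when x frame errors are observed in n simulated frames."""
    def solve(f, lo, hi):
        # f is decreasing in lambda; 100 halvings reach double precision
        for _ in range(100):
            mid = (lo + hi) / 2
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # (3): P(X >= x; lam_L) = alpha/2, i.e. P(X <= x-1; lam_L) = 1 - alpha/2
    lam_L = 0.0 if x == 0 else solve(
        lambda lam: poisson_cdf(x - 1, lam) - (1 - alpha / 2), 0.0, float(x))
    # (4): P(X <= x; lam_H) = alpha/2
    lam_H = solve(
        lambda lam: poisson_cdf(x, lam) - alpha / 2,
        float(x), x + 20.0 * math.sqrt(x + 1.0) + 20.0)
    return lam_L / n, lam_H / n
```

For example, `massey_poisson_interval(10, 10**6)` gives the 95% FER interval when ten frame errors are seen in a million frames.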
Much of the literature on confidence intervals for the BER treats the case in which bit errors are independent events (e.g., [6], [7]), and MATLAB's function to compute the confidence interval for the BER, BERCONFINT(), also assumes this [8]. While there has been some work on BER confidence intervals on channels with memory [9], to our knowledge the computation of confidence intervals for a block-coded system has not been presented.

B. BER confidence interval for a coded system

Suppose N frames of a binary (n, k) code are simulated, and for the sake of analysis, suppose that the decoder is required to output an estimate of the information bits in each frame, whether it successfully completes decoding the frame or not. Let B_i be a random variable representing the number of bit errors, 0 ≤ B_i ≤ k, in the i-th information block at the output of the decoder. Then B_1, ..., B_N is a set of i.i.d. random variables. Let

μ ≜ E[B_i]    (5)

σ² ≜ var[B_i] = E[B_i²] − μ²    (6)

Typically, the distribution of B_i is unknown. For modern iteratively decoded channel codes such as low-density parity-check codes, the distribution of B_i may depend on the SNR of the simulation, or details of the decoder, even when conditioned on the event that a frame error has occurred. For example, at low SNR, when frame errors are dominated by the decoder's failure to converge, many bit errors may occur in each frame error, while at high SNR, where the decoder performance is limited by the code's minimum distance or trapping sets, only a handful of bit errors might typically occur in each frame error (and of course, the FER itself is lower). Let p_b denote the true BER.
The number of bits simulated is kN, so the observed BER is given by

p̂_b = (1/(kN)) Σ_{i=1}^{N} B_i

As N → ∞, by the CLT the distribution of p̂_b approaches that of a Gaussian random variable with mean μ/k and variance σ²/(k²N). For large N we may estimate the first and second moments of B_i by their sample first and second moments:

μ ≈ μ̂ ≜ (1/N) Σ_{i=1}^{N} B_i = k p̂_b    (7)

σ² ≈ σ̂² ≜ (1/N) Σ_{i=1}^{N} B_i² − μ̂²    (8)

Thus, a (1 − α)-confidence interval for the BER can be given by (p̂_b − a′, p̂_b + a′), with

a′ = ( σ̂ / (k√N) ) Φ⁻¹(1 − α/2)    (9)
   = (1/(kN)) √( Σ_{i=1}^{N} B_i² − (1/N)( Σ_{i=1}^{N} B_i )² ) Φ⁻¹(1 − α/2)    (10)

A simulation would normally record only Σ_i B_i; by also recording one extra quantity, Σ_i B_i², the confidence interval in (10) may be computed. These two partial sums may be augmented with each new simulated frame, so that the entire sequence B_1, B_2, ... need not be stored. Thus, the confidence interval remains easy to compute.
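The running-sums recipe above can be sketched as follows; the function and variable names are ours, and the code assumes the i.i.d. setup of this section.

```python
from statistics import NormalDist

def ber_interval(sum_b, sum_b2, n, k, alpha=0.05):
    """(1 - alpha) BER confidence interval of equation (10), computed
    from the two running sums sum_b = sum of B_i and sum_b2 = sum of
    B_i**2 over n simulated frames of a code with k information bits."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    p_b = sum_b / (k * n)                       # observed BER
    a = z / (k * n) * (sum_b2 - sum_b * sum_b / n) ** 0.5
    return p_b - a, p_b + a

# In the simulation loop, only two accumulators are needed:
#   sum_b += B_i; sum_b2 += B_i * B_i
```

As a sanity check, if every frame error contributed exactly b bit errors, this interval collapses to b/k times the FER interval, matching the constant-error example of Section V-A.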
C. BER confidence interval when few frame errors are simulated

As with the Gaussian-approximation FER confidence interval, the accuracy of the interval in (10) depends on the accuracy of the approximation in (8). Even with a very large N ensuring the accuracy of the CLT approximation for p̂_b, if only a few B_i are greater than zero we will not have an accurate estimate of the variance of p̂_b. For the FER, X > 10 is sufficient; for the BER, even more frame errors are needed. What can be said when only a few frame errors have been collected? Since bit errors occur in bunches, not singly, neither the individual bit errors nor the bit errors per frame, B_i, are binomial or Poisson distributed, and Massey's approach cannot be directly applied. When frame i is in error, 1 ≤ B_i ≤ k, so that

p̂/k ≤ p̂_b ≤ p̂    (11)

which loosely bounds the confidence interval for the BER as (λ_L/(kN), λ_H/N), where λ_L and λ_H are given in (3) and (4).

IV. GUIDELINE FOR SIMULATION LENGTH

A simulation which collects X ≥ 385 frame errors will be able to estimate the FER to within an error of 10%, with 95% confidence, because from (2),

a/p̂ = Φ⁻¹(0.975) √( p̂(1 − p̂)/N ) / p̂ < Φ⁻¹(0.975) / √(N p̂) ≤ Φ⁻¹(0.975)/√385 < 0.1    (12)

where we have used p̂ < 1, Φ⁻¹(0.975) < 1.96, and p̂ ≥ 385/N. In general, when the length of the (1 − α)-confidence interval for the FER is desired to be shorter than plus or minus 100β% of the estimate, the simulation should be run until X ≥ (Φ⁻¹(1 − α/2)/β)². Such a guideline is useful for the communications engineer to know how long to run a simulation to get a desired accuracy for the FER. We present now an analogous guideline for how long a simulation of a coded system should be run to get a good estimate of the BER. Let

X′ ≜ ( Σ_{i=1}^{N} B_i )² / Σ_{i=1}^{N} B_i²    (13)

Theorem 1: If X′ > (Φ⁻¹(1 − α/2)/β)², then the error of the BER estimate, with (1 − α) confidence, is less than 100β% of the observed BER, p̂_b.
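Theorem 1 translates directly into a stopping rule that a simulation can test after each batch of frames. The sketch below is our illustration of that rule; the function name and defaults are assumptions, not from the paper.

```python
from statistics import NormalDist

def simulate_long_enough(sum_b, sum_b2, alpha=0.05, beta=0.10):
    """Stopping rule of Theorem 1: True once
    X' = (sum B_i)**2 / (sum B_i**2) > (Phi^{-1}(1 - alpha/2) / beta)**2,
    i.e. once the BER is known to within +/-(100*beta)% of the observed
    BER at (1 - alpha) confidence."""
    if sum_b2 == 0:
        return False          # no bit errors observed yet
    x_prime = sum_b * sum_b / sum_b2
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return x_prime > (z / beta) ** 2
```

Note that when every frame error produces the same number b of bit errors, X′ = (bX)²/(b²X) = X, so the rule reduces to the FER guideline X ≥ 385 for ±10% accuracy at 95% confidence.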
Proof: From (10), we have

a′ = (1/(kN)) √( Σ B_i² − (1/N)( Σ B_i )² ) Φ⁻¹(1 − α/2)
   < (1/(kN)) √( Σ B_i² ) Φ⁻¹(1 − α/2)    (14)
   ≤ (1/(kN)) ( β Σ B_i / Φ⁻¹(1 − α/2) ) Φ⁻¹(1 − α/2)    (15)
   = β p̂_b    (16)

where (15) follows because X′ > (Φ⁻¹(1 − α/2)/β)² implies √(Σ B_i²) < β Σ B_i / Φ⁻¹(1 − α/2). ∎

TABLE I
MINIMUM X (FER) OR X′ (BER) FOR A SIMULATION TO ACHIEVE A GIVEN LEVEL OF ACCURACY AT A GIVEN LEVEL OF CONFIDENCE.

Error in FER or BER | 90% conf. | 95% conf. | 99% conf.
10%                 |   271     |   385     |   664
5%                  |  1083     |  1537     |  2654
1%                  | 27056     | 38415     | 66349

X = number of frame errors; X′ = given by (13).

Thus, with 95% confidence the BER is within plus or minus 10% of the simulated BER when X′ ≥ 385. Table I summarizes the minimum X or X′ a simulation must reach in order to achieve a given accuracy at a given confidence level, for the FER or BER, respectively, using the Gaussian approximation to the interval discussed in the preceding sections.

V. EXAMPLES

A. Constant number of bit errors per frame

Suppose the coded system is such that whenever a frame error is made, the decoded frame contains exactly b bit errors, where b is a constant. In this case,

p̂_b = (1/(kN)) Σ B_i = (1/(kN)) Σ b I{frame i in error} = b p̂ / k    (17)

(1/N) Σ B_i = b p̂    (18)

(1/N) Σ B_i² = b² p̂    (19)

where I is the indicator function and, as before, p̂ is the observed FER. Plugging into (10), we have

a′ = (1/k) √( (b² p̂ − b² p̂²)/N ) Φ⁻¹(1 − α/2)    (20)
   = (b/k) √( p̂(1 − p̂)/N ) Φ⁻¹(1 − α/2)    (21)
   = (b/k) a    (22)

where a is given in (2), and so the BER confidence interval is

(p̂_b − a′, p̂_b + a′) = (b/k)(p̂ − a, p̂ + a)    (23)

That is, the BER confidence interval is exactly b/k times the FER confidence interval in (1), as expected. This means that on a log plot of BER and FER, the lengths of the confidence intervals will be the same. This is an extreme case. Typically, there is some variation in the number of bits in error in the simulated frames in error. In those cases, the length of the BER confidence interval
is strictly greater than that of the FER confidence interval, reflecting the uncertainty both in the number of frame errors and in the number of bits in error within frames containing errors.

Fig. 1. Performance of the CCSDS k = 1784, r = 1/2 turbo code (error rate vs. E_b/N_0 in dB).
Fig. 2. Observed distribution of B_i > 0 at various E_b/N_0.
Fig. 3. 95% confidence intervals for FER and BER at E_b/N_0 = 1.7 dB, as a simulation progressed.

B. A turbo code

Among the turbo codes which have been standardized for space communications [10], we consider the one with input length k = 1784 and code rate r = 1/2. The performance of the code is shown in Fig. 1, where it can be seen that an error floor begins at just below FER = 10⁻⁵. A decoder was simulated at E_b/N_0 = 0 dB, 1 dB, and 1.7 dB, and the observed distribution of B_i > 0 is shown in dark gray, light gray, and black, respectively, in Fig. 2. At E_b/N_0 = 0 dB and 1 dB, the code is operating in the waterfall region, where codewords that fail to decode correctly have a number of errors in them that is often indistinguishable from the random errors which would occur in uncoded transmission at that SNR. At E_b/N_0 = 1.7 dB, however, the code is operating just inside the error floor region. The code has two codewords with input weight 3 and output weight 7, which is the minimum distance of the code. This explains why B_i = 3 was observed for about 1/4 of the frames in error. The code has a total of a few dozen codewords of weight 8, 9, ..., 28, and at least 836 codewords with input weight 9 and output weight 29, consistent with B_i = 9 being observed in more than 10% of the frames in error.
Despite the fact that B_i ≤ 9 in more than 82% of the frames in error, the average value of B_i is higher, approximately 10.0. Thus, most frame errors do not contribute a representative amount to the BER, making it necessary to simulate longer, and to check that the guideline in Table I holds. A simulation of about 10¹⁰ frames was run at E_b/N_0 = 1.7 dB. Fig. 3 illustrates the 95% confidence intervals for the FER and BER as the simulation progressed. The confidence intervals for the FER and BER were computed from (2) and (10), respectively. Since the FER at this SNR is less than 10⁻⁵, more than 10⁶ simulated frames were necessary to collect even ten frames in error, when the Gaussian-approximation confidence intervals begin to be appropriate. The first few frame errors are identified in Fig. 3. By the end of the simulation, the FER confidence interval is less than one decade thick, while the BER confidence interval is about 1.5 decades. At every point in the simulation, the confidence interval for the BER is wider than that of the FER, as expected. As the simulation of independent frames progresses, the FER confidence interval continually shrinks; the BER confidence interval also usually shrinks, except that occasionally frames with large numbers of bit errors are observed, which can temporarily increase the uncertainty. For
example, the first 3 frame errors observed contained a total of 59 bit errors, but the 4th frame error alone contained 44 bit errors. This had a large impact on the average BER to that point, and an enormous impact on the confidence interval, which became vacuous at the lower end. In this example simulation, the condition X′ > 385 was met when 460 million frames, and X = 2048 frame errors, had been simulated. This was more than five times the simulation length required to achieve the same 10% uncertainty in the FER. This is consistent with the higher observed variation in the simulated BER, compared to the simulated FER, as the simulation progressed. Also shown in Fig. 3 is a portion of the 95% confidence interval that would be computed if we incorrectly assumed that bit errors were independent. It can be immediately seen that the interval is inappropriately narrow, because the true BER is outside of the confidence interval over wide ranges of the number of simulated frames.

VI. CONCLUSIONS

Massey derived a method to compute confidence intervals for the FER of a block-coded transmission, and recognized that his method did not apply to confidence intervals for the BER. We presented a method to compute confidence intervals for the BER from the first and second sample moments of the simulated bit errors per frame, which is information easily recorded in a software simulation. The BER confidence interval is wider than the FER confidence interval, except when every frame error causes a constant number of bit errors, in which case the interval lengths are the same. We also presented a rule of thumb, analogous to that used for the FER, for determining when a simulation has been run long enough to get an accurate estimate of the BER.

REFERENCES

[1] J. L. Massey, "Comparison of rate one-half, equivalent constraint length 24, binary convolutional codes for use with sequential decoding on the deep-space channel," University of Notre Dame, Department of Electrical Engineering, Tech.
Rep., Apr. 1976.
[2] S. Lin and D. J. Costello Jr., Error Control Coding: Fundamentals and Applications. New Jersey: Prentice-Hall, 1983.
[3] H. Ma and J. Wolf, "On tail biting convolutional codes," IEEE Transactions on Communications, vol. 34, no. 2, pp. 104-111, 1986.
[4] M. C. Jeruchim, P. Balaban, and K. S. Shanmugan, Simulation of Communication Systems, 2nd ed. New York: Kluwer Academic/Plenum, 2000.
[5] H. Stark and J. W. Woods, Probability, Random Processes, and Estimation Theory for Engineers, 2nd ed. Englewood Cliffs, NJ: Prentice Hall, 1994.
[6] M. C. Jeruchim, "Techniques for estimating the bit error rate in the simulation of digital communication systems," IEEE Journal on Selected Areas in Communications, vol. 2, no. 1, pp. 153-170, 1984.
[7] S. Berber, "Bit error rate measurement with predetermined confidence," Electronics Letters, vol. 27, no. 3, 1991.
[8] MATLAB, version 7.12 (R2011a). Natick, Massachusetts: The MathWorks Inc., 2011.
[9] M. Knowles and A. Drukarev, "Bit error rate estimation for channels with memory," IEEE Transactions on Communications, vol. 36, no. 6, 1988.
[10] CCSDS 131.0-B-2, TM Synchronization and Channel Coding, Blue Book, Issue 2, Aug. 2011. [Online]. Available: publications/archive/131x0b2ec.pdf
International Zurich Seminar on Communications (IZS), March 2-4, 2016
More information3.4. The Binomial Probability Distribution
3.4. The Binomial Probability Distribution Objectives. Binomial experiment. Binomial random variable. Using binomial tables. Mean and variance of binomial distribution. 3.4.1. Four Conditions that determined
More informationAdvances in Error Control Strategies for 5G
Advances in Error Control Strategies for 5G Jörg Kliewer The Elisha Yegal Bar-Ness Center For Wireless Communications And Signal Processing Research 5G Requirements [Nokia Networks: Looking ahead to 5G.
More informationOn the Computation of EXIT Characteristics for Symbol-Based Iterative Decoding
On the Computation of EXIT Characteristics for Symbol-Based Iterative Decoding Jörg Kliewer, Soon Xin Ng 2, and Lajos Hanzo 2 University of Notre Dame, Department of Electrical Engineering, Notre Dame,
More informationDesign of Optimal Quantizers for Distributed Source Coding
Design of Optimal Quantizers for Distributed Source Coding David Rebollo-Monedero, Rui Zhang and Bernd Girod Information Systems Laboratory, Electrical Eng. Dept. Stanford University, Stanford, CA 94305
More informationLDPC Decoder LLR Stopping Criterion
International Conference on Innovative Trends in Electronics Communication and Applications 1 International Conference on Innovative Trends in Electronics Communication and Applications 2015 [ICIECA 2015]
More informationStructured Low-Density Parity-Check Codes: Algebraic Constructions
Structured Low-Density Parity-Check Codes: Algebraic Constructions Shu Lin Department of Electrical and Computer Engineering University of California, Davis Davis, California 95616 Email:shulin@ece.ucdavis.edu
More informationRapport technique #INRS-EMT Exact Expression for the BER of Rectangular QAM with Arbitrary Constellation Mapping
Rapport technique #INRS-EMT-010-0604 Exact Expression for the BER of Rectangular QAM with Arbitrary Constellation Mapping Leszek Szczeciński, Cristian González, Sonia Aïssa Institut National de la Recherche
More informationChapter 12: Inference about One Population
Chapter 1: Inference about One Population 1.1 Introduction In this chapter, we presented the statistical inference methods used when the problem objective is to describe a single population. Sections 1.
More informationThe Super-Trellis Structure of Turbo Codes
2212 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 46, NO 6, SEPTEMBER 2000 The Super-Trellis Structure of Turbo Codes Marco Breiling, Student Member, IEEE, and Lajos Hanzo, Senior Member, IEEE Abstract
More informationSUBOPTIMALITY OF THE KARHUNEN-LOÈVE TRANSFORM FOR FIXED-RATE TRANSFORM CODING. Kenneth Zeger
SUBOPTIMALITY OF THE KARHUNEN-LOÈVE TRANSFORM FOR FIXED-RATE TRANSFORM CODING Kenneth Zeger University of California, San Diego, Department of ECE La Jolla, CA 92093-0407 USA ABSTRACT An open problem in
More informationDigital Communications
Digital Communications Chapter 8: Trellis and Graph Based Codes Saeedeh Moloudi May 7, 2014 Outline 1 Introduction 2 Convolutional Codes 3 Decoding of Convolutional Codes 4 Turbo Codes May 7, 2014 Proakis-Salehi
More informationResidual Versus Suppressed-Carrier Coherent Communications
TDA Progress Report -7 November 5, 996 Residual Versus Suppressed-Carrier Coherent Communications M. K. Simon and S. Million Communications and Systems Research Section This article addresses the issue
More informationPUNCTURED 8-PSK TURBO-TCM TRANSMISSIONS USING RECURSIVE SYSTEMATIC CONVOLUTIONAL GF ( 2 N ) ENCODERS
19th European Signal Processing Conference (EUSIPCO 2011) Barcelona, Spain, August 29 - September 2, 2011 PUCTURED 8-PSK TURBO-TCM TRASMISSIOS USIG RECURSIVE SYSTEMATIC COVOLUTIOAL GF ( 2 ) ECODERS Calin
More informationProbability and Stochastic Processes
Probability and Stochastic Processes A Friendly Introduction Electrical and Computer Engineers Third Edition Roy D. Yates Rutgers, The State University of New Jersey David J. Goodman New York University
More informationNon-Linear Turbo Codes for Interleaver-Division Multiple Access on the OR Channel.
UCLA Graduate School of Engineering - Electrical Engineering Program Non-Linear Turbo Codes for Interleaver-Division Multiple Access on the OR Channel. Miguel Griot, Andres I. Vila Casado, and Richard
More informationCoding theory: Applications
INF 244 a) Textbook: Lin and Costello b) Lectures (Tu+Th 12.15-14) covering roughly Chapters 1,9-12, and 14-18 c) Weekly exercises: For your convenience d) Mandatory problem: Programming project (counts
More informationON DISTRIBUTED ARITHMETIC CODES AND SYNDROME BASED TURBO CODES FOR SLEPIAN-WOLF CODING OF NON UNIFORM SOURCES
7th European Signal Processing Conference (EUSIPCO 2009) Glasgow, Scotland, August 24-28, 2009 ON DISTRIBUTED ARITHMETIC CODES AND SYNDROME BASED TURBO CODES FOR SLEPIAN-WOLF CODING OF NON UNIFORM SOURCES
More informationCodes on graphs and iterative decoding
Codes on graphs and iterative decoding Bane Vasić Error Correction Coding Laboratory University of Arizona Prelude Information transmission 0 0 0 0 0 0 Channel Information transmission signal 0 0 threshold
More informationUpper Bounds on the Capacity of Binary Intermittent Communication
Upper Bounds on the Capacity of Binary Intermittent Communication Mostafa Khoshnevisan and J. Nicholas Laneman Department of Electrical Engineering University of Notre Dame Notre Dame, Indiana 46556 Email:{mhoshne,
More informationPerformance of Low Density Parity Check Codes. as a Function of Actual and Assumed Noise Levels. David J.C. MacKay & Christopher P.
Performance of Low Density Parity Check Codes as a Function of Actual and Assumed Noise Levels David J.C. MacKay & Christopher P. Hesketh Cavendish Laboratory, Cambridge, CB3 HE, United Kingdom. mackay@mrao.cam.ac.uk
More informationAnalysis of methods for speech signals quantization
INFOTEH-JAHORINA Vol. 14, March 2015. Analysis of methods for speech signals quantization Stefan Stojkov Mihajlo Pupin Institute, University of Belgrade Belgrade, Serbia e-mail: stefan.stojkov@pupin.rs
More informationRandom Redundant Soft-In Soft-Out Decoding of Linear Block Codes
Random Redundant Soft-In Soft-Out Decoding of Linear Block Codes Thomas R. Halford and Keith M. Chugg Communication Sciences Institute University of Southern California Los Angeles, CA 90089-2565 Abstract
More informationAPPLICATIONS. Quantum Communications
SOFT PROCESSING TECHNIQUES FOR QUANTUM KEY DISTRIBUTION APPLICATIONS Marina Mondin January 27, 2012 Quantum Communications In the past decades, the key to improving computer performance has been the reduction
More information!P x. !E x. Bad Things Happen to Good Signals Spring 2011 Lecture #6. Signal-to-Noise Ratio (SNR) Definition of Mean, Power, Energy ( ) 2.
Bad Things Happen to Good Signals oise, broadly construed, is any change to the signal from its expected value, x[n] h[n], when it arrives at the receiver. We ll look at additive noise and assume the noise
More informationNew Puncturing Pattern for Bad Interleavers in Turbo-Codes
SERBIAN JOURNAL OF ELECTRICAL ENGINEERING Vol. 6, No. 2, November 2009, 351-358 UDK: 621.391.7:004.052.4 New Puncturing Pattern for Bad Interleavers in Turbo-Codes Abdelmounaim Moulay Lakhdar 1, Malika
More information1 1 0, g Exercise 1. Generator polynomials of a convolutional code, given in binary form, are g
Exercise Generator polynomials of a convolutional code, given in binary form, are g 0, g 2 0 ja g 3. a) Sketch the encoding circuit. b) Sketch the state diagram. c) Find the transfer function TD. d) What
More informationOn the Throughput, Capacity and Stability Regions of Random Multiple Access over Standard Multi-Packet Reception Channels
On the Throughput, Capacity and Stability Regions of Random Multiple Access over Standard Multi-Packet Reception Channels Jie Luo, Anthony Ephremides ECE Dept. Univ. of Maryland College Park, MD 20742
More informationLloyd-Max Quantization of Correlated Processes: How to Obtain Gains by Receiver-Sided Time-Variant Codebooks
Lloyd-Max Quantization of Correlated Processes: How to Obtain Gains by Receiver-Sided Time-Variant Codebooks Sai Han and Tim Fingscheidt Institute for Communications Technology, Technische Universität
More informationAnalytical Performance of One-Step Majority Logic Decoding of Regular LDPC Codes
Analytical Performance of One-Step Majority Logic Decoding of Regular LDPC Codes Rathnakumar Radhakrishnan, Sundararajan Sankaranarayanan, and Bane Vasić Department of Electrical and Computer Engineering
More informationLow-density parity-check codes
Low-density parity-check codes From principles to practice Dr. Steve Weller steven.weller@newcastle.edu.au School of Electrical Engineering and Computer Science The University of Newcastle, Callaghan,
More informationOn Weight Enumerators and MacWilliams Identity for Convolutional Codes
On Weight Enumerators and MacWilliams Identity for Convolutional Codes Irina E Bocharova 1, Florian Hug, Rolf Johannesson, and Boris D Kudryashov 1 1 Dept of Information Systems St Petersburg Univ of Information
More informationLessons in Estimation Theory for Signal Processing, Communications, and Control
Lessons in Estimation Theory for Signal Processing, Communications, and Control Jerry M. Mendel Department of Electrical Engineering University of Southern California Los Angeles, California PRENTICE HALL
More informationLecture 12. Block Diagram
Lecture 12 Goals Be able to encode using a linear block code Be able to decode a linear block code received over a binary symmetric channel or an additive white Gaussian channel XII-1 Block Diagram Data
More informationThe E8 Lattice and Error Correction in Multi-Level Flash Memory
The E8 Lattice and Error Correction in Multi-Level Flash Memory Brian M Kurkoski University of Electro-Communications Tokyo, Japan kurkoski@iceuecacjp Abstract A construction using the E8 lattice and Reed-Solomon
More informationChannel Coding I. Exercises SS 2017
Channel Coding I Exercises SS 2017 Lecturer: Dirk Wübben Tutor: Shayan Hassanpour NW1, Room N 2420, Tel.: 0421/218-62387 E-mail: {wuebben, hassanpour}@ant.uni-bremen.de Universität Bremen, FB1 Institut
More informationIMAGE COMPRESSION OF DIGITIZED NDE X-RAY RADIOGRAPHS. Brian K. LoveweIl and John P. Basart
IMAGE COMPRESSIO OF DIGITIZED DE X-RAY RADIOGRAPHS BY ADAPTIVE DIFFERETIAL PULSE CODE MODULATIO Brian K. LoveweIl and John P. Basart Center for ondestructive Evaluation and the Department of Electrical
More informationBinary Convolutional Codes of High Rate Øyvind Ytrehus
Binary Convolutional Codes of High Rate Øyvind Ytrehus Abstract The function N(r; ; d free ), defined as the maximum n such that there exists a binary convolutional code of block length n, dimension n
More informationNew Designs for Bit-Interleaved Coded Modulation with Hard-Decision Feedback Iterative Decoding
1 New Designs for Bit-Interleaved Coded Modulation with Hard-Decision Feedback Iterative Decoding Alireza Kenarsari-Anhari, Student Member, IEEE, and Lutz Lampe, Senior Member, IEEE Abstract Bit-interleaved
More informationCodes on graphs and iterative decoding
Codes on graphs and iterative decoding Bane Vasić Error Correction Coding Laboratory University of Arizona Funded by: National Science Foundation (NSF) Seagate Technology Defense Advanced Research Projects
More informationHadamard Codes. A Hadamard matrix of order n is a matrix H n with elements 1 or 1 such that H n H t n = n I n. For example,
Coding Theory Massoud Malek Hadamard Codes A Hadamard matrix of order n is a matrix H n with elements 1 or 1 such that H n H t n = n I n. For example, [ ] 1 1 1 1 1 1 1 1 1 1 H 1 = [1], H 2 =, H 1 1 4
More informationHidden Markov Models Part 1: Introduction
Hidden Markov Models Part 1: Introduction CSE 6363 Machine Learning Vassilis Athitsos Computer Science and Engineering Department University of Texas at Arlington 1 Modeling Sequential Data Suppose that
More informationTurbo Codes for xdsl modems
Turbo Codes for xdsl modems Juan Alberto Torres, Ph. D. VOCAL Technologies, Ltd. (http://www.vocal.com) John James Audubon Parkway Buffalo, NY 14228, USA Phone: +1 716 688 4675 Fax: +1 716 639 0713 Email:
More informationUNIT I INFORMATION THEORY. I k log 2
UNIT I INFORMATION THEORY Claude Shannon 1916-2001 Creator of Information Theory, lays the foundation for implementing logic in digital circuits as part of his Masters Thesis! (1939) and published a paper
More informationThe t-distribution. Patrick Breheny. October 13. z tests The χ 2 -distribution The t-distribution Summary
Patrick Breheny October 13 Patrick Breheny Biostatistical Methods I (BIOS 5710) 1/25 Introduction Introduction What s wrong with z-tests? So far we ve (thoroughly!) discussed how to carry out hypothesis
More informationAn Efficient Algorithm for Finding Dominant Trapping Sets of LDPC Codes
An Efficient Algorithm for Finding Dominant Trapping Sets of LDPC Codes Mehdi Karimi, Student Member, IEEE and Amir H. Banihashemi, Senior Member, IEEE Abstract arxiv:1108.4478v2 [cs.it] 13 Apr 2012 This
More informationLecture 7: Confidence interval and Normal approximation
Lecture 7: Confidence interval and Normal approximation 26th of November 2015 Confidence interval 26th of November 2015 1 / 23 Random sample and uncertainty Example: we aim at estimating the average height
More information