INFORMATION PROCESSING ABILITY OF BINARY DETECTORS AND BLOCK DECODERS. Michael A. Lexa and Don H. Johnson
Rice University, Department of Electrical and Computer Engineering, Houston, TX

ABSTRACT

This paper applies the concepts of information processing [1] to the study of binary detectors and block decoders in a single-user digital communication system. We quantify performance in terms of the information transfer ratio, which measures how well systems preserve discrimination information between two stochastic signals. We investigate hard decision detectors and minimum distance decoders in various additive noise environments. We show that likelihood ratio digital demodulators maximize the information transfer ratio across binary detectors.

1. INTRODUCTION

In our theory of information processing, information is defined only with respect to the ultimate receiver. Consequently, no single objective measure can quantify the information a signal expresses. For example, this paper (presumably) means more to a signal processing researcher than it does to a Shakespearean scholar. To probe how well systems process information, we calculate how well an informational change at the input is expressed at the output. The complete theoretical basis of this theory can be found elsewhere [1]. Briefly, to quantify an informational change, we calculate the information-theoretic distance, specifically the Kullback-Leibler (KL) distance, between the probability distributions characterizing the signals that encode two pieces of information. We assume the signals, but not the information, are stochastic. The Data Processing Theorem [1] says that the KL distance between the outputs of any system responding to the two inputs must be less than or equal to the distance calculated at the input. Here, we use this framework to characterize how well likelihood ratio detectors and block decoders process the information encoded in their inputs.
This work was supported by the National Science Foundation under Grant CCR. The word "distance" does not imply a metric, since the KL distance is not symmetric in its arguments and does not satisfy the triangle inequality.

We adopt the digital communication system model shown schematically in Figure 1. The input binary data word u_α1 of length K represents the information the receiver ultimately wants. The encoder maps the data word into a code word of length N (u_α → v_α) and passes the code word to the modulator. The modulator maps the code word into its signal representation (v_α → s_α) and transmits a continuous-time signal using an antipodal signal set. The channel adds white noise, and the total transmission interval for each data word is KT seconds. Viewed from the framework of information processing, we say that the information is encoded in the received signal vector r_α. Obviously, we use the word "encode" in an untraditional sense. What we mean is the following. The theory of information processing assumes that information does not exist in a tangible form; rather, it is always contained within a signal. Thus, the received signal vector contains information about the data word. Normally, we would describe the received signal as a noisy version of the transmitted signal, but viewing the information as being encoded makes it easier to think about this theory in an arbitrary setting.

We calculate three KL distances. The first is between the two received signal vectors r_α1, r_α2 at the input to the detector. (The subscripts α1 and α2 distinguish the two transmitted pieces of information.) The second is between the detected binary words w_α1, w_α2 at the output of the detector (input to the decoder), and the third is between the decoded binary words û_α1, û_α2. We denote these distances by D_r(α1‖α2), D_w(α1‖α2), and D_û(α1‖α2), respectively. These distances represent the informational change between these particular signals.
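The footnote's point that the KL distance is not a metric is easy to check numerically. A minimal sketch (the function and the two example distributions are ours, not the paper's):

```python
from math import log

def kl_distance(p, q):
    """Kullback-Leibler distance D(p || q) = sum_i p_i * log(p_i / q_i).

    Terms with p_i == 0 contribute zero, by the usual convention.
    """
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.9, 0.1]   # distribution induced by one piece of information
q = [0.5, 0.5]   # distribution induced by the other
print(kl_distance(p, q))   # D(p || q)
print(kl_distance(q, p))   # D(q || p) differs: the KL distance is not symmetric
```

Swapping the arguments changes the value, which is why the paper is careful to call it a "distance" rather than a metric.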
Certainly, a change in information (that is, a change in data words) induces the distance, but more importantly, through Stein's Lemma [2], the KL distance is the exponential decay rate of the false alarm probability of an optimum Neyman-Pearson detector. Thus, these distances quantify our ability to discriminate between the two information-bearing signals at the input and output of the detector and decoder. Because of the Data Processing Theorem [1], the detector and the decoder can at best preserve the distance presented at their input and at worst reduce it to zero, causing the ultimate recipient of the transmission to lose all ability to discern the informational change.

Fig. 1. Two binary data blocks u_α1, u_α2 are separately transmitted through the chain encoder → modulator → channel (additive noise n(t)) → demodulator → detector → decoder, i.e., u_α → v_α → s_α(t) → r_α(t) → r_α → w_α → û_α. The Kullback-Leibler distance between the distributions induced by each of the data blocks is calculated at the input and output of the detector and the decoder. The ratios of the input and output distances provide a measure of how well the detector and decoder preserve the informational change encoded in their input signals.

The performance criterion we use is the information transfer ratio, denoted by γ and defined as the ratio of the KL distances at the output and input of any system. It is a number between zero and one and reflects the fraction of the informational change preserved across a system. Here, we study the information transfer ratios of the detector and the decoder:

    γ_det = D_w(α1‖α2) / D_r(α1‖α2)    (1)

    γ_dec = D_û(α1‖α2) / D_w(α1‖α2)    (2)

Ideally, the information transfer ratios across each of these systems would equal one, indicating no informational loss. In reality, however, we expect informational losses because the probability of error is never zero. The overall information transfer ratio across both the detector and decoder is simply the product of the individual information transfer ratios [3]:

    γ_overall = γ_det · γ_dec

2. KULLBACK-LEIBLER DISTANCE CALCULATIONS

Each transmitted data word induces a probability distribution on the received signal vectors at the output of the demodulator. For example, if the channel adds white Gaussian noise, then each element of the received vector r_α is normally distributed with mean ±√(K E_b / N) and variance N_0/2, depending upon whether a zero or a one is transmitted. (E_b is the energy per data bit.)
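For this Gaussian case, the per-element KL distance at the detector input works out to 4ξ and the hard-decision bit error probability to Q(√(2ξ)), with ξ = K·E_b/(N·N_0) (these are the Gaussian entries of Table 1). A numeric sketch of the detector's information transfer ratio under those assumptions (helper names are ours):

```python
from math import erfc, sqrt, log

def Q(x):
    """Gaussian tail probability Pr[Z > x] for Z ~ N(0, 1)."""
    return 0.5 * erfc(x / sqrt(2.0))

def gamma_det(xi):
    """Information transfer ratio of the hard-decision detector per
    received element, in Gaussian noise, at xi = K*Eb/(N*N0)."""
    d_in = 4.0 * xi                  # KL distance between the two received densities
    pe = Q(sqrt(2.0 * xi))           # hard-decision bit error probability
    # KL distance between Bernoulli(1 - pe) and Bernoulli(pe) at the output
    d_out = (1.0 - 2.0 * pe) * log((1.0 - pe) / pe)
    return d_out / d_in

# The ratio lies between 0 and 1 and falls as the SNR grows.
for xi in (0.1, 1.0, 10.0):
    print(xi, gamma_det(xi))
```

The printed values reproduce the behavior discussed in Section 3: γ_det stays strictly between zero and one and decreases with increasing SNR, approaching 2/π as SNR → 0.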
The statistical independence of the received vector elements allows us to write the KL distance at the input of the detector as a sum of the distances between the received vector elements [3]:

    D_r(α1‖α2) = Σ_{j=1}^{N} D_{rj}(α1‖α2)    (3)

Because D_{rj}(α1‖α2) = 0 if the jth bits in each code word are the same, we can rewrite this expression in terms of the Hamming distance between v_α1 and v_α2:

    D_r(α1‖α2) = d_H(v_α1, v_α2) · D_{rj}(α1‖α2)    (4)

where D_{rj}(α1‖α2) is the common per-element distance for the positions in which the code words differ. Table 1 lists these KL distances for various noise distributions as a function of SNR.

The detector compares each received sample r_{α1,j} (j = 1, …, N) to a threshold and declares as its output either a one or a zero. The detected binary word w_α1 is the collection of N such outputs. The decoder maps w_α1 to an estimate of the transmitted data word (w_α1 → û_α1, w_α2 → û_α2). Specifically, its output is the code word closest in Hamming distance to w_α1 (minimum distance decoding).

We calculate the KL distance at the output of the detector by viewing each binary vector w_n (n = 1, …, 2^N) as the output of a binary symmetric channel with error probability P_e. (See Table 1 for expressions of P_e for different noise distributions.) Accordingly, the probability of receiving w_n when we transmit v_α1 (or equivalently u_α1) is

    Pr[w_n | u_α1] = P_e^{d_H(w_n, v_α1)} (1 − P_e)^{N − d_H(w_n, v_α1)}.

These probabilities define the discrete distribution over the output of the detector; thus, by definition, we obtain

    D_w(α1‖α2) = Σ_{j=1}^{N} D_{wj}(α1‖α2) = Σ_{n=1}^{2^N} Pr[w_n | u_α1] log ( Pr[w_n | u_α1] / Pr[w_n | u_α2] )    (5)
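For small N, equations (3)–(5) can be verified by enumerating all 2^N detector outputs under the binary symmetric channel model. A brute-force sketch (function names are illustrative):

```python
import itertools
from math import log

def hamming(a, b):
    """Hamming distance between two binary tuples."""
    return sum(x != y for x, y in zip(a, b))

def d_w(v1, v2, pe):
    """KL distance at the detector output, summing over all 2^N words
    with Pr[w | v] = pe^d_H(w, v) * (1 - pe)^(N - d_H(w, v))."""
    N = len(v1)
    total = 0.0
    for w in itertools.product((0, 1), repeat=N):
        p1 = pe ** hamming(w, v1) * (1 - pe) ** (N - hamming(w, v1))
        p2 = pe ** hamming(w, v2) * (1 - pe) ** (N - hamming(w, v2))
        total += p1 * log(p1 / p2)
    return total

v1, v2 = (0, 0, 1, 0), (1, 0, 1, 1)
pe = 0.1
# Bit independence collapses the full sum to d_H times the per-bit KL
# distance, mirroring the decomposition in equation (4).
per_bit = (1 - pe) * log((1 - pe) / pe) + pe * log(pe / (1 - pe))
print(d_w(v1, v2, pe), hamming(v1, v2) * per_bit)
```

The two printed numbers agree, confirming that only the bit positions where the code words differ contribute to the distance.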
Table 1. [Flattened in this transcription.] Columns two and three list the Kullback-Leibler distances between the received random variables r_{α1,j} and r_{α2,j} and the detector's hard-decision bit error probabilities P_e for various noise distributions; the legible Gaussian entries are D_{rj}(α1‖α2) = 4ξ and P_e = Q(√(2ξ)). In each expression, ξ = K E_b/(N N_0), where the signal-to-noise ratio per bit (SNR) equals E_b/N_0. For the Cauchy distribution, the quantity N_0 is understood to be the width parameter. The fourth and fifth columns list the asymptotic values of the information transfer ratio across the detector as SNR → 0 and SNR → ∞. (See Figure 2.) When no error control coding is employed, K = N.

Calculation of the KL distance at the output of the decoder hinges on the decoding probabilities. Assuming u_α1 is transmitted, the probability of decoding it as û_m (m = 1, …, 2^K) is the total probability mass of the decoding sphere of v_m:

    Pr[û_m | u_α1] = Σ_{l=1}^{L_m} Pr[w_l | u_α1]

Here, l indexes the L_m binary words within the decoding sphere of v_m. To ensure the KL distance at the output of the decoder is defined, we assume there are no failure-to-decode events; in other words, we assume each w_n lies within a decoding sphere. Similar to equation (5), we have

    D_û(α1‖α2) = Σ_{m=1}^{2^K} Pr[û_m | u_α1] log ( Pr[û_m | u_α1] / Pr[û_m | u_α2] ).    (6)

In the special case when no error control coding is employed, v_α = u_α, N = K, and the decoder performs no function. The output of the detector is then the estimate of the transmitted data word. The expression for the KL distance at the input of the detector remains unchanged, except that u_α substitutes for v_α in equation (4). The KL distance at the output of the detector can be written like equation (6), but with Pr[û_m | u_α1] replaced by

    Pr[û_m | u_α1] = P_e^{d_H(u_m, u_α1)} (1 − P_e)^{K − d_H(u_m, u_α1)}.

In this case, we can also simplify the output KL distance in much the same way as equation (3).
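The decoding-sphere probabilities in equation (6) can likewise be computed by brute force for a small code. A sketch using the (3,1) code under the same BSC model of the detector (names are ours):

```python
import itertools
from math import log

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decoded_dist(code, v_tx, pe):
    """Distribution over decoded code words, assuming v_tx was sent
    through a BSC(pe) and minimum distance decoding is used.
    Every detected word falls in some decoding sphere, so there are
    no failure-to-decode events."""
    N = len(v_tx)
    probs = [0.0] * len(code)
    for w in itertools.product((0, 1), repeat=N):
        pw = pe ** hamming(w, v_tx) * (1 - pe) ** (N - hamming(w, v_tx))
        nearest = min(range(len(code)), key=lambda m: hamming(w, code[m]))
        probs[nearest] += pw
    return probs

code = [(0, 0, 0), (1, 1, 1)]     # the (3,1) code
pe = 0.1
p1 = decoded_dist(code, code[0], pe)
p2 = decoded_dist(code, code[1], pe)
d_u = sum(a * log(a / b) for a, b in zip(p1, p2))   # equation (6)
print(p1, d_u)
```

With pe = 0.1, correct decoding occurs with probability (1 − pe)^3 + 3 pe (1 − pe)^2 = 0.972, and d_u is the KL distance between the two induced decoding distributions.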
Because the bit estimates are statistically independent when there is no coding, we can write

    D_û(α1‖α2) = d_H(u_α1, u_α2) [ (1 − P_e) log ( (1 − P_e) / P_e ) + P_e log ( P_e / (1 − P_e) ) ]    (7)

The bracketed term is the KL distance between the binary distributions which result every time a data bit is transmitted.

3. EXAMPLES AND DISCUSSION

We study three fundamental examples. We investigate performance when no error control coding is used (the uncoded case), and then consider two Hamming codes, the (3,1) and the (7,4). In order to make fair comparisons between uncoded and coded cases, we maintain constant data rates. This requirement constrains the total transmission time of the N coded bits of an (N, K) code to KT seconds. (It takes KT seconds to transmit K data bits in the uncoded cases.)

We plot the information transfer ratios for four noise distributions in Figure 2 for the uncoded case and list their respective asymptotic values of γ_det in Table 1. These curves show the informational loss incurred by making hard decisions at the detector. Notice the decrease in performance as the SNR increases. It is not due to the output KL distances decreasing, but rather to the growing proportional difference between the input and the output distances. (See Figure 3.) This means the detector better preserves the informational change at lower SNR values than at higher values. However, even though the detector is less efficient at high SNR, the loss is not great. Because the information transfer ratio across the detector is completely independent of the input data words, it is, in particular, independent of the input data word length for the uncoded case.

We prove in Appendix A that a likelihood ratio digital demodulator maximizes the information transfer ratio across binary detectors. Thus, the curves in Figure 2 represent the best achievable performance across any hard decision detector. Figure 4 plots γ_det, γ_dec, and γ_overall when we use the (3,1) and (7,4) Hamming codes.
The top row exhibits the losses across the detector; the middle row, across the decoder; and the bottom row, across both systems.

Fig. 2. The performance of the detector in terms of the information transfer ratio is shown for the uncoded case. The performance is independent of the data word length K.

Fig. 3. The widening gap between the Kullback-Leibler distances at the input and output of the hard decision detector illustrates why the information transfer ratio decreases with increasing SNR. This particular plot is generated with K = 4.

Because of the constant data rate constraint, the information transfer ratio curves across the detector are scaled versions of the curves in Figure 2. The examples studied here show a relatively constant additional loss across the decoder. These curves are identical when plotted against the probability of error. Why they are not monotonic is an issue we are studying. Apparently, a more efficient code, the (7,4) code here, yields larger information transfer ratios.

The performance across the decoder depends upon the choice of the transmitted code words. In general, the dependence is related in a complicated way to the Hamming distance, but for the (7,4) code studied here, greater distance implies better performance. For example, instead of choosing two code words with a Hamming distance of 4 as in Figure 4, we could choose two with a Hamming distance of 7. As shown in Figure 5, compared to the right middle panel of Figure 4, better fidelity results at high SNR.

Fig. 5. The information transfer ratio across the decoder for the (7,4) code is shown (α1 = 1, α2 = 16). The improved performance, compared with the right middle plot of Figure 4, is due to the increase in Hamming distance from 4 to 7.

Within the framework of information processing, the concept of coding gain does not exist. Because of the Data Processing Theorem, error control coding simply cannot regain the informational loss across the detector. Once the loss occurs, no post-processing can compensate for it. More powerful codes and decoding schemes could conceivably improve the informational efficiency across the decoder. At present, however, no methods or even approaches exist for designing codes and decoding schemes to maximize γ across the decoder. Improvements can be made across the detector if we introduce soft decision detectors. In fact, it is not difficult to think of examples in which this is the case. Such investigations could possibly lead to using γ, for example, to systematically study soft decision decoding.

A. APPENDIX

Consider a general binary detection problem where r_α1 and r_α2 are the two possible received signal vectors presented at the input of the detector under hypotheses α1 and α2, respectively. Let p(r|α1) and p(r|α2) be the conditional probability density functions associated with each hypothesis. Denote the output decisions of the detector as Λ1 and Λ2. The information transfer ratio equals

    γ = D_Λ(α1‖α2) / D_r(α1‖α2)
      = [ P_D log(P_D/P_F) + (1 − P_D) log( (1 − P_D)/(1 − P_F) ) ] / ∫ p(r|α1) log ( p(r|α1) / p(r|α2) ) dr

where P_D is the probability of detection and P_F is the probability of false alarm. Explicitly,

    p(Λ1|α1) = 1 − P_F    p(Λ2|α1) = P_F
    p(Λ1|α2) = 1 − P_D    p(Λ2|α2) = P_D.

Maximizing γ is equivalent to maximizing the numerator, which translates into finding values of P_D and P_F that maximize

    P_D log(P_D/P_F) + (1 − P_D) log( (1 − P_D)/(1 − P_F) )
      = −H(P_D) − P_D log P_F − (1 − P_D) log(1 − P_F),    (8)

where H(·) denotes the binary entropy function.
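The maximization that equation (8) sets up can be checked numerically: among decision regions with a common false-alarm probability, the likelihood ratio region yields the largest P_D and hence the largest value of (8). A sketch assuming scalar observations with means ±m in unit-variance Gaussian noise (the helper names and the particular thresholds are illustrative):

```python
from math import erfc, log, sqrt

def Q(x):
    """Gaussian tail probability Pr[Z > x] for Z ~ N(0, 1)."""
    return 0.5 * erfc(x / sqrt(2.0))

def Qinv(p):
    """Invert Q by bisection (Q is strictly decreasing)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Q(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def numerator(pd, pf):
    """The quantity maximized in equation (8)."""
    return pd * log(pd / pf) + (1 - pd) * log((1 - pd) / (1 - pf))

m = 1.0   # antipodal means: -m under alpha_1, +m under alpha_2

# A non-likelihood-ratio region: decide alpha_2 when r > t2 or r < t1
t1, t2 = -2.41, 0.41
pf_b = Q(t2 + m) + 1.0 - Q(t1 + m)
pd_b = Q(t2 - m) + 1.0 - Q(t1 - m)

# The likelihood ratio region {r > tA}, tuned to the same false-alarm rate
tA = Qinv(pf_b) - m
pd_a = Q(tA - m)

print(pd_a > pd_b)                                   # higher detection probability
print(numerator(pd_a, pf_b) > numerator(pd_b, pf_b)) # larger value of (8)
```

This is just the Neyman-Pearson ordering restated in KL terms: with P_F fixed, (8) is increasing in P_D, so the region that detects best also preserves the most distance.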
Fig. 4. Plots of the information transfer ratio across the detector (top row), across the decoder (middle row), and overall (bottom row) for the (3,1) (left column) and (7,4) (right column) Hamming codes are shown for various noise distributions. The detector makes hard decisions and the decoder uses minimum distance decoding. For the (3,1) code, α1 = 1 and α2 = 2; for the (7,4) code, α1 = 1 and α2 = 5. We arbitrarily reference the plots to the all-zero code words.

Since P_D and P_F are coupled, they cannot be independently optimized, so without loss of generality, assume P_F = a and P_D = a + l. Substituting these values into equation (8) and setting its derivative with respect to l equal to zero, we obtain

    log [ (a + l − (a² + al)) / (a − (a² + al)) ] = 0.

For a given value of a (P_F), we note that the derivative is positive for l > 0, negative for l < 0, and zero when l = 0 (a minimum). Thus, to maximize the numerator of equation (8), we choose the largest possible l, constrained to l ≤ 1 − a. (The upper bound results from the fact that P_D and P_F are probabilities and thus must lie between zero and one.) Formally, for a given false-alarm probability,

    max_{l ≤ 1 − a} l = max (P_D − P_F) = max_{Λ2} ∫_{Λ2} [ p(r|α2) − p(r|α1) ] dr.

Therefore, Λ2 should be defined as Λ2 = {r : p(r|α2) > p(r|α1)}, which is exactly the condition of the likelihood ratio test. This result is general and holds for all noise distributions.

B. REFERENCES

[1] S. Sinanović and D. H. Johnson, "Toward a theory of information processing," submitted to IEEE Transactions on Signal Processing, 2002.
[2] T. M. Cover and J. A. Thomas, Elements of Information Theory, Wiley, 1991.
[3] S. Sinanović, Toward a Theory of Information Processing, Master of Science thesis, Rice University, Houston, TX, 1999.
2012 IEEE International Symposium on Information Theory Proceedings Efficient Decoding of Permutation Codes Obtained from Distance Preserving Maps Yeow Meng Chee and Punarbasu Purkayastha Division of Mathematical
More informationDigital Band-pass Modulation PROF. MICHAEL TSAI 2011/11/10
Digital Band-pass Modulation PROF. MICHAEL TSAI 211/11/1 Band-pass Signal Representation a t g t General form: 2πf c t + φ t g t = a t cos 2πf c t + φ t Envelope Phase Envelope is always non-negative,
More informationChapter 7. Error Control Coding. 7.1 Historical background. Mikael Olofsson 2005
Chapter 7 Error Control Coding Mikael Olofsson 2005 We have seen in Chapters 4 through 6 how digital modulation can be used to control error probabilities. This gives us a digital channel that in each
More informationLinear Programming Decoding of Binary Linear Codes for Symbol-Pair Read Channels
1 Linear Programming Decoding of Binary Linear Codes for Symbol-Pair Read Channels Shunsuke Horii, Toshiyasu Matsushima, and Shigeichi Hirasawa arxiv:1508.01640v2 [cs.it] 29 Sep 2015 Abstract In this paper,
More informationEXTENDING THE DORSCH DECODER FOR EFFICIENT SOFT DECISION DECODING OF LINEAR BLOCK CODES SEAN MICHAEL COLLISON
EXTENDING THE DORSCH DECODER FOR EFFICIENT SOFT DECISION DECODING OF LINEAR BLOCK CODES By SEAN MICHAEL COLLISON A thesis submitted in partial fulfillment of the requirements for the degree of MASTER OF
More informationHypothesis testing (cont d)
Hypothesis testing (cont d) Ulrich Heintz Brown University 4/12/2016 Ulrich Heintz - PHYS 1560 Lecture 11 1 Hypothesis testing Is our hypothesis about the fundamental physics correct? We will not be able
More informationLecture 7. Union bound for reducing M-ary to binary hypothesis testing
Lecture 7 Agenda for the lecture M-ary hypothesis testing and the MAP rule Union bound for reducing M-ary to binary hypothesis testing Introduction of the channel coding problem 7.1 M-ary hypothesis testing
More informationApplications of Information Geometry to Hypothesis Testing and Signal Detection
CMCAA 2016 Applications of Information Geometry to Hypothesis Testing and Signal Detection Yongqiang Cheng National University of Defense Technology July 2016 Outline 1. Principles of Information Geometry
More informationSummary: SER formulation. Binary antipodal constellation. Generic binary constellation. Constellation gain. 2D constellations
TUTORIAL ON DIGITAL MODULATIONS Part 8a: Error probability A [2011-01-07] 07] Roberto Garello, Politecnico di Torino Free download (for personal use only) at: www.tlc.polito.it/garello 1 Part 8a: Error
More informationFusion of Decisions Transmitted Over Fading Channels in Wireless Sensor Networks
Fusion of Decisions Transmitted Over Fading Channels in Wireless Sensor Networks Biao Chen, Ruixiang Jiang, Teerasit Kasetkasem, and Pramod K. Varshney Syracuse University, Department of EECS, Syracuse,
More informationLecture 8: Shannon s Noise Models
Error Correcting Codes: Combinatorics, Algorithms and Applications (Fall 2007) Lecture 8: Shannon s Noise Models September 14, 2007 Lecturer: Atri Rudra Scribe: Sandipan Kundu& Atri Rudra Till now we have
More informationCooperative Spectrum Sensing for Cognitive Radios under Bandwidth Constraints
Cooperative Spectrum Sensing for Cognitive Radios under Bandwidth Constraints Chunhua Sun, Wei Zhang, and haled Ben Letaief, Fellow, IEEE Department of Electronic and Computer Engineering The Hong ong
More informationExpectation propagation for signal detection in flat-fading channels
Expectation propagation for signal detection in flat-fading channels Yuan Qi MIT Media Lab Cambridge, MA, 02139 USA yuanqi@media.mit.edu Thomas Minka CMU Statistics Department Pittsburgh, PA 15213 USA
More informationMMSE DECODING FOR ANALOG JOINT SOURCE CHANNEL CODING USING MONTE CARLO IMPORTANCE SAMPLING
MMSE DECODING FOR ANALOG JOINT SOURCE CHANNEL CODING USING MONTE CARLO IMPORTANCE SAMPLING Yichuan Hu (), Javier Garcia-Frias () () Dept. of Elec. and Comp. Engineering University of Delaware Newark, DE
More informationEncoding or decoding
Encoding or decoding Decoding How well can we learn what the stimulus is by looking at the neural responses? We will discuss two approaches: devise and evaluate explicit algorithms for extracting a stimulus
More informationA New Interpretation of Information Rate
A New Interpretation of Information Rate reproduced with permission of AT&T By J. L. Kelly, jr. (Manuscript received March 2, 956) If the input symbols to a communication channel represent the outcomes
More informationThreshold Optimization for Capacity-Achieving Discrete Input One-Bit Output Quantization
Threshold Optimization for Capacity-Achieving Discrete Input One-Bit Output Quantization Rudolf Mathar Inst. for Theoretical Information Technology RWTH Aachen University D-556 Aachen, Germany mathar@ti.rwth-aachen.de
More informationA Generalized Restricted Isometry Property
1 A Generalized Restricted Isometry Property Jarvis Haupt and Robert Nowak Department of Electrical and Computer Engineering, University of Wisconsin Madison University of Wisconsin Technical Report ECE-07-1
More informationUncertainty. Jayakrishnan Unnikrishnan. CSL June PhD Defense ECE Department
Decision-Making under Statistical Uncertainty Jayakrishnan Unnikrishnan PhD Defense ECE Department University of Illinois at Urbana-Champaign CSL 141 12 June 2010 Statistical Decision-Making Relevant in
More informationHypothesis Testing with Communication Constraints
Hypothesis Testing with Communication Constraints Dinesh Krithivasan EECS 750 April 17, 2006 Dinesh Krithivasan (EECS 750) Hyp. testing with comm. constraints April 17, 2006 1 / 21 Presentation Outline
More informationUpper Bounds on the Capacity of Binary Intermittent Communication
Upper Bounds on the Capacity of Binary Intermittent Communication Mostafa Khoshnevisan and J. Nicholas Laneman Department of Electrical Engineering University of Notre Dame Notre Dame, Indiana 46556 Email:{mhoshne,
More informationDetection theory. H 0 : x[n] = w[n]
Detection Theory Detection theory A the last topic of the course, we will briefly consider detection theory. The methods are based on estimation theory and attempt to answer questions such as Is a signal
More informationOn Two Probabilistic Decoding Algorithms for Binary Linear Codes
On Two Probabilistic Decoding Algorithms for Binary Linear Codes Miodrag Živković Abstract A generalization of Sullivan inequality on the ratio of the probability of a linear code to that of any of its
More informationInteractions of Information Theory and Estimation in Single- and Multi-user Communications
Interactions of Information Theory and Estimation in Single- and Multi-user Communications Dongning Guo Department of Electrical Engineering Princeton University March 8, 2004 p 1 Dongning Guo Communications
More informationQUANTIZATION FOR DISTRIBUTED ESTIMATION IN LARGE SCALE SENSOR NETWORKS
QUANTIZATION FOR DISTRIBUTED ESTIMATION IN LARGE SCALE SENSOR NETWORKS Parvathinathan Venkitasubramaniam, Gökhan Mergen, Lang Tong and Ananthram Swami ABSTRACT We study the problem of quantization for
More informationMaximum Likelihood Decoding of Codes on the Asymmetric Z-channel
Maximum Likelihood Decoding of Codes on the Asymmetric Z-channel Pål Ellingsen paale@ii.uib.no Susanna Spinsante s.spinsante@univpm.it Angela Barbero angbar@wmatem.eis.uva.es May 31, 2005 Øyvind Ytrehus
More informationDiversity-Multiplexing Tradeoff in MIMO Channels with Partial CSIT. ECE 559 Presentation Hoa Pham Dec 3, 2007
Diversity-Multiplexing Tradeoff in MIMO Channels with Partial CSIT ECE 559 Presentation Hoa Pham Dec 3, 2007 Introduction MIMO systems provide two types of gains Diversity Gain: each path from a transmitter
More informationChapter 2. Error Correcting Codes. 2.1 Basic Notions
Chapter 2 Error Correcting Codes The identification number schemes we discussed in the previous chapter give us the ability to determine if an error has been made in recording or transmitting information.
More informationIntroduction to Statistical Inference
Structural Health Monitoring Using Statistical Pattern Recognition Introduction to Statistical Inference Presented by Charles R. Farrar, Ph.D., P.E. Outline Introduce statistical decision making for Structural
More informationBASICS OF DETECTION AND ESTIMATION THEORY
BASICS OF DETECTION AND ESTIMATION THEORY 83050E/158 In this chapter we discuss how the transmitted symbols are detected optimally from a noisy received signal (observation). Based on these results, optimal
More informationIN HYPOTHESIS testing problems, a decision-maker aims
IEEE SIGNAL PROCESSING LETTERS, VOL. 25, NO. 12, DECEMBER 2018 1845 On the Optimality of Likelihood Ratio Test for Prospect Theory-Based Binary Hypothesis Testing Sinan Gezici, Senior Member, IEEE, and
More informationCHAPTER 14. Based on the info about the scattering function we know that the multipath spread is T m =1ms, and the Doppler spread is B d =0.2 Hz.
CHAPTER 4 Problem 4. : Based on the info about the scattering function we know that the multipath spread is T m =ms, and the Doppler spread is B d =. Hz. (a) (i) T m = 3 sec (ii) B d =. Hz (iii) ( t) c
More informationCooperative Communication with Feedback via Stochastic Approximation
Cooperative Communication with Feedback via Stochastic Approximation Utsaw Kumar J Nicholas Laneman and Vijay Gupta Department of Electrical Engineering University of Notre Dame Email: {ukumar jnl vgupta}@ndedu
More informationMapper & De-Mapper System Document
Mapper & De-Mapper System Document Mapper / De-Mapper Table of Contents. High Level System and Function Block. Mapper description 2. Demodulator Function block 2. Decoder block 2.. De-Mapper 2..2 Implementation
More informationthat efficiently utilizes the total available channel bandwidth W.
Signal Design for Band-Limited Channels Wireless Information Transmission System Lab. Institute of Communications Engineering g National Sun Yat-sen University Introduction We consider the problem of signal
More informationFinding the best mismatched detector for channel coding and hypothesis testing
Finding the best mismatched detector for channel coding and hypothesis testing Sean Meyn Department of Electrical and Computer Engineering University of Illinois and the Coordinated Science Laboratory
More informationData Detection for Controlled ISI. h(nt) = 1 for n=0,1 and zero otherwise.
Data Detection for Controlled ISI *Symbol by symbol suboptimum detection For the duobinary signal pulse h(nt) = 1 for n=0,1 and zero otherwise. The samples at the output of the receiving filter(demodulator)
More informationCommunication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi
Communication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi Lecture - 41 Pulse Code Modulation (PCM) So, if you remember we have been talking
More informationLecture 8: Channel and source-channel coding theorems; BEC & linear codes. 1 Intuitive justification for upper bound on channel capacity
5-859: Information Theory and Applications in TCS CMU: Spring 23 Lecture 8: Channel and source-channel coding theorems; BEC & linear codes February 7, 23 Lecturer: Venkatesan Guruswami Scribe: Dan Stahlke
More informationRCA Analysis of the Polar Codes and the use of Feedback to aid Polarization at Short Blocklengths
RCA Analysis of the Polar Codes and the use of Feedback to aid Polarization at Short Blocklengths Kasra Vakilinia, Dariush Divsalar*, and Richard D. Wesel Department of Electrical Engineering, University
More informationPSK bit mappings with good minimax error probability
PSK bit mappings with good minimax error probability Erik Agrell Department of Signals and Systems Chalmers University of Technology 4196 Göteborg, Sweden Email: agrell@chalmers.se Erik G. Ström Department
More information