
IX. PROCESSING AND TRANSMISSION OF INFORMATION*

Prof. P. Elias, Prof. R. M. Fano, Prof. D. A. Huffman, Prof. C. E. Shannon, Dr. M. V. Cerrillo, W. G. Breckenridge, E. Ferretti, K. Joannou, R. M. Lerner, J. B. O'Loughlin, L. S. Onyshkevych, A. J. Osborne, W. W. Peterson, E. T. Schoen, F. F. Tung, S. H. Unger, M. L. Wetherell, J. M. Wozencraft, W. A. Youngblood

A. ROW ASSIGNMENTS IN SEQUENTIAL SWITCHING CIRCUITS

The terminal characteristics of asynchronous sequential switching circuits can be specified by means of flow tables (1). In an analogous manner, the external behavior of synchronous (or clocked) sequential circuits can be described by means of similar tables (2, 3), as shown in Fig. IX-1. A general synthesis procedure for either kind of circuit can be carried out as follows.

1. Construction and simplification of the flow table (1).

2. Assignment to each row of the table of one or more states of a set of binary-valued variables, which will be called "state variables." In the asynchronous case, these states must be so chosen that critical races never occur. That is, if any transition in the table calls for changing more than one state variable, then the order in which the changes occur must not affect the row that is ultimately reached. This requirement often necessitates the use of more than one state per row (4). No restriction of this kind applies in the case of synchronous systems; hence there is little reason to assign more than one state to a row.

3. Derivation, from the flow table and row assignment, of the specifications for a combinational circuit with feedback, as shown in reference 1; derivation by an analogous process for synchronous systems.

4. Synthesis of this combinational circuit by any convenient method.

This report will discuss step 2. Let us first consider the synchronous case, and assume that one state is to be assigned to each row and, for the sake of simplicity, that m, the number of rows in the table, is an integral power of two.
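The race condition in step 2 can be checked mechanically: a transition is free of critical races when the present-state and next-state codes differ in at most one state variable, that is, when they are adjacent on the hypercube. A minimal Python sketch, with a four-row flow table and an assignment invented purely for illustration:

```python
def hamming(a, b):
    """Number of state variables in which two state codes differ."""
    return bin(a ^ b).count("1")

# hypothetical 4-row flow table: next_row[row][input] gives the next row
next_row = [[1, 0], [1, 2], [3, 2], [3, 0]]

# one state per row, chosen so that every transition changes at most
# one state variable (a Gray-code-style assignment)
assignment = [0b00, 0b01, 0b11, 0b10]

# a transition risks a critical race when present and next states
# differ in more than one variable
races = [(row, x) for row in range(4) for x in range(2)
         if hamming(assignment[row], assignment[next_row[row][x]]) > 1]
print(races)  # [] -> no critical races for this assignment
```

For this particular table the Gray-code-style assignment passes; a different assignment, say [0b00, 0b11, 0b01, 0b10], would not.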
Thus, if m = 2^n, then n state variables will yield the required number of states. Since any state can be assigned to any row, the number of possible arrangements is m!. Three of the 24 possible assignments are shown for the same table in Fig. IX-1. However, from a physical viewpoint, many of these assignments are essentially equivalent to one another, since they differ only in that some of the state variables have merely been relabelled. For example, the assignment in Fig. IX-1c can be obtained from the one in Fig. IX-1a by complementing y_2 and then interchanging y_1 and y_2. It will be shown that the number of distinct assignments (those which, in general, may lead to different circuit specifications in step 3) is (2^n - 1)!/n! for a 2^n-row synchronous flow table. In the case of asynchronous circuits, the problem of race conditions restricts the allowable number of assignments to the extent that there are no nontrivial transformations that can always be made. In specific cases, however, several distinct assignments are possible.

Fig. IX-1. Synchronous flow tables. The numbers with superscripts indicate the next row; the superscripts correspond to the output. Row assignments in (a) and (b) are distinct; those in (a) and (c) are equivalent.

These statements will be proved by using some elementary principles of group theory (see ref. 5). If r represents a particular row assignment for an m-row synchronous table (where m = 2^n), and if p represents one of the m! permutation transformations that can be applied to a set of m elements, then the expression rp can be used to designate the new row assignment that results from permuting assignment r in accordance with transformation p. To take an example: if the zeros and ones that describe each state are thought of as forming binary numbers, then the states can be compactly written as the decimal equivalents of these numbers, so that, in Fig. IX-1a, the internal states are 0, 1, 3, and 2, reading from the top down. If we apply the permutation (0)(1)(32), the assignment of Fig. IX-1b is obtained; the permutation (0132) transforms the same assignment into that of Fig. IX-1c.

The set G of all permutations p (including the identity I) can easily be shown to be a noncommutative group with m! members. A permutation s [such as (0132)] that results when the state variables are interchanged and/or complemented will be called "symmetric." The set S of all of these transformations (members of which will henceforth be referred to as s_i, i = 1, 2, ...) is a subgroup of G containing n! 2^n members, and it constitutes the set of all transformations leaving distances on an n-dimensional hypercube invariant (6).

Two row assignments, r_1 and r_2, will be said to be equivalent (written r_1 ~ r_2) if r_1 = r_2 s_i for some s_i in S. Similarly, two transformations t_1 and t_2 will be defined as being equivalent if t_1 = t_2 s_i. It follows that, if a row assignment is operated upon separately by two equivalent transformations, the two resultant assignments will be equivalent.

For each element p in G, form a subset of G that consists of the elements p s_i for every s_i in S. These subsets are called "left cosets of S." Suppose that q_1 and q_2 belong to the same coset. Then there are elements s_1 and s_2 in S, and an element p in G, so that q_1 = p s_1, q_2 = p s_2, and s_3 = s_2^(-1) s_1 (where s_2^(-1) is the inverse of s_2). Since q_2 s_3 = (p s_2)(s_2^(-1) s_1) = p(s_2 s_2^(-1)) s_1 = p s_1 = q_1, it follows that q_1 ~ q_2. Cosets possess the following properties (proved in sec. 9 of ref. 5):

1. Every member of G belongs to some coset of S.
2. Each coset contains the same number of elements as S.
3. Two cosets of S are either disjoint or identical.
4. S is a coset of itself.

We have shown that members of the same coset are equivalent. Now we shall demonstrate that the converse is true. Suppose that q_1 ~ q_2. Then q_1 = q_2 s_3, and from property 1, q_2 = p s_2 for some p in G.

* This work was supported in part by Purchase Order DDL-B158. (See the Introduction.)
If s_1 = s_2 s_3, then q_1 = p s_2 s_3 = p s_1, so that q_1 and q_2 belong to a common coset. Therefore, two transformations are distinct if and only if they belong to different cosets. But according to Lagrange's theorem (which can be derived from the first three properties of the cosets listed above), the number of distinct cosets of S is equal to the number of elements in G divided by the number of elements in S, or

    (2^n)! / (2^n n!) = (2^n - 1)!/n!                                    (1)

Hence, the number of distinct row assignment transformations and the number of distinct assignments are both given by expression 1.

For the special case of the four-row table, the problem of finding a set of three distinct assignments is quite simple. If we start with any arbitrary assignment (as in Fig. IX-1a) and interchange the states assigned to the third and fourth rows, we have one distinct assignment (Fig. IX-1b). A third assignment, distinct from each of the first two, can then be found by interchanging the states of rows two and three in the first assignment. Symbolically, the two transformations can be written as (0)(1)(32) and (0)(13)(2), respectively. It might be well to note here that, in any particular problem, the number of row assignments that lead to physically different circuits may be less than the number of distinct transformations given in expression 1, owing to symmetries within the particular flow table.

Now consider an asynchronous circuit in which transitions between all pairs of adjacent internal states (those states differing in the value of one variable only) occur at some point in the flow table. Any row assignment transformation that does not preserve distances on the n-dimensional hypercube whose vertices correspond to internal states of the system will introduce race conditions. Thus, only members of S can be used. This implies that, in general, no nontrivial transformations are possible, although they may exist for special examples. In the special case of a four-row asynchronous flow table that is strongly connected (in the sense that there are input sequences connecting all pairs of internal states), similar reasoning indicates that no nontrivial row assignment transformations exist.

When nontrivial transformations are available, it would be highly desirable to have a method for ascertaining a priori which assignment is the best one - according to some criterion such as the minimization of the number of necessary combinational elements. The usefulness of a process of this kind can be appreciated if it is realized that there are 840 distinct row assignments for any eight-row synchronous flow table. Unfortunately, this seems to be an exceedingly difficult problem and I have no suggestions as to how it can be approached.

S. H. Unger

References

1. D. A. Huffman, Technical Report 274, Research Laboratory of Electronics, M.I.T., Jan. 10, 1954.
2. D. A. Huffman, Proceedings of the Symposium on Information Networks, Polytechnic Institute of Brooklyn, New York, April 12-14, 1954.
3. E. F. Moore, Gedanken-experiments on Sequential Machines, Automata Studies (Princeton University Press, 1956).
4. D. A. Huffman, Technical Report 293, Research Laboratory of Electronics, M.I.T., March 14, 1955.
5. G. D. Birkhoff and S. MacLane, A Survey of Modern Algebra (Macmillan Company, New York, 1953), Chap. VI.
6. D. Slepian, Can. J. Math. 5, 185-193 (1954).
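Expression 1 can be verified by brute force for small n. The following Python sketch, an illustrative check that is not part of the original report, builds the full permutation group G on the 2^n states and the subgroup S of hypercube symmetries (variable interchanges and complementations), then counts the left cosets of S for n = 2:

```python
from itertools import permutations, product
from math import factorial

def hypercube_symmetries(n):
    """All permutations of {0, ..., 2^n - 1} induced by permuting
    and/or complementing the n state variables (bit positions)."""
    syms = set()
    for perm in permutations(range(n)):
        for flips in product((0, 1), repeat=n):
            mapping = []
            for state in range(2 ** n):
                bits = [(state >> b) & 1 for b in range(n)]
                new_bits = [bits[perm[b]] ^ flips[b] for b in range(n)]
                mapping.append(sum(bit << b for b, bit in enumerate(new_bits)))
            syms.add(tuple(mapping))
    return syms

n = 2
G = set(permutations(range(2 ** n)))  # all m! row-assignment transformations
S = hypercube_symmetries(n)           # the n! * 2^n "symmetric" ones
assert S <= G and len(S) == factorial(n) * 2 ** n

# left cosets pS partition G; two transformations are equivalent
# exactly when they lie in the same coset
cosets = {frozenset(tuple(p[s[i]] for i in range(2 ** n)) for s in S) for p in G}
print(len(cosets))  # 3, in agreement with (2^n - 1)!/n! for n = 2
```

For n = 2 this gives 24/8 = 3 distinct assignments, matching the three found by hand above; for n = 3 the same count gives 40320/48 = 840.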

B. CODING FOR BINARY SYMMETRIC CHANNEL

Recent work by C. E. Shannon and P. Elias has established bounds on the exponential behavior of the probability of error for finite-length messages transmitted over idealized noisy channels. They have also shown that random coding is exponentially equivalent to the best possible coding for transmission rates near capacity (1, 2). The physical implementation of a random-coding procedure is difficult because the number of possible messages increases exponentially with message length. Even with moderate message lengths, this number can become enormous for reasonable rates of transmission. Small message lengths, however, preclude the strong, large-sample statistical assertions that are necessary to assure a small probability of error.

Research is in progress toward the determination of a physically realizable coding and decoding procedure for the idealized Binary Symmetric Channel. This procedure will be a compromise between the demands of reasonable equipment on the one hand and acceptable performance on the other. A random-code message development, using a sliding parity-check generator, is envisioned at the transmitter. The transmitted message of length n is divided for coding and decoding purposes into several smaller sublengths

    n_1 < n_2 = n_1 + Δ_1 < n_3 = n_2 + Δ_2 < ... < n_i = n_{i-1} + Δ_{i-1} < ... < n

In going from any length n_i to n_{i+1}, the transmitter inserts a number δ_i < Δ_i of completely arbitrary information digits, filling the rest of the increment Δ_i with "random" check digits generated as the mod 2 sum of the sliding check pattern and as much of the message as has already been determined. Correspondingly, at each stage n_i of decoding, the receiver progressively compares the received message with the set of possible transmitted messages, and discards those members of the set that are beneath a certain threshold of probability.
At each stage, a number of possible messages whose individual a posteriori probabilities sum to an acceptable aggregate is retained. In the next stage, the receiver need only compute those possible messages that have prefixes which are members of the stored subset. In the end, the process is made to converge to the selection of a single message of high probability. The over-all probability of error is equal to one minus the cumulative probability of having retained the correct message at each stage of the procedure.

It has been shown that the number m_i of prefixes that should be retained at the ith stage can be made to vary as 1/n_i by adopting suitable constraints on the incremental rate R_i = δ_i/Δ_i. As n_i increases, R_i can also increase, giving an effective average rate R reasonably close to the maximum rate permitted under ideal coding for the specified design probability of error and block length n.

J. M. Wozencraft

References

1. C. E. Shannon, The rate of approach to ideal coding (Abstract), IRE Convention Record, Part 4, 1955.
2. P. Elias, Coding for noisy channels, IRE Convention Record, Part 4, 1955.
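In modern terms, the retain-and-extend procedure above is a staged list decoder. The Python sketch below is a loose illustration only: the generator taps, the stage sizes, the channel parameter, and the fixed list size (standing in for the m_i above) are all invented, and a fixed-size candidate list replaces the report's a posteriori probability threshold.

```python
import random
from math import log

random.seed(0)
p = 0.05             # BSC crossover probability (assumed for illustration)
K, STAGES = 2, 4     # info digits per increment, number of stages (invented)
LIST_SIZE = 8        # retained prefixes per stage (stands in for m_i)

def extend(prefix, info_bits, taps):
    """Append the info digits, then one check digit: the mod-2 sum of a
    fixed subset (the 'taps') of everything determined so far."""
    word = prefix + info_bits
    check = sum(word[t] for t in taps if t < len(word)) % 2
    return word + [check]

# hypothetical stand-in for the sliding parity-check generator
taps_per_stage = [random.sample(range(3 * (s + 1)), 2) for s in range(STAGES)]

# encode a random message, one increment at a time
codeword = []
for s in range(STAGES):
    info = [random.randint(0, 1) for _ in range(K)]
    codeword = extend(codeword, info, taps_per_stage[s])

# pass the codeword through the binary symmetric channel
received = [b ^ (random.random() < p) for b in codeword]

def loglik(word, rx):
    """Log-likelihood of a prefix against the received digits."""
    return sum(log(1 - p) if a == b else log(p) for a, b in zip(word, rx))

# stage-by-stage decoding: extend every retained prefix by all 2^K
# info patterns, then keep only the most probable LIST_SIZE prefixes
candidates = [[]]
for s in range(STAGES):
    expanded = [extend(c, [a, b], taps_per_stage[s])
                for c in candidates for a in (0, 1) for b in (0, 1)]
    expanded.sort(key=lambda w: loglik(w, received), reverse=True)
    candidates = expanded[:LIST_SIZE]

print("decoded correctly:", candidates[0] == codeword)
```

The work per stage is proportional to the list size rather than to the full 2^(rate × n) message set, which is the point of the staged procedure.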