The Channel Capacity of Constrained Codes: Theory and Applications


Xuerong Yong

1 The Problem and Motivation
The primary concern of coding theory is the channel capacity (Shannon capacity). Only when we transmit information at a rate below the capacity can we achieve reliable transmission. A good code requires matching the capacity, but the capacity is usually unknown.

2 Outline
- The Problem and Motivation
- Constrained Codes
- Encoding and Decoding
- Perron-Frobenius Theory
- Characterizations of Capacity
- Representative Research and Open Problems

3 Constrained Code as a Language
Encoding of random data as a constrained code is accomplished by means of a finite-state machine. A constrained code can be thought of as a regular language, so its encoder can be chosen to be a deterministic finite-state automaton.

4 Constrained Codes for Recording
Codes in magnetic, digital, and optical recording: (d, k)-RLL codes are codes over the binary alphabet {0, 1}, where d (respectively k) is the minimum (maximum) permitted number of 0s separating consecutive 1s in a legal binary sequence S = s_1 s_2 .... The constraint k is imposed to guarantee sufficient sign changes in the recording waveform, preventing clock drift in the clock synchronization; d is used to prevent intersymbol interference. A checker for this constraint is sketched below.
Multiple-spaced (d, k, s)-RLL codes, where s indicates that the run-lengths of 0s must be of the form d + is, where i is an integer.
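As an illustration, a minimal sketch in Python of a (d, k)-RLL checker. The boundary convention used here (leading and trailing runs of 0s only need to respect k) is an assumption; conventions vary in the literature.

```python
def is_dk_rll(bits, d, k):
    """Check a (d, k)-RLL word: runs of 0s between consecutive 1s lie in [d, k];
    leading and trailing 0-runs are only required to be <= k (convention assumed)."""
    ones = [i for i, b in enumerate(bits) if b == 1]
    runs_between = [ones[j + 1] - ones[j] - 1 for j in range(len(ones) - 1)]
    if any(r < d or r > k for r in runs_between):
        return False
    lead = ones[0] if ones else len(bits)
    trail = len(bits) - 1 - ones[-1] if ones else len(bits)
    return lead <= k and trail <= k

print(is_dk_rll([0, 1, 0, 0, 1, 0], 1, 3))   # True
print(is_dk_rll([1, 1, 0, 0, 0, 0], 1, 3))   # False: adjacent 1s violate d = 1
```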

5 Data Storage Devices
Data storage devices appear in computer centers at business locations, in desktop workstations in offices, etc. The variety of such devices includes: conventional floppy-disk and hard-disk drives; optical read-only drives such as CD and CD-ROM drives; and magnetic tape drives, digital audio tape systems, and digital compact cassette audio tape systems. The data recording and retrieval process is illustrated in Figure 1.

Figure 1: Data recording schematic (compression encoder/decoder, error-correction encoder/decoder, modulation encoder/decoder, signal generator, write equalizer, read equalizer, detector).

6 Encoding and Decoding
Given a code, we can construct an encoder that accepts an input block of p user bits and generates a length-q codeword. The sequences obtained by concatenating the length-q codewords satisfy the constraint. The encoder should be decodable: a state-dependent decoder accepts a length-q codeword and produces a length-p block of user bits. It is desirable that the code have the highest possible rate. Shannon proved that the rate p/q cannot exceed the capacity.

7 Representations of Constrained Codes
The sequences permitted to appear in a given channel can be represented by a graph (encoder); conversely, an encoder for the channel may emit only words from this system. The graph that characterizes the allowed configurations is called a labeled graph: a labeled directed multigraph G = (V, E, L). A labeled graph is conveniently expressed by a matrix, its adjacency matrix A, whose entry (A)_{u,v} is the number of edges from vertex u to vertex v in G.
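A minimal sketch (Python with NumPy; not from the original slides, and the state convention is an assumption) that builds the adjacency matrix of the standard labeled graph for a (d, k)-RLL constraint: state i records the number of 0s written since the last 1.

```python
import numpy as np

def rll_adjacency(d, k):
    """Adjacency matrix of the (d, k)-RLL labeled graph.

    State i (0 <= i <= k) means "i zeros have been written since the last 1".
    A 0-labeled edge goes i -> i+1 (allowed while i < k);
    a 1-labeled edge goes i -> 0 (allowed once i >= d).
    """
    n = k + 1
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        if i < k:          # we may still write a 0
            A[i, i + 1] += 1
        if i >= d:         # enough 0s have passed, a 1 is legal
            A[i, 0] += 1
    return A

print(rll_adjacency(1, 3))
```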

8 Some Definitions
Let A be an n × n matrix. Its eigenvalues are the n roots of the characteristic polynomial det(λI − A). Let λ(A) denote the largest absolute value of the eigenvalues of A. Right and left eigenvectors: for nonzero vectors x, y, Ax = λx and y^t A = λ y^t, where t denotes transpose. The period of a graph G is the greatest common divisor of the lengths of all cycles in G. A (strongly connected) graph is called primitive if its period is 1. A constrained system S is called irreducible or primitive if its graph is strongly connected or primitive, respectively.

9 Perron-Frobenius Theorem
Theorem. Let A be a nonnegative irreducible matrix. Then λ(A) is an eigenvalue of A, and A has positive right and left eigenvectors associated with the eigenvalue λ(A). Moreover, λ(A) is a simple eigenvalue of A, i.e. λ(A) appears as a root of the characteristic polynomial of A with multiplicity 1, and

min_u Σ_v (A)_{u,v} ≤ λ(A) ≤ max_u Σ_v (A)_{u,v}.
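A quick numerical check of the row-sum bounds (a sketch, not part of the original slides), using the two-state graph of the (1, ∞)-RLL "golden mean" constraint as the example matrix:

```python
import numpy as np

# Adjacency matrix of the (1, ∞)-RLL ("golden mean") graph: two states,
# state 0 = last symbol was 0, state 1 = last symbol was 1.
A = np.array([[1, 1],
              [1, 0]])

lam = max(abs(np.linalg.eigvals(A)))   # λ(A), here the golden ratio ≈ 1.618
row_sums = A.sum(axis=1)
print(lam, row_sums.min(), row_sums.max())
assert row_sums.min() <= lam <= row_sums.max()
```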

10 Characterization of Capacity
The capacity can be viewed from combinatorics, algebra, and probability.

Combinatorial description: the capacity cap(S) measures the growth rate of the number N(n; S) of sequences of length n in S. Precisely,

cap_α(S) = lim_{n→∞} (1/n) log_α N(n; S),

where α is the alphabet size.

Algebraic description: let S be an irreducible constrained system and let G be an irreducible lossless presentation of S. Then cap(S) = log λ(A_G), where A_G is the adjacency matrix of G.
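A sketch (Python/NumPy, not from the slides) comparing the combinatorial and algebraic descriptions for the (1, ∞)-RLL constraint: count length-n sequences directly and compare (1/n) log2 N(n; S) with log2 λ(A_G).

```python
import numpy as np

# (1, ∞)-RLL ("no two adjacent 1s") constraint.
A = np.array([[1, 1],
              [1, 0]])          # adjacency matrix of its 2-state graph
cap_algebraic = np.log2(max(abs(np.linalg.eigvals(A))))

def count_sequences(n):
    """N(n; S): number of length-n binary strings with no two adjacent 1s."""
    end0, end1 = 1, 1            # length-1 strings ending in 0 / in 1
    for _ in range(n - 1):
        end0, end1 = end0 + end1, end0
    return end0 + end1

n = 40
cap_combinatorial = np.log2(count_sequences(n)) / n
print(cap_algebraic, cap_combinatorial)   # ≈ 0.6942, converging as n grows
```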

Probabilistic description: an n × n nonnegative matrix Q = (q_{ij}) is stochastic if all its row sums are 1. Let y = (y_1, y_2, ..., y_n)^t with Σ_{i=1}^n y_i = 1 be the left eigenvector of Q corresponding to the eigenvalue 1. The entropy of Q is

H(Q) = − Σ_{i=1}^n y_i Σ_{j=1}^n q_{ij} log q_{ij}.

Let p_{ij} be the transition probability assigned to the edge from vertex u_i to vertex u_j in the labeled graph G of a constrained system S; then Q = (p_{ij}) is stochastic. Let S be a primitive constrained system and let G be a primitive presentation of S. Then

sup_Q H(Q) = cap(S) = log λ(A_G).
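A sketch (Python/NumPy, not from the slides) of the probabilistic description for the same golden-mean graph: build the maxentropic transition matrix q_{ij} = A_{ij} x_j / (λ x_i) from the right Perron eigenvector x, find its stationary distribution y, and check that H(Q) equals log2 λ(A).

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])                     # golden-mean adjacency matrix

# Right Perron eigenvector x and spectral radius λ.
w, V = np.linalg.eig(A)
k = np.argmax(w.real)
lam, x = w[k].real, np.abs(V[:, k].real)

# Maxentropic transition probabilities: q_ij = A_ij * x_j / (λ * x_i).
Q = A * x[None, :] / (lam * x[:, None])

# Stationary distribution y: left eigenvector of Q for eigenvalue 1.
wQ, VQ = np.linalg.eig(Q.T)
y = np.abs(VQ[:, np.argmin(np.abs(wQ - 1))].real)
y /= y.sum()

# Entropy H(Q) = -sum_i y_i sum_j q_ij log2 q_ij  (0·log 0 treated as 0).
with np.errstate(divide="ignore", invalid="ignore"):
    terms = np.where(Q > 0, Q * np.log2(Q), 0.0)
H = -(y * terms.sum(axis=1)).sum()

print(H, np.log2(lam))    # both ≈ 0.6942
```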

11 Capacity of 1-D Constrained Codes
The capacity of 1-D codes has received much attention. The following results are representative.

Norris and Bloomberg (1981) considered cap(S^{(1)}_{d,k}) of the 1-D (d, k)-RLL constraints for a large range of parameters and proved that cap(S^{(1)}_{d,k}) = log_2 λ_1, where λ_1 is the largest root of the polynomial

f_{d,k}(x) = x^{k+1} − x^{k−d} − x^{k−d−1} − ... − x − 1, if k < ∞,
f_{d,∞}(x) = x^{d+1} − x^d − 1, if k = ∞.

Ashley and Siegel (1987) proved that for all d ≥ 1 we have cap(S^{(1)}_{d,∞}) = cap(S^{(1)}_{d−1,2d−1}). Their approaches are combinatorial and algebraic.
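A sketch (Python/NumPy, not from the slides; finite k assumed, and the graph construction follows the earlier adjacency sketch) that computes cap(S^{(1)}_{d,k}) both ways: as log2 of the largest root of f_{d,k} and as log2 λ(A) of the (d, k)-RLL graph.

```python
import numpy as np

def cap_from_polynomial(d, k):
    """log2 of the largest real root of f_{d,k}(x) = x^{k+1} - x^{k-d} - ... - x - 1."""
    coeffs = np.zeros(k + 2)
    coeffs[0] = 1.0                       # x^{k+1}
    for i in range(0, k - d + 1):         # -x^i for i = 0 .. k-d
        coeffs[k + 1 - i] = -1.0
    roots = np.roots(coeffs)
    return np.log2(max(r.real for r in roots if abs(r.imag) < 1e-9))

def cap_from_graph(d, k):
    """log2 λ(A) for the (k+1)-state (d, k)-RLL graph."""
    A = np.zeros((k + 1, k + 1))
    for i in range(k + 1):
        if i < k:
            A[i, i + 1] = 1               # write a 0
        if i >= d:
            A[i, 0] = 1                   # write a 1
    return np.log2(max(abs(np.linalg.eigvals(A))))

print(cap_from_polynomial(1, 3), cap_from_graph(1, 3))   # both ≈ 0.5515
```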

12 Capacity of 2-D Constrained Codes
A 2-D system S has constraints both horizontally and vertically. The 2-D (d, k)-RLL system S^{(2)}_{d,k} satisfies the RLL constraint in both directions. The capacity cap(S) measures the growth rate of the number N(m, n; S) of m × n arrays in S:

cap(S) = lim_{m,n→∞} (1/(nm)) log_2 N(m, n; S),

where N(m, n; S) equals the number of m × n (0, 1) matrices that satisfy S. Clearly cap(S^{(2)}_{d,k}) ≤ cap(S^{(1)}_{d,k}). However, the 1-D and 2-D capacities differ: for example, cap(S^{(1)}_{1,2}) ≈ 0.4057 (Ashley and Marcus, 1996), whereas cap(S^{(2)}_{1,2}) = 0 (Kato and Zeger, 1999).

13 2-D RLL Constrained Code S^{(2)}_{1,∞}
This system is called the hard-square or hard-core lattice-gas system (Forchhammer and Justesen, 1999); Engel (1982) called it the Fibonacci number of a lattice, and Calkin and Wilf (1998) called it independent sets in a grid graph. Denote it simply by S_HS. It contains all 0-1 matrices that have no two adjacent 1s, horizontally or vertically. For example, a matrix containing two horizontally or vertically adjacent 1s violates the constraint; a small valid/invalid pair is checked in the sketch below.
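A sketch (Python/NumPy; the illustrative matrices are chosen here, not the ones from the original slide) that checks the hard-square constraint and brute-force counts N(m, n; S_HS) for small arrays:

```python
import itertools
import numpy as np

def is_hard_square(M):
    """True if the 0-1 array M has no two adjacent 1s horizontally or vertically."""
    M = np.asarray(M)
    horiz_ok = not np.any(M[:, :-1] & M[:, 1:])
    vert_ok = not np.any(M[:-1, :] & M[1:, :])
    return horiz_ok and vert_ok

def count_hard_square(m, n):
    """N(m, n; S_HS) by brute force -- only feasible for small m*n."""
    total = 0
    for bits in itertools.product((0, 1), repeat=m * n):
        if is_hard_square(np.array(bits).reshape(m, n)):
            total += 1
    return total

valid = [[1, 0, 1],
         [0, 1, 0]]
invalid = [[1, 1, 0],
           [0, 0, 1]]
print(is_hard_square(valid), is_hard_square(invalid))   # True False
print(count_hard_square(3, 3))                          # 63 valid 3x3 arrays
```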

14 Work on Capacity
Weber seems to have been the first to consider this problem; Weber (1988) obtained bounds on cap(S_HS), which Engel (1990) and then Calkin and Wilf (1998) successively tightened. Later, Weeks and Blahut (1998) improved these bounds further, and they were recently refined again by Nagy and Zeger (2001); cap(S_HS) is now known to be approximately 0.587891.

15 The Tools Used for S_HS
The basic tool is the transfer matrix technique. Consider the set V_m of 1 × m strings that are allowed to appear as rows of the constrained matrix. We say that the pair (v_i, v_j) is valid if v_i, v_j ∈ V_m can be placed in adjacent rows without violating the constraint. The transfer matrix T_m is defined by (T_m)_{ij} = 1 if (v_i, v_j) is valid and 0 otherwise. It is symmetric and primitive.
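A sketch (Python/NumPy, not from the slides) that enumerates V_m and builds the hard-square transfer matrix T_m:

```python
import itertools
import numpy as np

def hard_square_rows(m):
    """V_m: binary rows of length m with no two adjacent 1s."""
    return [r for r in itertools.product((0, 1), repeat=m)
            if all(not (a and b) for a, b in zip(r, r[1:]))]

def transfer_matrix(m):
    """T_m for the hard-square constraint: (T_m)_ij = 1 iff rows i and j may be stacked."""
    rows = hard_square_rows(m)
    n = len(rows)
    T = np.zeros((n, n), dtype=int)
    for i, ri in enumerate(rows):
        for j, rj in enumerate(rows):
            # Two rows may be vertically adjacent iff no column has a 1 in both.
            if all(not (a and b) for a, b in zip(ri, rj)):
                T[i, j] = 1
    return T

T3 = transfer_matrix(3)
print(T3.shape)            # (5, 5): there are 5 valid length-3 rows
print((T3 == T3.T).all())  # symmetric
```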

16 Lower-Bounding the Capacity
N(m, n; S) = 1^t T_m^{n−1} 1, where 1 is the all-ones vector of the appropriate size and m, n ≥ 1. Let λ_m be the largest eigenvalue of T_m. Then, using the Perron-Frobenius Theorem,

cap(S) = lim_{m→∞} (log_2 λ_m) / m.

By the maximum principle, for an n × n real symmetric matrix A the largest eigenvalue λ_n satisfies

λ_n = max_{x ≠ 0} (x^t A x) / (x^t x).

For all p, q,

cap(S) ≥ (1/p) log_2 (λ_{p+2q} / λ_{2q}),   (1)

bounding cap(S) from below by computing the first few λ_m.
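A sketch (Python/NumPy, self-contained and not from the slides) that computes λ_m for small m and evaluates the lower bound (1) for the hard-square system:

```python
import itertools
import numpy as np

def lambda_m(m):
    """Largest eigenvalue of the hard-square transfer matrix T_m."""
    rows = [r for r in itertools.product((0, 1), repeat=m)
            if all(not (a and b) for a, b in zip(r, r[1:]))]
    T = np.array([[int(all(not (a and b) for a, b in zip(ri, rj)))
                   for rj in rows] for ri in rows], dtype=float)
    return max(abs(np.linalg.eigvals(T)))

def lower_bound(p, q):
    """Bound (1): cap(S_HS) >= (1/p) * log2( lambda_{p+2q} / lambda_{2q} )."""
    return np.log2(lambda_m(p + 2 * q) / lambda_m(2 * q)) / p

print(lower_bound(1, 3), lower_bound(2, 3))   # lower bounds near cap(S_HS) ≈ 0.5879
```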

17 Upper-Bounding the Capacity
Their upper bound is obtained by counting cylinders: the arrays whose leftmost and rightmost columns can be placed next to each other without violating the constraint. This is equivalent to constructing a related transfer matrix B_{2p}, for an arbitrary integer p, such that

trace(T_m^{2p}) = 1^t B_{2p}^{m−1} 1,

then using the fact

λ_m ≤ (trace(T_m^{2p}))^{1/(2p)} = (1^t B_{2p}^{m−1} 1)^{1/(2p)},

and computing the 2p-th positive root of the largest eigenvalue of B_{2p}.
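A sketch (Python/NumPy, not from the slides) of the basic cylinder upper bound cap(S_HS) ≤ (1/(2p)) log2 λ(B_{2p}), assuming B_{2p} is the transfer matrix over cyclic rows (rows of length 2p with the constraint also enforced between the last and first position):

```python
import itertools
import numpy as np

def cyclic_rows(w):
    """Rows of length w satisfying the hard-square constraint cyclically."""
    return [r for r in itertools.product((0, 1), repeat=w)
            if all(not (r[i] and r[(i + 1) % w]) for i in range(w))]

def upper_bound(p):
    """cap(S_HS) <= (1/(2p)) * log2 lambda(B_{2p})  (basic cylinder bound)."""
    rows = cyclic_rows(2 * p)
    B = np.array([[int(all(not (a and b) for a, b in zip(ri, rj)))
                   for rj in rows] for ri in rows], dtype=float)
    return np.log2(max(abs(np.linalg.eigvals(B)))) / (2 * p)

for p in (1, 2, 3):
    print(p, upper_bound(p))   # upper bounds approaching cap(S_HS) ≈ 0.5879 from above
```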

18 The Difficulties
The largest eigenvalue λ_m of T_m is computed by the power method (although there are several methods, this one has been considered the most applicable). [The method has roughly n² complexity per iteration, where n is the size of the matrix.] The crucial fact is that the size of the matrices grows exponentially (T_m is an F_{m+3} × F_{m+3} matrix, and F_m ≈ ((1+√5)/2)^m / √5), so exactly calculating any more than the first few λ_m is very difficult. Better approaches involve the eigenspaces of the two largest eigenvalues.
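A sketch (Python/NumPy, not from the slides) of the power method for λ_m; each iteration costs one matrix-vector product, roughly n² operations for a dense matrix:

```python
import numpy as np

def power_method(T, tol=1e-12, max_iter=10_000):
    """Approximate the largest eigenvalue of a nonnegative primitive matrix T."""
    x = np.ones(T.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = T @ x                      # O(n^2) for a dense matrix
        new_lam = np.linalg.norm(y)    # eigenvalue estimate via the norm ratio
        x = y / new_lam
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return lam

T = np.array([[1.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])        # hard-square T_2 (rows 00, 01, 10)
print(power_method(T))                 # ≈ 1 + sqrt(2) ≈ 2.4142
```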

19 General Technique
In the case that the transfer matrix is symmetric, almost all of the research on the capacity has used:
- the transfer matrix technique,
- the maximum principle,
- Perron-Frobenius theory,
- the power method.
BUT since the size of the matrices usually grows exponentially, getting better bounds from the low-order λ_m would be desirable.

20 Read/Write Isolated Memory
A serial, binary (0, 1) memory is said to be read isolated if no two consecutive positions may store 1s. Freiman and Wyner (1964, 1965) first considered this problem, and Kautz (1965) explored a subcase. A serial, binary (0, 1) memory is said to be write isolated if no two consecutive positions may be changed during rewriting (Cohen, 1993). These two memories have the same capacity. A read/write isolated memory is a binary, linearly ordered, rewritable storage medium obeying both restrictions; it was considered by Cohn.

21 The Bounds
This is equivalent to finding the capacity of a constrained system S_RW in which no array may contain a forbidden submatrix: the row pattern (1 1) (read isolation), or a 2 × 2 submatrix in which two adjacent positions both change between consecutive rows (write isolation). The transfer matrix is symmetric and primitive. He proved that for all m,

cap(S_RW) ≤ (log_2 λ_m) / m.   (2)

His lower bound came from a combinatorial argument.
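A sketch (Python/NumPy; the forbidden-submatrix interpretation above is my reading of the constraint, so treat this as illustrative) that builds a transfer matrix for S_RW over length-m memory states and evaluates bound (2):

```python
import itertools
import numpy as np

def no_adjacent_ones(v):
    return all(not (a and b) for a, b in zip(v, v[1:]))

def rw_upper_bound(m):
    """Bound (2): cap(S_RW) <= (log2 lambda_m) / m.

    States are length-m rows with no two adjacent 1s (read isolation);
    a transition is allowed when the set of positions that change between
    the two rows also contains no two adjacent 1s (write isolation).
    """
    states = [s for s in itertools.product((0, 1), repeat=m) if no_adjacent_ones(s)]
    T = np.array([[int(no_adjacent_ones([a ^ b for a, b in zip(si, sj)]))
                   for sj in states] for si in states], dtype=float)
    lam = max(abs(np.linalg.eigvals(T)))
    return np.log2(lam) / m

for m in (2, 4, 6):
    print(m, rw_upper_bound(m))   # upper bounds on cap(S_RW)
```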

Figure 2: Three kinds of checkerboard constraints: diamond, hexagonal, and square.

22 Checkerboard Constraints
A 2-D arrangement of zeros and ones in which every 1 is surrounded by a specific pattern of 0s, considered by Weeks and Blahut (1998). There are three kinds: diamond, hexagonal, and square. If every 1 is surrounded by l rings of zeros, the constraint is of order l. The first two orders are shown in Figure 2.

23 The Techniques for Checkerboard Constraints
All transfer matrices for the first order are symmetric, so Weeks and Blahut used the same approaches as Calkin and Wilf did. When the order exceeds 1, the transfer matrix is not symmetric but is still irreducible. Applying the Perron-Frobenius Theorem, they obtained some rough lower and upper bounds on the capacity. Then, using these bounds, a numerical convergence-speeding technique called Richardson extrapolation was applied to obtain tighter results.
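A generic sketch (Python, not from the slides) of Richardson extrapolation in its simplest form, accelerating a sequence of estimates that behaves like L + c/m; this is only a toy illustration of the named technique, not their actual computation:

```python
def richardson(a):
    """One Richardson step for a sequence assumed to behave like L + c/m.

    With 1-indexed terms a_i = L + c/i, the combination i*a_i - (i-1)*a_{i-1}
    eliminates the c/i term exactly; below a is 0-indexed, so i = m + 1.
    """
    return [(m + 1) * a[m] - m * a[m - 1] for m in range(1, len(a))]

# Toy sequence converging to 0.5879 like L + c/m.
a = [0.5879 + 0.3 / m for m in range(1, 8)]
print(a[-1])               # still off by ~0.04
print(richardson(a)[-1])   # ≈ 0.5879 after one extrapolation step
```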

24 2-D RLL Constrained Systems S^{(2)}_{d,k}
Kato and Zeger (1999) studied the 2-D RLL systems S^{(2)}_{d,k} and proved:
- for d > 0, cap(S^{(2)}_{d,k}) = 0 if and only if k = d + 1;
- inequalities relating cap(S^{(2)}_{0,k}) and cap(S^{(2)}_{d,k}) to cap(S^{(2)}_{1,∞});
- as d grows, cap(S^{(2)}_{d,∞}) decays to zero exactly at the rate (log_2 d)/d, i.e. lim_{d→∞} cap(S^{(2)}_{d,∞}) / ((log_2 d)/d) = 1.
They do not use the transfer matrix approach but rather a combinatorial one in their proofs.

25 Zero Capacity
In a recent paper (1999), Ito, Kato, Nagy, and Zeger proved for the n-D constraint S^{(n)}_{d,k}, n ≥ 2, that:
- for d > 0 and k > d, cap(S^{(n)}_{d,k}) = 0 if and only if k = d + 1; and
- for d > 0 and k > d, lim_{n→∞} cap(S^{(n)}_{d,k}) = 0 if and only if k ≤ 2d.

Figure 3: An example on a 6 × 10 board.

26 Nonattacking Kings Problem
Nonattacking kings: kings may not be placed in horizontally or vertically adjacent cells of the lattice, with one king in every 2 × 2 block of the chessboard. Wilf (1995) considered this problem combinatorially. Let N(m, n) denote the number of ways that mn nonattacking kings can be placed on a 2m × 2n chessboard. He obtained

N(m, n) = (c_m n + d_m)(m + 1)^n + O(θ_m^n),  as n → ∞.

27 Its Transfer Matrix
This is a constrained 2-D (1, ∞)-RLL system. Its transfer matrix is neither symmetric nor irreducible.

28 Two Recent Approaches
Transfer matrix symmetric: work by Nagy and Zeger (2001) provides a technique for upper-bounding the capacity of the 3-D RLL constraint S^{(3)}_{0,1}; they extended the ideas of Calkin and Wilf (1998) to prove an upper bound on cap(S^{(3)}_{0,1}). They used cylinders and introduced a toroidal constraint.
Transfer matrix non-symmetric: Forchhammer (2000) provides a new transfer-matrix-based technique for upper-bounding the capacity of a 3-D constraint.

29 Construction of Constrained Codes
Siegel and Wolf (1998) gave lower bounds on the capacities of two 2-D constrained systems, S^{(2)}_{d,∞} and S^{(2)}_{0,k}, using what is called a bit-stuffing encoder.

Bit-stuffing encoder: they describe a mapping from the 1-D constrained system S^{(1)}_{2d,∞} to the 2-D constrained system S^{(2)}_{d,∞}, representing the 2-D system S^{(2)}_{d,∞} on the lower-right quadrant of a rectangular grid. Then cap(S^{(2)}_{d,∞}) ≥ cap(S^{(1)}_{2d,∞}).

30 Approach by Analyzing a Bit-Stuffing Encoder
A binary data sequence is first converted to a sequence of statistically independent binary digits with the probability of a 1 equal to p and the probability of a 0 equal to 1 − p. This conversion occurs at a rate penalty given by the entropy function

H_2(p) = −p log_2 p − (1 − p) log_2(1 − p).

Using the bit-stuffing map, the 1-D system S^{(1)}_{2d,∞} is mapped to the 2-D system S^{(2)}_{d,∞}. Under certain assumptions they obtained lower bounds on cap(S^{(2)}_{d,∞}) and cap(S^{(2)}_{0,k}).
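A sketch (Python/NumPy, not from the slides) of the 1-D version of the idea: a biased bit with P(1) = p is written, every 1 is followed by d stuffed 0s, and so the rate is H_2(p)/(1 + p·d); maximizing over p recovers cap(S^{(1)}_{d,∞}).

```python
import numpy as np

def h2(p):
    """Binary entropy H_2(p) = -p log2 p - (1-p) log2 (1-p)."""
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def bitstuff_rate(p, d):
    """Rate of a 1-D (d, ∞) bit-stuffing encoder: each biased bit emits one
    symbol, and each 1 additionally emits d stuffed 0s."""
    return h2(p) / (1 + p * d)

d = 2
ps = np.linspace(0.01, 0.99, 981)
best = max(bitstuff_rate(p, d) for p in ps)

# Capacity of the 1-D (d, ∞) constraint for comparison: log2 of the largest
# root of x^{d+1} - x^d - 1.
cap = np.log2(max(r.real for r in np.roots([1, -1] + [0] * (d - 1) + [-1])
                  if abs(r.imag) < 1e-9))
print(best, cap)   # bit stuffing essentially achieves the 1-D capacity here
```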

31 Results Based on the Bit-Stuffing Technique
Two recent papers by Roth, Siegel, and Wolf (1999, 2001) describe efficient schemes, based on the bit-stuffing technique, for constructing codes satisfying S^{(2)}_{d,∞}, S^{(2)}_{0,k}, and S^{(2)}_{1,∞} (hard-square). Their average code rate is within 1% of the capacity cap(S^{(2)}_{1,∞}).

32 Possible Work: Developing Software
1. Generate the constrained sequences recursively.
2. Generate the transfer matrices recursively.
3. Find the corresponding compressed matrices.
4. Compute the largest eigenvalues of the compressed matrices.
5. Estimate the bounds on the capacity.
6. Check the accuracy: if the error bound is bigger than a designed tolerance, go back to (1).
A skeleton following these steps is sketched below.
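A skeleton (Python/NumPy) of the six steps above, with the hard-square system as the running example; step 3 (matrix compression) is skipped here and the full T_m is used instead, which is a simplification and an assumption about the intended design.

```python
import itertools
import numpy as np

def valid_rows(m):
    """Step 1: length-m rows of the hard-square constraint (no adjacent 1s)."""
    return [r for r in itertools.product((0, 1), repeat=m)
            if all(not (a and b) for a, b in zip(r, r[1:]))]

def transfer_matrix(rows):
    """Step 2: (T_m)_ij = 1 iff rows i and j may be vertically adjacent."""
    return np.array([[int(all(not (a and b) for a, b in zip(ri, rj)))
                      for rj in rows] for ri in rows], dtype=float)

def largest_eigenvalue(T):
    """Step 4 (step 3, matrix compression, is skipped in this sketch)."""
    return max(abs(np.linalg.eigvals(T)))

def capacity_bounds(tol=1e-3, max_q=5):
    """Steps 5-6: lower bound log2(λ_{2q+1}/λ_{2q}) (cf. (3)) and upper bound
    (log2 λ_{2q})/(2q); increase q until the gap is below tol or max_q is hit."""
    for q in range(1, max_q + 1):
        lam_even = largest_eigenvalue(transfer_matrix(valid_rows(2 * q)))
        lam_odd = largest_eigenvalue(transfer_matrix(valid_rows(2 * q + 1)))
        lower = np.log2(lam_odd / lam_even)
        upper = np.log2(lam_even) / (2 * q)
        if upper - lower < tol:
            break
    return lower, upper

# Brackets cap(S_HS) ≈ 0.5879; the simple upper bound converges only like 1/m.
print(capacity_bounds())
```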

32.1 Conjecture
It is known that

log_2(λ_{2m+1} / λ_{2m}) ≤ cap(S).   (3)

There is a conjecture (Engel, 1990) for the hard-square system:

log_2(λ_{2m} / λ_{2m−1}) ≥ cap(S).   (4)

Our calculations indicate that the conjecture also seems to hold for other constraints. If it is true, it will refine many existing results.
