Entropic and informatic matroids


Emmanuel Abbe

Abstract

This paper studies dependencies between random variables by means of information-theoretic functionals such as the entropy and the mutual information. In particular, extremal dependencies (defined in terms of these functionals) are shown to have combinatorial structures corresponding to linear matroids. From a combinatorial point of view, this provides a characterization of a new class of matroids, and from a probabilistic point of view, this provides a characterization of extremal dependencies between random variables.

1 Introduction

Independence structures arising between random variables can be studied by means of information-theoretic functionals. Matroid theory studies abstractions of the notion of independence, such as independence in linear algebra, to other combinatorial and algebraic structures, such as, for example, graphs. Matroid theory emerged in the thirties [19] as a subject of combinatorics, and connections with the Shannon entropy and notions of probabilistic independence appeared in 1978 in [6] (and later on in [11]), mainly for the case of polymatroids, with more specific results for the case of matroids in 1994 in [12].

The entropy is a rather natural measure for the study of dependencies between random variables, and it has an information-theoretic operational meaning. The entropy function of a random vector $\{X_i\}_{i \in E}$ having joint distribution $P$ on $\mathbb{Z}_q^m$ (note that one can assume that the support of each $X_i$ is $\mathbb{Z}_q$ w.l.o.g., as long as the $X_i$ are supported on finite sets) maps the subsets $S$ of an index set $E$ to the entropy $H(X[S]) =: f_P(S)$, where $X[S] = \{X_i\}_{i \in S}$. Using properties of the entropy, we have that $f_P(\emptyset) = 0$, that $f_P(S) \le f_P(T)$ if $S \subseteq T \subseteq E$, and that $f_P(S \cup T) + f_P(S \cap T) \le f_P(S) + f_P(T)$ for $S, T \subseteq E$. This, by definition, makes $(E, f_P)$ a polymatroid [6]. By considering random vectors for which $f_P$ is integral, and using the logarithm in base $q$ in the entropy definition so that $f_P(S) \le |S|$, we obtain a matroid. We define such matroids as follows.

Definition 1. A matroid with ground set $[m]$ and rank function $f$ is q-entropic if there exists a probability distribution $P$ on $\mathbb{Z}_q^m$ such that $f = f_P$.

There are different motivations for studying entropic (poly)matroids. First, entropic (poly)matroids provide another interesting class of independence structures to be studied as a branch of matroid theory. Second, many information-theoretic problems can be formulated as properties of these (poly)matroids, cf. Section 2.4; in particular, as explained in [10], the full characterization of entropic polymatroids would have a major impact on network information theory and network coding problems.

This paper investigates a new class of dependence structures by means of another fundamental information-theoretic functional, namely the mutual information (defined by Shannon in 1948). We define in Section 2.2 a notion of information function for a set of random variables, by mapping the elements of $\{0,1\}^m$ (i.e., the subsets of an index set) to the mutual information of dependent random variables, in analogy to the entropy function. Using properties of the mutual information, and as shown in [9], this also leads to a class of polymatroids, which we call informatic polymatroids. We provide in Section 4 a connection between entropic and informatic polymatroids, showing that entropic matroids are the duals of a special class of informatic matroids, although there is not a one-to-one correspondence (and the class of informatic matroids may be richer). We then study the class of matroids resulting from the informatic polymatroids (hence the resulting notions of independent sets). We show in Section 5 that in a large variety of cases, informatic as well as entropic matroids correspond to linear matroids.

Again, there are different possible interests in these results. First, they provide characterizations of this new class of matroids and bring new tools to analyze them, such as the duality structure shown in Section 5 and the recurrence structure of Section 5.1. Indeed, although Section 5 uses representation theorems of matroid theory (such as Tutte's theorem) to show that certain informatic matroids are binary, Section 5.1 shows how properties of the mutual information allow the use of inductive proofs that circumvent the use of Tutte's theorem. This is in particular useful when considering informatic matroids which are representable over fields of cardinality more than three, cf. Section 6.2, as there are no proven results in matroid theory providing finite lists of excluded minors in these cases. A second impact is for information-theoretic problems, such as network settings, in particular the multiple access channel or the compression of correlated sources, as already used in [1, 2].

The next section introduces the formal definitions of entropy and information functions (and entropic and informatic matroids), and a brief review of some of their properties.

2 Probabilistic dependencies and information measures

2.1 Entropy

Let $\mathcal{X}$ be a finite set called the alphabet and let $\mathcal{M}(\mathcal{X})$ denote the set of probability distributions on $\mathcal{X}$. Let $E_m := \{1, \ldots, m\}$ where $m \in \mathbb{Z}_+$ and let $X_1, \ldots, X_m$ be random variables supported on $\mathcal{X}$. We use the notation
$$X[S] := \{X_i : i \in S\}, \quad S \subseteq E_m, \qquad (1)$$
so that $X[E_m]$ is a random vector supported on $\mathcal{X}^m$. Assume that $X[E_m]$ has an arbitrary probability distribution.

Definition 2. We define the Shannon entropy (or entropy) of a random variable $X$ supported on $\mathcal{X}$ with probability distribution $P$ by $H(X) = \mathbb{E}[-\log_{|\mathcal{X}|} P(X)]$. We also define the entropy of a random vector $X[E_m]$ supported on $\mathcal{X}^m$ with probability distribution $Q$ by $H(X[E_m]) = \mathbb{E}[-\log_{|\mathcal{X}|} Q(X[E_m])]$.

Example: when $\mathcal{X} = \{0, 1\}$ and $X$ takes value 1 with probability $p$, the entropy of $X$ is given by $H(X) = -p \log_2 p - (1-p) \log_2 (1-p)$.
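To make Definition 2 concrete, here is a small Python sketch (an illustrative addition, not part of the original paper) that computes the entropy in base $|\mathcal{X}|$ and reproduces the binary example above; the joint distribution `Q` and the function names are our own toy choices.

```python
import numpy as np

def entropy(pmf, base):
    """Entropy of a probability vector, measured in the given log base (here base = |X|)."""
    p = np.asarray(pmf, dtype=float)
    p = p[p > 0]                                   # convention: 0 log 0 = 0
    return float(-(p * np.log(p) / np.log(base)).sum())

# Example from the text: X binary with P(X = 1) = p, entropy measured in base |X| = 2.
p = 0.11
print(entropy([1 - p, p], base=2))                 # -p log2 p - (1-p) log2(1-p) ~ 0.50

# Entropy of a random vector X[E_m]: a made-up joint distribution Q on X^m with X = {0,1},
# still measured in base |X| = 2, so that 0 <= H(X[E_m]) <= m (here m = 2).
Q = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
print(entropy(list(Q.values()), base=2))           # ~ 1.72 <= 2
```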

Note that in general $0 \le H(X) \le 1$ and $0 \le H(X[E_m]) \le m$. The entropy is a measure of randomness; the distributions on $\mathcal{X}$ with minimal entropy (of zero) are the delta distributions, and the unique distribution on $\mathcal{X}$ with maximal entropy (of one) is the uniform distribution. Any value within $(0, 1)$ can be achieved by the entropy of a random variable on any finite set (of cardinality at least 2). We have $H(X[E_m]) \le \sum_{i \in E_m} H(X_i)$, with equality if and only if the components of $X[E_m]$ are independent. As mentioned previously, to achieve $H(X[E_m]) = m$, the distribution of $X[E_m]$ must be uniform, or equivalently, $X[E_m]$ must have i.i.d. uniform components. Hence, dependencies can only reduce the entropy.

Definition 3. The entropy function of a distribution $Q$ on $\mathcal{X}^m$ is given by
$$f_Q : \{0,1\}^{E_m} \to \mathbb{R}_+ \qquad (2)$$
$$S \mapsto H(X[S]) \qquad (3)$$
where $X[E_m]$ is distributed according to $Q$. A function $f : \{0,1\}^{E_m} \to \mathbb{R}_+$ is called an entropy function if it is the entropy function of some distribution on $\mathcal{X}^m$ for some finite set $\mathcal{X}$.

We just saw that an entropy function must satisfy certain inequalities, such as $f(E_m) \le \sum_{i=1}^m f(\{i\})$, and clearly, $f$ must also be monotonic (with respect to inclusion). The following result allows one to derive a large class of inequalities satisfied by entropy functions.

Lemma 1. [6] An entropy function $f : \{0,1\}^{E_m} \to \mathbb{R}_+$ satisfies the following inequalities.

(P1) $f(\emptyset) = 0$.
(P2) If $S_1 \subseteq S_2 \subseteq E_m$, then $f(S_1) \le f(S_2)$.
(P3) If $S_1, S_2 \subseteq E_m$, then $f(S_1 \cup S_2) + f(S_1 \cap S_2) \le f(S_1) + f(S_2)$.

The third inequality is called submodularity. The functions satisfying (P1), (P2) and (P3) are called β-functions [5], and $(E_m, f)$ is called a polymatroid. There is a large class of β-functions, and in this note, we are interested in characterizing further the special case of entropy functions, for example, in determining when a polymatroid can be entropic (i.e., when a β-function can be an entropy function). We will however mostly focus on the special case of polymatroids having an integral β-function, i.e., matroids. Several motivations for studying these functions are given in Section 2.4.

In words, an integral entropy function represents an extremal allocation of randomness. For example, a random variable $X$ on $\mathcal{X}$ has entropy 0 or 1 if and only if it is deterministic or purely random (i.e., uniform). If two random variables $X_1$ and $X_2$ have an integral entropy function, say $f(1) = f(2) = f(1,2) = 1$, it means that $X_1$ and $X_2$ individually have full randomness but jointly no increase of randomness occurs; this is the case if $X_1$ is uniform and $X_2$ is a deterministic function of $X_1$ (e.g., $X_2 = X_1$), which leads to a fully dependent distribution. If instead $f(1,2) = 2$, then $X_1$ and $X_2$ are independent and uniform. In that sense, integral entropy functions describe extremal allocations of randomness and dependencies. We also refer to [13] for further details and references on entropy functions. A short numerical illustration of these extremal cases is given below, after which we define the class of information functions and informatic matroids.
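The following sketch (an illustrative addition, not from the paper) computes the entropy function of Definition 3 for the two extremal allocations just discussed, the fully dependent pair $X_2 = X_1$ and the independent uniform pair; both give integral entropy functions.

```python
import numpy as np
from itertools import product

def entropy_bits(pmf):
    """Entropy (base 2) of a dict mapping outcomes to probabilities."""
    p = np.array([v for v in pmf.values() if v > 0])
    return float(-(p * np.log2(p)).sum())

def entropy_function(joint, m):
    """Entropy function S -> H(X[S]) of a joint pmf on {0,1}^m given as a dict tuple -> prob."""
    f = {}
    for mask in range(1, 2 ** m):
        S = [i for i in range(m) if mask & (1 << i)]
        marg = {}
        for x, p in joint.items():
            key = tuple(x[i] for i in S)
            marg[key] = marg.get(key, 0.0) + p
        f[tuple(S)] = entropy_bits(marg)
    return f

# Fully dependent case: X1 uniform, X2 = X1  ->  f(1) = f(2) = f(1,2) = 1 (integral).
dependent = {(0, 0): 0.5, (1, 1): 0.5}
print(entropy_function(dependent, 2))    # {(0,): 1.0, (1,): 1.0, (0, 1): 1.0}

# Independent uniform case  ->  f(1) = f(2) = 1 and f(1,2) = 2 (integral).
independent = {x: 0.25 for x in product([0, 1], repeat=2)}
print(entropy_function(independent, 2))  # {(0,): 1.0, (1,): 1.0, (0, 1): 2.0}
```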

2.2 Mutual information

Definition 4. Let $\mathcal{X}$ and $\mathcal{Y}$ be two finite sets, called respectively the input and output alphabets, and assume that $|\mathcal{X}| \le |\mathcal{Y}|$. Recall that $\mathcal{M}(\mathcal{X})$ denotes the set of probability measures on $\mathcal{X}$. A channel $W$ with input alphabet $\mathcal{X}$ and output alphabet $\mathcal{Y}$ is a collection of conditional probability measures $\{W(\cdot \mid x) \in \mathcal{M}(\mathcal{Y}) : x \in \mathcal{X}\}$. For fixed alphabets, we denote the set of channels by $\mathcal{M}(\mathcal{Y} \mid \mathcal{X})$.

Definition 5. The mutual information of a probability measure $\mu \in \mathcal{M}(\mathcal{X} \times \mathcal{Y})$ is defined by
$$I(\mu) = D(\mu \,\|\, \mu_X \otimes \mu_Y) = \mathbb{E} \log_{|\mathcal{X}|} \frac{\mu(X, Y)}{\mu_X(X)\, \mu_Y(Y)},$$
where $\mu_X$ and $\mu_Y$ are respectively the marginals of $\mu$ on $\mathcal{X}$ and $\mathcal{Y}$ (the function $D(\cdot \,\|\, \cdot)$ is the Kullback-Leibler divergence or relative entropy). If $X$ and $Y$ are two random variables on respectively $\mathcal{X}$ and $\mathcal{Y}$ with joint distribution¹ $\mu = P \otimes W$, then $I(X; Y)$ denotes $I(\mu)$ and we have
$$I(X; Y) = H(Y) - H(Y \mid X) = H(X) - H(X \mid Y),$$
where $H(Y \mid X) = \mathbb{E}_X \mathbb{E}_{Y|X}[-\log_{|\mathcal{X}|} W(Y \mid X)]$ is the conditional entropy (it is a scalar, as opposed to the conditional expectation). The mutual information $I(X; Y)$ is a measure of the dependency between $X$ and $Y$. We have $0 \le I(X; Y) \le 1$, where $I(X; Y) = 0$ means that $X$ and $Y$ are independent, and $I(X; Y) = 1$ means that $X$ is uniform and is a deterministic function of $Y$ (i.e., $X$ can be recovered exactly from $Y$).

Definition 6. The uniform mutual information of a channel $W \in \mathcal{M}(\mathcal{Y} \mid \mathcal{X})$ is given by $I(W) := I(U_{\mathcal{X}} \otimes W)$, where $U_{\mathcal{X}}$ is the uniform distribution on $\mathcal{X}$.

Definition 7. For fixed $\mathcal{X}$ and $\mathcal{Y}$, we call an element $W$ of $\mathcal{M}(\mathcal{Y} \mid \mathcal{X}^m)$ a multiple access channel (MAC) with $m$ users, with input alphabet $\mathcal{X}$ and output alphabet $\mathcal{Y}$. (It is simply a channel having input alphabet $\mathcal{X}^m$ and output alphabet $\mathcal{Y}$, but we keep the terminology used in information theory as it better reflects some of the underlying applications.) A binary MAC is a MAC for which $\mathcal{X} = \{0, 1\}$.

Definition 8. The information function of a MAC $W \in \mathcal{M}(\mathcal{Y} \mid \mathcal{X}^m)$ with input distributions $P_1, \ldots, P_m \in \mathcal{M}(\mathcal{X})$ is defined by the function
$$g_{P_1,\ldots,P_m,W} : \{0,1\}^{E_m} \to \mathbb{R}_+$$
$$S \mapsto I(X[S]; Y, X[S^c]), \qquad (4)$$
where $(X[E_m], Y)$ is distributed according to $(P_1 \otimes \cdots \otimes P_m) \otimes W$. If $P_1 = \ldots = P_m = U_{\mathcal{X}}$, we call this function the uniform information function and we denote it by $g_W$. A function $f : \{0,1\}^{E_m} \to \mathbb{R}_+$ is called an information function if it is the information function of some MAC for some input distributions on some alphabet.

As for entropy functions, information functions satisfy the polymatroidal inequalities; this is illustrated by the sketch below and stated formally in Lemma 2.

¹ We use the notation $\mu = P \otimes W$ to denote a joint distribution with marginal distribution on $\mathcal{X}$ given by $P$ and with conditional distribution given by $W$, i.e., $\mu(x, y) = P(x) W(y \mid x)$.
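As an illustration of Definition 8 (again an added sketch, not from the paper), the following code computes the uniform information function of a toy noiseless MAC $Y = X_1 \oplus X_2$; the resulting $g_W$ is integral, satisfies (P1)-(P3), and is the rank function of the uniform matroid $U_{1,2}$.

```python
import numpy as np
from itertools import product

def mutual_information(joint, base=2.0):
    """I(A; B) for a dict {(a, b): prob}, in the given log base."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * np.log(p / (pa[a] * pb[b])) / np.log(base)
               for (a, b), p in joint.items() if p > 0)

def information_function(W, m, X=(0, 1)):
    """Uniform information function g_W(S) = I(X[S]; Y, X[S^c]) of a MAC given as
    W[x_tuple] = {y: prob}, with i.i.d. uniform inputs on X and log base |X|."""
    g = {}
    for mask in range(2 ** m):
        S = [i for i in range(m) if mask & (1 << i)]
        Sc = [i for i in range(m) if i not in S]
        joint = {}
        for x in product(X, repeat=m):
            for y, w in W[x].items():
                a = tuple(x[i] for i in S)                 # X[S]
                b = (y,) + tuple(x[i] for i in Sc)         # (Y, X[S^c])
                joint[(a, b)] = joint.get((a, b), 0.0) + w / len(X) ** m
        g[tuple(S)] = mutual_information(joint, base=len(X))
    return g

# Hypothetical noiseless XOR MAC: Y = X1 + X2 mod 2.
W_xor = {x: {(x[0] + x[1]) % 2: 1.0} for x in product((0, 1), repeat=2)}
print(information_function(W_xor, 2))
# {(): 0.0, (0,): 1.0, (1,): 1.0, (0, 1): 1.0} -- integral; the rank function of U_{1,2}
```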

Lemma 2. [9] An information function $f : \{0,1\}^{E_m} \to \mathbb{R}_+$ satisfies inequalities (P1), (P2) and (P3), and $(E_m, f)$ is a polymatroid, called an informatic polymatroid.

Remark: The difference between an entropy function and an information function is that the latter computes the dependencies between independent inputs $X_1, \ldots, X_m$ and an output $Y$ arbitrarily correlated with the inputs, whereas an entropy function computes the dependencies between arbitrarily correlated inputs $X_1, \ldots, X_m$. A natural question is to inquire about the relationship/hierarchy between these two types of functions; we will show in Section 4 that information functions are in a sense more general than entropy functions, at least when $\mathcal{X}$ is a finite field.

2.3 Properties

We provide in this section a few properties of the entropy and the mutual information which will be used in the next sections. The proofs of these identities can be found in [3].

For any random variables $X_1, X_2$, we have
$$H(X_1, X_2) = H(X_1) + H(X_2 \mid X_1). \qquad (5)$$
For any random variables $X, Y, Z$ forming a Markov chain, i.e., $X - Y - Z$, we have
$$H(X \mid Y, Z) = H(X \mid Y). \qquad (6)$$
For any random variables $X, Y, Z$ defined on a common field, we have
$$H(X + Y \mid Y, Z) = H(X \mid Y, Z). \qquad (7)$$
For any random variables $X_1, X_2, X_3$ we have the following identity, which is called the chain rule:
$$I(X_1, X_2; X_3) = I(X_1; X_3) + I(X_2; X_3 \mid X_1). \qquad (8)$$

Notation: when there is no ambiguity, we drop the commas between random variables in the mutual information or entropy terms, although we always keep the semicolon in the mutual information. Example: $I(X_1 X_2; X_3) = I(X_1, X_2; X_3)$.

2.4 Operational meanings

All quantities defined previously have operational meanings in information theory. The entropy $H(P)$ is the minimal rate at which a source drawn i.i.d. under $P$ can be losslessly compressed, i.e., such that there exists a reconstruction method which recovers exactly the original source with high probability. For a given channel $W$ and for any input distribution $P$, $I(P \otimes W)$ is an achievable rate for reliable communication on a discrete memoryless channel with transition probability $W$. In particular, $I(W)$ is an achievable rate and the largest achievable rate is given by the capacity $C = \max_{P \in \mathcal{M}(\mathcal{X})} I(P \otimes W)$.

An operational meaning of the information function is the following: the region
$$\{(R_1, \ldots, R_m) : 0 \le \textstyle\sum_{i \in S} R_i \le I(X[S]; Y \mid X[S^c]),\ \forall S \subseteq E_m\}$$
represents achievable rates on a memoryless MAC $W$ when the $m$ users are not allowed to cooperate during the communication. (If the $m$ users were allowed to cooperate, rates given by $I(P \otimes W)$ for

any $P \in \mathcal{M}(\mathcal{X}^m)$ would be achievable and we are back to the case of a single mutual information.) If there are no restrictions on the input distributions, the closure of the convex hull of all such regions (over all input distributions) gives the capacity region of the MAC. We refer to [3] for more details on these problems.

More generally, many network information theory problems can be formulated as linear optimization problems over polyhedra resulting from the previously defined polymatroids [3], in particular for MACs [16]. Thus, determining and characterizing the possible entropic and informatic polymatroids can lead to solutions of a large set of information-theoretic problems. The case of matroids rather than polymatroids is also of significant importance in some of these applications, such as network coding problems [8, 20, 4] and references therein, which are also directly related to the problem of finding non-Shannon inequalities.

Another subject of coding theory, called polar coding, has recently allowed to establish fascinating connections between multi-user coding problems and matroid theory. In [1], it is shown that for an i.i.d. sequence of random vectors $X_i[E_m]$ having an arbitrary distribution $\mu$ on $\mathbb{F}_2^m$, there exists a low-complexity deterministic matrix $G_n$ which transforms the random matrix $X = [X_1, \ldots, X_n]$ into a random matrix $Y = [Y_1, \ldots, Y_n]$ such that $S \mapsto H(Y_j[S] \mid Y_1, \ldots, Y_{j-1})$ is with high probability an integral entropy function. In other words, this allows one to split an arbitrary entropic polymatroid (induced by $\mu$) into a convex combination of entropic matroids conserving the rank. We refer to [1] for more details on this result and to [2] for a related work corresponding to the case of informatic matroids.

3 Matroids: review of basics

Definition 9. A matroid $M$ is an ordered pair $(E, \mathcal{I})$, where $E$ is a finite set called the ground set and $\mathcal{I}$ is a collection of subsets of $E$ called the independent sets, which satisfies:

(I1) $\emptyset \in \mathcal{I}$.
(I2) If $I \in \mathcal{I}$ and $I' \subseteq I$, then $I' \in \mathcal{I}$.
(I3) If $I_1, I_2 \in \mathcal{I}$ and $|I_1| < |I_2|$, then there exists an element $e \in I_2 - I_1$ such that $I_1 \cup \{e\} \in \mathcal{I}$.

Definition 10. Let $M$ be a matroid given by $(E, \mathcal{I})$. A basis is a maximal (with respect to inclusion) subset of $E$ which is independent. The collection of bases is denoted by $\mathcal{B}$. Note that the subsets of the bases are exactly the independent sets; hence, a matroid can be defined by its bases. A dependent set is a subset of $E$ which is not independent. The collection of dependent sets is denoted by $\mathcal{D} = \mathcal{I}^c$. A circuit is a minimal (with respect to inclusion) subset of $E$ which is dependent. The collection of circuits is denoted by $\mathcal{C}$.

Definition 11. The rank function of a matroid $M$ is the function $r : \mathcal{P}(E) \to \mathbb{Z}_+$ which assigns to any $S \subseteq E$ the cardinality $r(S)$ of a maximal independent set contained in (or equal to) $S$.

Lemma 3. The rank function satisfies the following properties.

(R1) If $X \subseteq E$, then $r(X) \le |X|$.
(R2) If $X_1 \subseteq X_2 \subseteq E$, then $r(X_1) \le r(X_2)$.
(R3) If $X_1, X_2 \subseteq E$, then $r(X_1 \cup X_2) + r(X_1 \cap X_2) \le r(X_1) + r(X_2)$.
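To illustrate Definitions 9-11 and Lemma 3 (an added sketch, not from the paper), the following brute-force code builds the rank function of the uniform matroid $U_{2,3}$ (defined formally in Definition 15 below) from its independent sets and checks (R1)-(R3).

```python
from itertools import chain, combinations

def subsets(E):
    return list(chain.from_iterable(combinations(E, k) for k in range(len(E) + 1)))

def rank_function(E, independent):
    """Rank function of a matroid given by its ground set E and its independent sets:
    r(S) = size of a largest independent set contained in S."""
    ind = {frozenset(I) for I in independent}
    return {S: max(len(I) for I in ind if I <= S) for S in map(frozenset, subsets(E))}

E = {1, 2, 3}
# U_{2,3}: every subset of size at most 2 is independent.
independent = [S for S in subsets(E) if len(S) <= 2]
r = rank_function(E, independent)

# Check (R1)-(R3) by brute force.
assert all(r[S] <= len(S) for S in r)                                          # (R1)
assert all(r[S1] <= r[S2] for S1 in r for S2 in r if S1 <= S2)                 # (R2)
assert all(r[S1 | S2] + r[S1 & S2] <= r[S1] + r[S2] for S1 in r for S2 in r)   # (R3)
print({tuple(sorted(S)): v for S, v in r.items()})
# {(): 0, (1,): 1, (2,): 1, (3,): 1, (1, 2): 2, (1, 3): 2, (2, 3): 2, (1, 2, 3): 2}
```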

Note: all the objects that we have defined so far (independent sets, dependent sets, bases, circuits, rank function) can be used to define a matroid, i.e., we can define a matroid as a ground set $E$ with a collection of circuits, or a ground set $E$ with a rank function, etc. Moreover, each of these objects can be characterized by a set of axioms, as for example in the following lemma.

Lemma 4. Let $E$ be a finite set and $r : \mathcal{P}(E) \to \mathbb{Z}_+$. Then $r$ is the rank function of a matroid on $E$ if and only if $r$ satisfies (R1), (R2) and (R3).

Definition 12. A vector matroid over a field $F$ is a matroid whose ground set is given by the column index set of a matrix $A$ defined over $F$, and whose independent sets are given by the column index subsets indicating linearly independent columns. We denote such a matroid by $M = M[A]$ and call $A$ a representative matrix of the matroid.

For a vector matroid, the objects defined previously (dependent sets, bases, rank function) naturally match the objects defined by the corresponding linear algebraic definitions. Matroid theory is also connected to other fields such as graph theory: for an undirected graph, the set of edges defines a ground set, and a collection of edges that does not contain a cycle defines an independent set. A major problem in matroid theory consists in identifying whether a given matroid belongs to a certain class of structured matroids, such as vector matroids or graphic matroids. We are particularly interested here in the problem of determining whether a given matroid can be expressed as a vector matroid over a finite field.

Definition 13. A matroid is representable over a field $F$ if it is isomorphic to a vector matroid over the field $F$. An $\mathbb{F}_2$-representable matroid is called a binary matroid.

We review here some basic constructions defined on matroids.

Definition 14. Let $M$ be a matroid with ground set $E$ and independent sets $\mathcal{I}$, and let $S \subseteq E$. The restriction of $M$ to $S$, denoted by $M|S$, is the matroid whose ground set is $S$ and whose independent sets are the independent sets in $\mathcal{I}$ which are contained in (or equal to) $S$. The contraction of $M$ by $S$, denoted by $M/S$, is the matroid whose ground set is $E - S$ and whose independent sets are the subsets $I$ of $E - S$ for which there exists a basis $B$ of $M|S$ such that $I \cup B \in \mathcal{I}$. We will see an equivalent definition of the contraction operation when defining the dual of a matroid. A matroid $N$ that is obtained from $M$ by a sequence of restrictions and contractions is called a minor of $M$.

The following matroid is of particular interest to us.

Definition 15. Let $m, n \in \mathbb{Z}_+$ with $m \le n$. Let $E$ be a finite set with $n$ elements and $\mathcal{B}$ the collection of $m$-element subsets of $E$. One can easily check that this collection determines the bases of a matroid on $E$. This matroid is denoted by $U_{m,n}$ and is called the uniform matroid of rank $m$ on a set with $n$ elements.

The following are two major theorems concerning the representation theory of binary matroids.

Theorem 1. [Tutte] A matroid is binary if and only if it has no minor isomorphic to $U_{2,4}$.

Theorem 2. [Whitney] A matroid is binary if and only if the symmetric difference ($\triangle$) of any two circuits is a union of disjoint circuits.
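The following brute-force sketch (an illustrative addition, not from the paper) checks Whitney's criterion of Theorem 2 on $U_{2,4}$: the symmetric difference of two circuits, e.g. $\{1,2,3\} \triangle \{1,2,4\} = \{3,4\}$, is independent, so the criterion fails and $U_{2,4}$ is not binary, consistently with Tutte's theorem.

```python
from itertools import chain, combinations

def subsets(E):
    return [frozenset(S) for S in
            chain.from_iterable(combinations(E, k) for k in range(len(E) + 1))]

def circuits(E, independent):
    """Minimal dependent sets of a matroid given by its independent sets."""
    ind = set(map(frozenset, independent))
    dep = [S for S in subsets(E) if S not in ind]
    return [C for C in dep if all(C - {e} in ind for e in C)]

def unions_of_disjoint_circuits(circs):
    """All sets obtainable as unions of pairwise disjoint circuits (including the empty union)."""
    out = {frozenset()}
    for C in circs:
        out |= {U | C for U in out if not (U & C)}
    return out

def whitney_binary_test(E, independent):
    """Whitney's criterion: every symmetric difference of two circuits
    must be a union of disjoint circuits."""
    circs = circuits(E, independent)
    ok = unions_of_disjoint_circuits(circs)
    return all((C1 ^ C2) in ok for C1 in circs for C2 in circs)

E = {1, 2, 3, 4}
# U_{2,4}: independent sets are the subsets of size at most 2; its circuits are the 3-element sets.
U24_ind = [S for S in subsets(E) if len(S) <= 2]
print(whitney_binary_test(E, U24_ind))   # False: e.g. {1,2,3} ^ {1,2,4} = {3,4} is independent

# A binary matroid for comparison: U_{1,2} (two parallel elements), whose only circuit is {1,2}.
print(whitney_binary_test({1, 2}, [frozenset(), frozenset({1}), frozenset({2})]))   # True
```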

Remark 1. In a binary matroid, the circuit space of the matroid is equal to the kernel of its representative matrix. Indeed, if we multiply the representative matrix $A$ by the indicator vector of a circuit $C$, we are summing the columns indexed by the circuit. This sum must be 0, since a circuit is a minimal dependent set and therefore one of its columns can be expressed as the sum of the others.

Next, we introduce the duality notion, which will play a central role in the applications of our next section.

Theorem 3. Let $M$ be a matroid on $E$ with a set of bases $\mathcal{B}$. Let $\mathcal{B}^* = \{E - B : B \in \mathcal{B}\}$. Then $\mathcal{B}^*$ is the set of bases of a matroid on $E$. We denote this matroid by $M^*$ and call it the dual of $M$.

Lemma 5. If $r$ is the rank function of $M$, then the rank function of $M^*$ is given by
$$r^*(S) = r(S^c) + |S| - r(E).$$

We can then define the contraction operation via duality.

Definition 16. The contraction of $M$ by $S$ is given by the dual of the restriction of the dual matroid to $S^c$, i.e., $M/S = (M^*|S^c)^*$.

Finally, we recall the definition of a polymatroid.

Definition 17. A polymatroid is an ordered pair of a finite set $E$, called the ground set, and a β-function $f : \mathcal{P}(E) \to \mathbb{R}_+$ which satisfies

(R1) $f(\emptyset) = 0$.
(R2) If $X_1 \subseteq X_2 \subseteq E$, then $f(X_1) \le f(X_2)$.
(R3) If $X_1, X_2 \subseteq E$, then $f(X_1 \cup X_2) + f(X_1 \cap X_2) \le f(X_1) + f(X_2)$.

The region of $\mathbb{R}^m$ defined by $\{(R_1, \ldots, R_m) : \sum_{i \in S} R_i \le f(S),\ \forall S \subseteq E\}$ is called a polyhedron. We refer to [14, 17, 18] for more details on matroid theory.

4 Relationship between entropic and informatic matroids

Theorem 4. Any entropy function defined for $\mathcal{X}$ being a finite field is the dual of an information function.

This theorem says that (when restricted to entropy functions for $\mathcal{X}$ being a finite field) we can realize an entropy function as the dual of an information function. In that sense, the study of entropic matroids follows from the study of informatic matroids, and there might not be a reciprocal statement. The construction of the MAC realizing the dual entropy function is an additive noise MAC with uniform inputs; note that not all MACs are of this type.

Proof. Let $f$ be an entropy function for a distribution $Q$ on $\mathcal{X}^m$, i.e., $f_Q = f$. Consider a specific MAC which consists of an additive noise perturbation of the inputs, i.e.,
$$Y[E_m] = X[E_m] + Z[E_m], \qquad (9)$$

where $X[E_m]$ is an $m$-dimensional random vector with i.i.d. uniform components over $\mathbb{F}_q$ and $Z[E_m]$ is an $m$-dimensional random vector with an arbitrary distribution on $\mathbb{F}_q^m$, independent of $X[E_m]$. Then,
$$g(S) := I(X[S]; Y \mid X[S^c]) = |S| - H(X[S] \mid Y, X[S^c]) \qquad (10)$$
$$= |S| - H(Y[S] - Z[S] \mid Y, (Y[S^c] - Z[S^c])) \qquad (11)$$
$$= |S| - H(Z[S] \mid Y, Z[S^c]) \qquad (12)$$
$$= |S| - H(Z[S] \mid Z[S^c]) \qquad (13)$$
$$= H(Z[S^c]) + |S| - H(Z[E_m]) \qquad (14)$$
$$= f(S^c) + |S| - f(E_m) \qquad (15)$$
$$= f^*(S) \qquad (16)$$
where (10) comes from the chain rule of the mutual information, (11) follows from (9), (12) follows from (7), (13) follows from (6) and (14) follows from (5).

There is a variety of MACs which are not additive noise, for example, any MAC whose output alphabet does not coincide with $\mathcal{X}^m$ (although even if the output alphabet matches $\mathcal{X}^m$, the MAC may not be an additive perturbation of the inputs). For example, a MAC which takes two binary inputs and produces an output equal to $X_1 + X_2$ (real addition) is not an additive noise MAC. Hence, there may be informatic polymatroids that cannot be realized by entropic polymatroids in the way described previously, and similarly for the class of matroids studied in [12]. However, we show in the next section that both entropic and informatic matroids for binary alphabets (with uniform input distributions for the MAC) are exactly the binary matroids.

5 Characterization of informatic matroids and extremal dependencies

This section characterizes informatic matroids when the input alphabet is binary and the input distributions are uniform.

Definition 18. An information function defined for the input alphabet $\mathbb{F}_q$ and for uniform input distributions is called a q-information function. We call the corresponding (poly)matroid a q-informatic (poly)matroid.

Theorem 5. A matroid is 2-informatic if and only if it is binary.

Corollary 1. A matroid is 2-entropic if and only if it is binary.

To prove this theorem, we first prove the following lemma.

Lemma 6. $U_{2,4}$ is not 2-informatic.

Note that since $U_{2,4}$ is a matroid, in order to exclude it, we must show that it violates a property specific to informatic matroids, which is related to the problem of finding non-Shannon inequalities [8, 20, 4].

Proof. Assume that the rank function of $U_{2,4}$ is the 2-information function of a MAC. We then have
$$I(X[i, j]; Y) = 0, \qquad (17)$$
$$I(X[i, j]; Y \mid X[k, l]) = 2, \qquad (18)$$

for all $i, j, k, l$ distinct in $\{1, 2, 3, 4\}$. Let $y_0$ be in the support of $Y$. For $x \in \mathbb{F}_2^4$, define $P(x \mid y_0) = W(y_0 \mid x)/\sum_{z \in \mathbb{F}_2^4} W(y_0 \mid z)$, and assume w.l.o.g. that $p_0 := P(0, 0, 0, 0 \mid y_0) > 0$. Then from (18), $P(0, 0, *, * \mid y_0) = 0$ for any choice of $*, *$ which is not $0, 0$, and $P(0, 1, *, * \mid y_0) = 0$ for any choice of $*, *$ which is not $1, 1$. On the other hand, from (17), $P(0, 1, 1, 1 \mid y_0)$ must be equal to $p_0$. However, we have from (18) that $P(1, 0, *, * \mid y_0) = 0$ for any choice of $*, *$ (even for $1, 1$, since we now have $P(0, 1, 1, 1 \mid y_0) > 0$). At the same time, this implies that the sum of $P(1, 0, *, * \mid y_0)$ over $*, *$ is zero. This brings a contradiction, since from (17), this sum must equal $p_0$.

Proof of Theorem 5. We start with the converse. Let $M$ be a binary matroid on $E$ with representative matrix $A$. Let $W$ be the deterministic MAC defined by the matrix $A$, i.e., whose output is $Y = A X[E_m]$; we then clearly have that $M$ is isomorphic to the 2-informatic² matroid induced by $W$. For the direct part, let $M$ be a 2-informatic matroid. We already know from Lemma 6 that $M$ cannot contain $U_{2,4}$ as a restriction. If instead $U_{2,4}$ is obtained by a contraction of $S^c$ from $M$, i.e., $M/S^c = U_{2,4}$, it means that $(M^*|S)^* = U_{2,4}$. Since $U_{2,4}$ is self-dual, we have $M^*|S = U_{2,4}$. Let us denote by $r$ the rank function of $M$. We have for any $Q \subseteq E$
$$r^*(Q) = |Q| + r(Q^c) - r(E)$$
$$= |Q| + I(X[Q^c]; Y \mid X[Q]) - I(X[E]; Y)$$
$$= |Q| - I(X[Q]; Y),$$
where the last equality follows from the chain rule of the mutual information. Since $r^*$ restricted to $S$ is the rank function of $U_{2,4}$, we have in particular
$$r^*(T) = 2, \ \forall T \subseteq S \text{ s.t. } |T| = 2, \qquad r^*(S) = 2,$$
that is,
$$2 - I(X[T]; Y) = 2, \ \forall T \subseteq S \text{ s.t. } |T| = 2, \qquad (19)$$
$$4 - I(X[S]; Y) = 2.$$
This implies, by the chain rule of the mutual information,
$$I(X[T]; Y \mid X[S - T]) = 2, \ \forall T \subseteq S \text{ s.t. } |T| = 2. \qquad (20)$$
Hence, from the proof of Lemma 6, (19) and (20) cannot simultaneously hold, and $U_{2,4}$ cannot be a minor of $M$. From Tutte's Theorem (cf. Theorem 1), $M$ is binary.

The previous theorem gives a characterization of 2-informatic matroids. Note that, if we were interested in characterizing binary matroids through 2-informatic matroids, we might as well consider linear deterministic MACs over $\mathbb{F}_2$ to start with (since we have a one-to-one correspondence). We now formally establish the connection between all possible MACs leading to a 2-informatic matroid and the linear deterministic MAC constructed from the linear form corresponding to that 2-informatic

² Note that for every binary matrix $A$, we can also construct an entropic matroid with binary alphabet which is isomorphic to the matroid induced by $A$, by defining a random vector $X = A^t Z$ where $Z$ contains i.i.d. uniform binary components. Hence Theorem 5 also implies that a matroid is entropic on a binary alphabet if and only if it is binary. There might however not exist a one-to-one correspondence between entropic and informatic polymatroids, as discussed in Section 4.

11 matroid. We show that all these MACs are in a sense equivalent to each other. We then provide in Section 6.1 elements allowing to extend these results to polymatroids that are close to 2-entropic matroids. The equivalence shown in the following theorem is then very useful as there is a large variety of quasi 2-informatic matroids. Theorem 6. Let W be a binary MAC with an integral 2-information function, and let M[W ] denote the corresponding 2-informatic matroid. Let Y be the output of W when the input X[E m ] (with i.i.d. uniform components) is sent and let A be a matrix representation of M[W ]. Then I(AX[E m ]; Y ) = ranka = I(X[E m ]; Y ). This theorem says that for a binary MAC with integral 2-information function, the output contains all the information about the corresponding linear form of the inputs and nothing more. In that sense, MACs with integral 2-information functions are equivalent to linear deterministic MACs. Proof. Let M = M[W ] with M[W ] = M[A] and let us assume that M has rank R. Let B be the set of bases of M and let B be the set of bases of M. Since r(b) = B = R for any B B, we have Moreover, the rank function of M is given by r(b) = I(X[B]; Y X[B c ]) = R, B B. (21) r (S) = S I(X[S]; Y ) and for all D B, we have r (D) = D = E m R. Hence or equivalently r (D) = E m R = E m R I(X[D]; Y ), D B, I(X[D]; Y ) = 0, D B. (22) Hence, form (21) and (22) and the fact that B = {E m B : B B}, we have I(X[B]; Y X[B c ]) = r, B B, (23) I(X[B c ]; Y ) = 0, B B. (24) Note that (23) means that if any realization of the output Y is given together with any realization of X[B c ], we can determine X[B]. Moreover, (24) means that X[B c ] is independent of Y. Let us analyze how these conditions translate in terms of probability distributions. Let y 0 Supp(Y ). We define p 0 (x) := W (y 0 x)/ W (y 0 x ), x F m 2. x F m 2 From (23), if p 0 (x) > 0, we must have p 0 (x ) = 0 for any x such that x [B c ] = x[b c ] for some B c B. From (24), we have that x :x [B c ]=x[b c ] p 0 (x) = 2 R m, B B, x[b c ] F m R 2. 11

12 Hence, for any B B and any x[b c ] F m R 2, we have p 0 (x ) = 2 R m, (25) x :x [B c ]=x[b c ] x :x [B c ]=x[b c ] p 0 (x ) = 2 R m. (26) Let := 2 R m. Previous constraints imply that p 0 (x) {0, } for any x F m 2 and that the number of x with p 0 (x) = is exactly 2 Em r. Let us assume w.l.o.g. that p 0 ( 0) =, where 0 is the all 0 vector. Note that we know one solution that satisfies previous conditions. Namely, the solution that assigns a to all vectors belonging to KerA. As expected, dimker(a) = E m rank(a) = E m r. We want to show that there cannot be any other assignment of the s in agreement with the matroid M. In the following, we consider elements of F m 2 as binary vectors or as subsets of E m, since F m 2 = 2 Em. The field operations on F m 2 translate into set operations on 2Em, in particular, the component wise modulo 2 addition x 1 + x 2 of binary vectors corresponds to the symmetric different x 1 x 2 of sets, and the component wise multiplication x 1 x 2 of binary vectors corresponds to the intersection x 1 x 2 of sets. We now check which are the assignments which would not violate (25) and (26). We have assumed w.l.o.g that 0 is assigned, hence is assigned. From (25), any x for which x[b c ] = 0 for some B B, must be assigned 0. Note that x[b c ] = 0 x B c = 0 x B x I, where I is the collection of independent sets of M. Hence, the elements which are assigned 0 by checking the condition (25) are the independent sets of M, besides which is assigned. For B B and s F m 2, we define and Note that I s (B) = s + I(B), indeed: I(B) := {I I : I B} I s (B) := {x : x[b c ] = s[b c ]}. x[b c ] = s[b c ] x B c = s B c (x + s) B c = 0 x + s B x s + I(B). Now, if r(s) = r(t ) for two sets S and T with T S, we have I(X[S T ]; Y X[S c ]) = 0. This means that (Y, X[S c ]) is independent of X[S T ]. distributions, this means that or equivalently, P X[S T ] Y X[S c ](x[s T ] y 0 x[s c ]) = 1 2 T, x[s T ], x[sc ] p 0 (x[e]) = 1 2 T p 0 (x[e]), x[t ], x[s c ]. S T S From the point of view of probability 12

13 Hence, if we set the components of x F m 2 to frozen values on Sc, then, no matter how we freeze the components of x on T, the average of p 0 ( ) on S T must be the same. Let C C be a circuit. By the definition of circuits, if we remove any element of C we have a basis of M C. Let B be such a basis, we then have r(c) = r(b). We now want to freeze the values on C c and B in two ways. 1. If we pick d = C B c, then I d (B) = {x : x C B}. These are the elements that are strictly contained in C, i.e., elements of I, including. Therefore, the average of p 0 ( ) must be for this freezing. 2. If we pick d = C, we already know that the average of p 0 ( ) must be, but we have I d (B) = {x : x + C C B}. These are the elements containing B, possibly elements of C B but nothing else. Therefore, the options are x = C or x I. This forces C to be assigned. Hence, we have shown that all circuits of M must be assigned. This in turns imply several other 0 assignments. Namely, C + (I ) (27) C C must be assigned 0. Let us next consider a union of two disjoint circuits, D = C 1 C 2. Then, if we remove any single elements of D, say by removing an element of C 1, we obtain a disjoint union of an independent set and a circuit, say I C 2. Hence, r(c 1 C 2 ) = r(i C 2 ). We can then use the same technique as previously, but this time, we need to use that (27) is assigned 0. Note that is important to assume that the union is disjoint, in order to guarantee that C 2 + I C 2 = I I. We can then use an induction to show that any union of disjoint circuits must be assigned. Finally, for a binary matroid, any symmetric difference of two circuits is given by a union of disjoint circuits (this can be directly checked but notice that it is contained as one of the implications of Theorem 2 due to Whitney). Hence, the space generated by the circuits, seen as elements of (F m 2, +) must be assigned, and using Remark 1, we conclude the proof since we have assigned the 1/ numbers of without any degrees of freedom, and the assignment is done on KerA. 5.1 Recursions using mutual information properties In this section we re-derive some of the results of previous section using inductive arguments. We start by checking a result similar to Theorem 6 for the case m = 3. Lemma 7. Let W be a binary MAC with 2 users. Let X[E 2 ] with i.i.d. uniform binary components and let Y be the output of W when X[E] is sent. If I(X[1]; Y X[2]), I(X[2]; Y X[1]) and I(X[1]X[2]; Y ) have specified integer values, then I(X[1]; Y ), I(X[2]; Y ) and I(X[1] + X[2]; Y ) have specified values in {0, 1}, and vice-versa. 13

14 Proof. Let I := [I(X[1]; Y X[2]), I(X[2]; Y X[1]), I(X[1]X[2]; Y )] J := [I(X[1]; Y ), I(X[2]; Y ), I(X[1] + X[2]; Y )]. Note that by the polymatroid property of the mutual information, we have I {[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1], [1, 1, 2]}. (28) Let y supp(y ) and for any x F 2 2 define P(x y) = W (y x)/ z F 2 W (y z) (recall that W is the 2 MAC with inputs X[1], X[2] and output Y ). Assume w.l.o.g. that p 0 := P(0, 0 y) > 0. If I = [0, 0, 0] we clearly must have J = [0, 0, 0]. If I = [, 1, 1], we have I(X[2]; Y X[1]) = 1 and we can determine X[2] by observing X[1] and Y, which implies P(01 y) = 0. Moreover, since I(X[1]; Y ) = I(X[1]X[2]; Y ) I(X[2]; Y X[1]) = 0, i.e., X[1] is independent of Y, we must have that x[2] P(x[1]x[2] y) is uniform, and hence, P(00 y) = 1/2, P(10 y) + P(11 y) = 1/2. Now, if = 1, by a symmetric argument as before, we must have P(11 y) = 1/2 and hence we the input pairs 00 and 11 have each probability half (a similar situation occurs when assuming that P(x y) > 0 for x (0, 0)), and we can only recover X[1] + X[2] from Y, i.e., J = [0, 0, 1]. If instead = 0, we then have I(X[2]; Y ) = I(X[1]X[2]; Y ) I(X[1]; Y X[2]) = 1 and from a realization of Y we can determine X[2], i.e., P(10) = 1/2 and J = [0, 1, 0]. If I = [1, 0, 1], by symmetry with the previous case, we have J = [1, 0, 0]. If I = [1, 1, 2], we can recover all inputs from Y, hence J = [1, 1, 1]. For the converse statement, Note that J must be given by [0, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1] or [1, 1, 1]. Clearly, the case [0, 0, 0] implies I = [0, 0, 0]. For the case J = [0, 1, 0], note that I(X[2]; Y ) = 1 implies h(x[2] Y ) = 0, i.e., for any y supp(y ), h(x[2] Y = y) = 0. This means that for any y supp(y ), if p 2 (x[2] y) > 0 for some x[2], we must have p 2 ( x[2] y) = 0 for x[2] x[2]. We use p i, i = 1, 2, for the probability distribution of X[i] given the realization Y = y and p 12 for the probability distribution of (X[1], X[2]) given Y = y. Assume now (w.l.o.g.) that p 12 (0, 0 y) > 0. Since p 2 (x[2] y) = x[1] p 12(x[1]x[2] y), previous observation implies that p 12 (01 y) = p 12 (11 y) = 0. Moreover, I(X[1]; Y ) = 0 implies that h(x[1] Y = y) = 1, i.e., for any realization of Y, the marginal of X[1] is uniform, which implies p 12 (00 y) = p 12 (10 y) = 1/2. Hence, if we are given the realization of X[1] and Y, we can decide what X[2] must be, and this holds no matter which values of (X[1], X[2]) is assigned a positive probability, i.e., I(X[2]; Y X[1]) = 1. If instead we are given X[2] and Y, we can not infer anything about X[1], i.e., I(X[1]; Y X[2]) = 0. Finally, by the chain rule, I(X[1]X[2]; Y ) = 1. The case where [I(X[1]; Y ), I(X[2]; Y ), I(X[1] + X[2]; Y )] is equal to [1, 0, 0] can be treated symmetrically and the other cases in a similar fashion. 14
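As a numerical illustration of Lemma 7 (an added sketch, not part of the paper), the following code computes the vectors $I = [I(X_1;Y \mid X_2), I(X_2;Y \mid X_1), I(X_1 X_2;Y)]$ and $J = [I(X_1;Y), I(X_2;Y), I(X_1+X_2;Y)]$ for three deterministic toy MACs with uniform inputs; the outcomes match the correspondence established in the proof above.

```python
import numpy as np
from itertools import product

def mi(joint, base=2.0):
    """I(A; B) from a dict {(a, b): prob}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * np.log(p / (pa[a] * pb[b])) / np.log(base)
               for (a, b), p in joint.items() if p > 0)

def I_and_J(W):
    """For a 2-user binary MAC given as W[(x1, x2)] = {y: prob}, with i.i.d. uniform inputs,
    return I = [I(X1;Y|X2), I(X2;Y|X1), I(X1X2;Y)] and J = [I(X1;Y), I(X2;Y), I(X1+X2;Y)]."""
    full = [((x1, x2), y, 0.25 * w) for (x1, x2) in product((0, 1), repeat=2)
            for y, w in W[(x1, x2)].items()]
    def pair(fa, fb):
        joint = {}
        for (x, y, p) in full:
            key = (fa(x, y), fb(x, y))
            joint[key] = joint.get(key, 0.0) + p
        return mi(joint)
    # I(X1;Y|X2) = I(X1; Y, X2) - I(X1; X2) by the chain rule (and X1 is independent of X2).
    I = [pair(lambda x, y: x[0], lambda x, y: (y, x[1])) - pair(lambda x, y: x[0], lambda x, y: x[1]),
         pair(lambda x, y: x[1], lambda x, y: (y, x[0])) - pair(lambda x, y: x[1], lambda x, y: x[0]),
         pair(lambda x, y: x, lambda x, y: y)]
    J = [pair(lambda x, y: x[0], lambda x, y: y),
         pair(lambda x, y: x[1], lambda x, y: y),
         pair(lambda x, y: (x[0] + x[1]) % 2, lambda x, y: y)]
    return np.round(I, 6), np.round(J, 6)

# Three deterministic toy MACs:
W_xor = {x: {(x[0] + x[1]) % 2: 1.0} for x in product((0, 1), repeat=2)}  # Y = X1 + X2 mod 2
W_id = {x: {x: 1.0} for x in product((0, 1), repeat=2)}                   # Y = (X1, X2)
W_first = {x: {x[0]: 1.0} for x in product((0, 1), repeat=2)}             # Y = X1
for name, W in [("xor", W_xor), ("identity", W_id), ("first input", W_first)]:
    print(name, *I_and_J(W))
# Expected (up to rounding):
#   xor          I = [1, 1, 1]  ->  J = [0, 0, 1]
#   identity     I = [1, 1, 2]  ->  J = [1, 1, 1]
#   first input  I = [1, 0, 1]  ->  J = [1, 0, 0]
```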

15 Lemma 8. Let W be a binary MAC with m users. Let X[E m ] with i.i.d. uniform binary components and let Y be the output of W when X[E] is sent. If I(X[S]; Y X[S c ]) has a specified integer value for any S E m, then I(X[E m ] S; Y ) has a specified value in {0, 1} for any S E m, and vice-versa. Note: X[E m ] S = i S X[i] A recursive argument for the proof of the direct part of this Lemma was suggested by Eren Şaşoğlu [15]. The direct statement in the Lemma is a consequence of Theorem 6 but is proved here using the recursive approach. Proof. Let I[S](W ) be assigned an integer for any S E m. By the chain rule of the mutual information I(X[E m ]; Y ) = I(X[S]; Y ) + I(X[S c ]; Y X[S]), and we can determine I(X[S]; Y ) for any S. Since for any T S I(X[S]; Y ) = I(X[T ]; Y ) + I(X[S T ]; Y X[T ]), we can also determine I(X[S]; Y X[T ]) for any S, T E m with S T =. Hence we can determine and using Lemma 7, we can determine for any S E m with {1, 2} / S, hence I(X[1], X[2]; Y X[S]) I(X[1]; Y X[S]X[2]) I(X[2]; Y X[S]X[1]) I(X[1] + X[2]; Y X[S]) I(X[i] + X[j]; Y ) for any i, j E m. Assume now that we have determined I( T X[i]; Y X[S]) for any T with T k and S E m T. Let T = {1,..., k} and let S {k + 2,..., m}. I( T X[i], X[k + 1]; Y X[S]) = I(X[k + 1]; Y X[S]) + I( T X[i]; Y X[S]X[k + 1]), in particular, we can determine I(X[k + 1]; Y T X[i], X[S]) = I( T I( T X[i], X[k + 1]; Y X[S]) X[i]; Y X[S]) 15

16 and I( T I( T X[i], X[k + 1]; Y X[S]) X[i]; Y X[S]X[k + 1]) I(X[k + 1]; Y T X[i], X[S]) and using Lemma 7, we can determine I( T X[i] + X[k + 1]; Y X[S]) hence I( T X[i]; Y ) for any T E m with T = k + 1. Hence, inducting this argument, we can determine I( T X[i]; Y ) for any T E m. For the converse statement, assume that we are given I(X[E m ] S; Y ) {0, 1} for any S E m. In particular, I(X i ; Y ), I(X i ; Y ) and I(X i + X j ; Y ) is determined for any i, j E m, and hence, from Lemma 7, we have that I(X i ; Y X i ) and I(X i X j ; Y ) are determined (and integer valued) for any i, j E m. Note that we can also determine I(X[E m ] T ; Y X[i]) for any T E m and i E m T ; indeed, we know I(X[E m ] T ; Y ) for any T E m, so for i E m T, we know I(X[i] + X[E m ] T ; Y ), (29) I(X[i]; Y ), (30) I(X[E m ] T ; Y ), (31) and hence, using Lemma 7, we can determine I(X[E m ] T ; Y X[i]). Let us assume now that we have determined I(X[S]; Y X[F S]) for any F such that F k and S F, as well as I(X[E m ] T ; Y X[K]) for any K such that K k 1 and T E m K. We have already checked that this can be determined for k = 2. We now check that we can also determine these quantities for k + 1 instead of k. Let K with K = k 1. Assume w.l.o.g. that 1, 2, 3 / K. Since we assume to know I(X[1]; Y X[K]), (32) I(X[1] + X[2]; Y X[K]), (33) I(X[1] + X[2] + X[3]; Y X[K]), (34) using Lemma 7, we can determine I(X[1] + X[2]; Y X[K 3]). Using a similar argument we can determine I(X[E m ] T ; Y X[K]) for any K such that K k and T E m K. Moreover, since we now know I(X[1] + X[2]; Y X[K]) and also I(X[1]; Y X[K]), (35) I(X[2]; Y X[K]), (36) 16

17 we can determine with Lemma 7 I(X[1]; Y X[K 2]), (37) I(X[2]; Y X[K 1]), (38) I(X[1]X[2]; Y X[K]), (39) and hence, we can determine I(X[K 1 ]; Y X[K 2 ]) for K 1 2 and K 1 + K 2 k + 1. From the chain rule of the mutual information, we have I(X[1]X[2]X[3]; Y X[K 3]) = I(X[1]X[2]; Y X[K 3]) + I(X[3]; Y X[K 3]X[1]X[2]) (40) and both term in the right hand side above are already determined. Hence, by iterating the chain rule argument, we can determine I(X[S]; Y X[F S]) for any F such that F k + 1 and S F. Finally, we can iterate these arguments on k to reach F = E m, i.e., to determine an integer for I(X[S]; Y X[S c ]) for any S E m. 6 Extensions 6.1 Informatic polymatroids that are quasi matroids In this section, we provide technical steps necessary to extend previous results to polymatroids which are close to matroids, allowing us to lift the results obtained using combinatorial arguments in the matroid setting to a polymatroid setting. This is also relevant for applications in information theory, such as in [1, 2]. Lemma 9. Let W be a binary MAC with m users. Let X[E m ] with i.i.d. uniform binary components and let Y be the output of W when X[E] is sent. Let ε > 0, if I(X[S]; Y X[S c ]) has a specified value in Z + ( ε, ε) for any S E m, then I(X[E m ] S; Y ) has a specified value in [0, o ε (1)) (1 o ε (1), 1] for any S E m. Note: X[E m ] S = i S X[i] The converse of this statement also holds. This lemma follows from the results of previous sections and from the following lemmas. Lemma 10. For two random variables X, Y such that X is binary uniform and I(X; Y ) < ε, we have Pr{y : P X Y ( y) U( ) 1 < ε 1/2 } 1 (2 ln 2 ε 1/2 ) 1/2, where U is the binary uniform measure. Proof. Since I(X; Y ) < ε, we have D(P XY P X P Y ) < ε and from Pinsker s inequality we get P XY P X P Y 1 = y 1 2 ln 2 P Q 2 1 D(P Q) P Y (y) P X Y ( y) U( ) 1 (2 ln 2 ε) 1/2. 17

Therefore, by Markov's inequality, we have
$$\Pr\{y : \|P_{X|Y}(\cdot \mid y) - U(\cdot)\|_1 \ge a\} \le \frac{(2 \ln 2\, \varepsilon)^{1/2}}{a},$$
and by choosing $a = \varepsilon^{1/4}$, we get the desired inequality.

Lemma 11. For two random variables $X, Y$ such that $X$ is binary uniform and $h(X \mid Y) < \varepsilon$, define $E_\varepsilon$ by
$$y \in E_\varepsilon \iff \Pr\{X = 0 \mid Y = y\} \cdot \Pr\{X = 1 \mid Y = y\} \le \varepsilon;$$
then
$$\Pr\{E_\varepsilon\} \ge 1 - \gamma(\varepsilon),$$
with $\gamma(\varepsilon) \to 0$ when $\varepsilon \to 0$.

This lemma tells us that if $\Pr\{X = 0 \mid Y = y\}$ is not small, we must have that $\Pr\{X = 1 \mid Y = y\}$ is small with high probability. It is given as a problem in [7].

6.2 q-informatic matroids

The results of Section 5 are expected to generalize to the q-ary alphabet case, where $q$ is a prime or a power of a prime, and the following should hold: a matroid is q-representable if and only if it is q-informatic. One could then equivalently characterize q-ary matroids by characterizing rank functions which are representable by MACs with q-ary alphabets. An interesting point here is that there is no existing theorem which provides a finite list of excluded minors for q-representable matroids. However, the recursion technique of Section 5.1 seems to admit a natural extension to q-ary fields, since the properties of the mutual information used in those recursions hold even if the alphabet of the MAC is not a field. This may provide an interesting aspect of representing q-ary matroids by means of informatic or entropic matroids. Moreover, we have restricted ourselves in this paper to analyzing information functions having a uniform distribution on the inputs. An interesting direction to pursue is to consider more general input distributions and see if this may lead to non-linear matroids.

References

[1] E. Abbe, Extracting randomness and dependencies via a matrix polarization, Information Theory and Applications Workshop, San Diego, February. Available on arXiv.

[2] E. Abbe and E. Telatar, Polar codes for the m-user multiple access channel and matroids, International Zurich Seminar on Communications (IZS), Zurich, March. Available on arXiv.

[3] T. M. Cover and J. A. Thomas, Elements of Information Theory, John Wiley & Sons, New York, NY.

[4] R. Dougherty, C. Freiling, and K. Zeger, Matroids, networks and non-Shannon information inequalities, IEEE Trans. on Information Theory, submitted.

[5] J. Edmonds, Submodular functions, matroids and certain polyhedra, Lecture Notes in Computer Science, Springer.

[6] S. Fujishige, Polymatroidal dependence structure of a set of random variables, Information and Control, vol. 39, 1978.

[7] R. Gallager, Information Theory and Reliable Communication, John Wiley & Sons.

[8] T. S. Han, A uniqueness of Shannon's information distance and related nonnegativity problems, J. Comb., Inform. Syst. Sci., vol. 6, no. 4.

[9] S. V. Hanly and P. A. Whiting, Constraints on capacity in a multi-user channel, International Symposium on Information Theory, Trondheim, Norway.

[10] B. Hassibi and S. Shadbakht, A construction of entropic vectors, ITA Workshop at UCSD, San Diego, February.

[11] L. Lovász, Submodular functions and convexity, in Mathematical Programming - The State of the Art, A. Bachem, M. Grötschel, and B. Korte, Eds. Berlin: Springer-Verlag, 1982.

[12] F. Matúš, Probabilistic conditional independence structures and matroid theory: background, Int. J. of General Systems, vol. 22, 1994.

[13] F. Matúš, Two Constructions on Limits of Entropy Functions, IEEE Trans. Inform. Theory, vol. 53, no. 1, January 2007.

[14] J. Oxley, Matroid Theory, Oxford Science Publications, New York.

[15] E. Şaşoğlu, private communication.

[16] D. Tse and S. Hanly, Multi-access Fading Channels: Part I: Polymatroid Structure, Optimal Resource Allocation and Throughput Capacities, IEEE Trans. Inform. Theory, vol. IT-44, no. 7, November.

[17] D. J. A. Welsh, Matroid Theory. London, U.K.: Academic Press.

[18] N. White, Theory of Matroids. Encyclopedia of Mathematics and its Applications, 26, Cambridge: Cambridge University Press.

[19] H. Whitney, On the abstract properties of linear dependence, American Journal of Mathematics, vol. 57, no. 3, 1935.

[20] Z. Zhang and R. Yeung, On characterization of entropy function via information inequalities, IEEE Trans. on Information Theory, vol. 44, no. 4.


More information

ON CONTRACTING HYPERPLANE ELEMENTS FROM A 3-CONNECTED MATROID

ON CONTRACTING HYPERPLANE ELEMENTS FROM A 3-CONNECTED MATROID ON CONTRACTING HYPERPLANE ELEMENTS FROM A 3-CONNECTED MATROID RHIANNON HALL Abstract. Let K 3,n, n 3, be the simple graph obtained from K 3,n by adding three edges to a vertex part of size three. We prove

More information

On the intersection of infinite matroids

On the intersection of infinite matroids On the intersection of infinite matroids Elad Aigner-Horev Johannes Carmesin Jan-Oliver Fröhlich University of Hamburg 9 July 2012 Abstract We show that the infinite matroid intersection conjecture of

More information

A Summary of Multiple Access Channels

A Summary of Multiple Access Channels A Summary of Multiple Access Channels Wenyi Zhang February 24, 2003 Abstract In this summary we attempt to present a brief overview of the classical results on the information-theoretic aspects of multiple

More information

ECE Information theory Final (Fall 2008)

ECE Information theory Final (Fall 2008) ECE 776 - Information theory Final (Fall 2008) Q.1. (1 point) Consider the following bursty transmission scheme for a Gaussian channel with noise power N and average power constraint P (i.e., 1/n X n i=1

More information

A Proof of the Converse for the Capacity of Gaussian MIMO Broadcast Channels

A Proof of the Converse for the Capacity of Gaussian MIMO Broadcast Channels A Proof of the Converse for the Capacity of Gaussian MIMO Broadcast Channels Mehdi Mohseni Department of Electrical Engineering Stanford University Stanford, CA 94305, USA Email: mmohseni@stanford.edu

More information

ECE598: Information-theoretic methods in high-dimensional statistics Spring 2016

ECE598: Information-theoretic methods in high-dimensional statistics Spring 2016 ECE598: Information-theoretic methods in high-dimensional statistics Spring 06 Lecture : Mutual Information Method Lecturer: Yihong Wu Scribe: Jaeho Lee, Mar, 06 Ed. Mar 9 Quick review: Assouad s lemma

More information

Submodular Functions, Optimization, and Applications to Machine Learning

Submodular Functions, Optimization, and Applications to Machine Learning Submodular Functions, Optimization, and Applications to Machine Learning Spring Quarter, Lecture http://www.ee.washington.edu/people/faculty/bilmes/classes/eeb_spring_0/ Prof. Jeff Bilmes University of

More information

Optimization in Information Theory

Optimization in Information Theory Optimization in Information Theory Dawei Shen November 11, 2005 Abstract This tutorial introduces the application of optimization techniques in information theory. We revisit channel capacity problem from

More information

Characteristic-Dependent Linear Rank Inequalities and Network Coding Applications

Characteristic-Dependent Linear Rank Inequalities and Network Coding Applications haracteristic-ependent Linear Rank Inequalities and Network oding pplications Randall ougherty, Eric Freiling, and Kenneth Zeger bstract Two characteristic-dependent linear rank inequalities are given

More information

Partial cubes: structures, characterizations, and constructions

Partial cubes: structures, characterizations, and constructions Partial cubes: structures, characterizations, and constructions Sergei Ovchinnikov San Francisco State University, Mathematics Department, 1600 Holloway Ave., San Francisco, CA 94132 Abstract Partial cubes

More information

Lecture 5: Channel Capacity. Copyright G. Caire (Sample Lectures) 122

Lecture 5: Channel Capacity. Copyright G. Caire (Sample Lectures) 122 Lecture 5: Channel Capacity Copyright G. Caire (Sample Lectures) 122 M Definitions and Problem Setup 2 X n Y n Encoder p(y x) Decoder ˆM Message Channel Estimate Definition 11. Discrete Memoryless Channel

More information

Lecture 11: Polar codes construction

Lecture 11: Polar codes construction 15-859: Information Theory and Applications in TCS CMU: Spring 2013 Lecturer: Venkatesan Guruswami Lecture 11: Polar codes construction February 26, 2013 Scribe: Dan Stahlke 1 Polar codes: recap of last

More information

The Poisson Channel with Side Information

The Poisson Channel with Side Information The Poisson Channel with Side Information Shraga Bross School of Enginerring Bar-Ilan University, Israel brosss@macs.biu.ac.il Amos Lapidoth Ligong Wang Signal and Information Processing Laboratory ETH

More information

Strongly chordal and chordal bipartite graphs are sandwich monotone

Strongly chordal and chordal bipartite graphs are sandwich monotone Strongly chordal and chordal bipartite graphs are sandwich monotone Pinar Heggernes Federico Mancini Charis Papadopoulos R. Sritharan Abstract A graph class is sandwich monotone if, for every pair of its

More information

Linearly Representable Entropy Vectors and their Relation to Network Coding Solutions

Linearly Representable Entropy Vectors and their Relation to Network Coding Solutions 2009 IEEE Information Theory Workshop Linearly Representable Entropy Vectors and their Relation to Network Coding Solutions Asaf Cohen, Michelle Effros, Salman Avestimehr and Ralf Koetter Abstract In this

More information

A Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources

A Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources A Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources Wei Kang Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College

More information

Lecture 2: August 31

Lecture 2: August 31 0-704: Information Processing and Learning Fall 206 Lecturer: Aarti Singh Lecture 2: August 3 Note: These notes are based on scribed notes from Spring5 offering of this course. LaTeX template courtesy

More information

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra D. R. Wilkins Contents 3 Topics in Commutative Algebra 2 3.1 Rings and Fields......................... 2 3.2 Ideals...............................

More information

A combinatorial algorithm minimizing submodular functions in strongly polynomial time

A combinatorial algorithm minimizing submodular functions in strongly polynomial time A combinatorial algorithm minimizing submodular functions in strongly polynomial time Alexander Schrijver 1 Abstract We give a strongly polynomial-time algorithm minimizing a submodular function f given

More information

Information Theory and Statistics Lecture 2: Source coding

Information Theory and Statistics Lecture 2: Source coding Information Theory and Statistics Lecture 2: Source coding Łukasz Dębowski ldebowsk@ipipan.waw.pl Ph. D. Programme 2013/2014 Injections and codes Definition (injection) Function f is called an injection

More information

4F5: Advanced Communications and Coding Handout 2: The Typical Set, Compression, Mutual Information

4F5: Advanced Communications and Coding Handout 2: The Typical Set, Compression, Mutual Information 4F5: Advanced Communications and Coding Handout 2: The Typical Set, Compression, Mutual Information Ramji Venkataramanan Signal Processing and Communications Lab Department of Engineering ramji.v@eng.cam.ac.uk

More information

An instantaneous code (prefix code, tree code) with the codeword lengths l 1,..., l N exists if and only if. 2 l i. i=1

An instantaneous code (prefix code, tree code) with the codeword lengths l 1,..., l N exists if and only if. 2 l i. i=1 Kraft s inequality An instantaneous code (prefix code, tree code) with the codeword lengths l 1,..., l N exists if and only if N 2 l i 1 Proof: Suppose that we have a tree code. Let l max = max{l 1,...,

More information

THE DIRECT SUM, UNION AND INTERSECTION OF POSET MATROIDS

THE DIRECT SUM, UNION AND INTERSECTION OF POSET MATROIDS SOOCHOW JOURNAL OF MATHEMATICS Volume 28, No. 4, pp. 347-355, October 2002 THE DIRECT SUM, UNION AND INTERSECTION OF POSET MATROIDS BY HUA MAO 1,2 AND SANYANG LIU 2 Abstract. This paper first shows how

More information

3. If a choice is broken down into two successive choices, the original H should be the weighted sum of the individual values of H.

3. If a choice is broken down into two successive choices, the original H should be the weighted sum of the individual values of H. Appendix A Information Theory A.1 Entropy Shannon (Shanon, 1948) developed the concept of entropy to measure the uncertainty of a discrete random variable. Suppose X is a discrete random variable that

More information

Submodular Functions, Optimization, and Applications to Machine Learning

Submodular Functions, Optimization, and Applications to Machine Learning Submodular Functions, Optimization, and Applications to Machine Learning Spring Quarter, Lecture 12 http://www.ee.washington.edu/people/faculty/bilmes/classes/ee596b_spring_2016/ Prof. Jeff Bilmes University

More information

(each row defines a probability distribution). Given n-strings x X n, y Y n we can use the absence of memory in the channel to compute

(each row defines a probability distribution). Given n-strings x X n, y Y n we can use the absence of memory in the channel to compute ENEE 739C: Advanced Topics in Signal Processing: Coding Theory Instructor: Alexander Barg Lecture 6 (draft; 9/6/03. Error exponents for Discrete Memoryless Channels http://www.enee.umd.edu/ abarg/enee739c/course.html

More information

An Alternative Proof of Channel Polarization for Channels with Arbitrary Input Alphabets

An Alternative Proof of Channel Polarization for Channels with Arbitrary Input Alphabets An Alternative Proof of Channel Polarization for Channels with Arbitrary Input Alphabets Jing Guo University of Cambridge jg582@cam.ac.uk Jossy Sayir University of Cambridge j.sayir@ieee.org Minghai Qin

More information

Lecture 1: The Multiple Access Channel. Copyright G. Caire 12

Lecture 1: The Multiple Access Channel. Copyright G. Caire 12 Lecture 1: The Multiple Access Channel Copyright G. Caire 12 Outline Two-user MAC. The Gaussian case. The K-user case. Polymatroid structure and resource allocation problems. Copyright G. Caire 13 Two-user

More information

Lecture 10 February 4, 2013

Lecture 10 February 4, 2013 UBC CPSC 536N: Sparse Approximations Winter 2013 Prof Nick Harvey Lecture 10 February 4, 2013 Scribe: Alexandre Fréchette This lecture is about spanning trees and their polyhedral representation Throughout

More information

Symmetries in the Entropy Space

Symmetries in the Entropy Space Symmetries in the Entropy Space Jayant Apte, Qi Chen, John MacLaren Walsh Abstract This paper investigates when Shannon-type inequalities completely characterize the part of the closure of the entropy

More information

arxiv: v1 [math.co] 28 Oct 2016

arxiv: v1 [math.co] 28 Oct 2016 More on foxes arxiv:1610.09093v1 [math.co] 8 Oct 016 Matthias Kriesell Abstract Jens M. Schmidt An edge in a k-connected graph G is called k-contractible if the graph G/e obtained from G by contracting

More information

Hodge theory for combinatorial geometries

Hodge theory for combinatorial geometries Hodge theory for combinatorial geometries June Huh with Karim Adiprasito and Eric Katz June Huh 1 / 48 Three fundamental ideas: June Huh 2 / 48 Three fundamental ideas: The idea of Bernd Sturmfels that

More information

Source and Channel Coding for Correlated Sources Over Multiuser Channels

Source and Channel Coding for Correlated Sources Over Multiuser Channels Source and Channel Coding for Correlated Sources Over Multiuser Channels Deniz Gündüz, Elza Erkip, Andrea Goldsmith, H. Vincent Poor Abstract Source and channel coding over multiuser channels in which

More information

Discrete Memoryless Channels with Memoryless Output Sequences

Discrete Memoryless Channels with Memoryless Output Sequences Discrete Memoryless Channels with Memoryless utput Sequences Marcelo S Pinho Department of Electronic Engineering Instituto Tecnologico de Aeronautica Sao Jose dos Campos, SP 12228-900, Brazil Email: mpinho@ieeeorg

More information

ACO Comprehensive Exam October 14 and 15, 2013

ACO Comprehensive Exam October 14 and 15, 2013 1. Computability, Complexity and Algorithms (a) Let G be the complete graph on n vertices, and let c : V (G) V (G) [0, ) be a symmetric cost function. Consider the following closest point heuristic for

More information

Finite connectivity in infinite matroids

Finite connectivity in infinite matroids Finite connectivity in infinite matroids Henning Bruhn Paul Wollan Abstract We introduce a connectivity function for infinite matroids with properties similar to the connectivity function of a finite matroid,

More information

Lecture 6 I. CHANNEL CODING. X n (m) P Y X

Lecture 6 I. CHANNEL CODING. X n (m) P Y X 6- Introduction to Information Theory Lecture 6 Lecturer: Haim Permuter Scribe: Yoav Eisenberg and Yakov Miron I. CHANNEL CODING We consider the following channel coding problem: m = {,2,..,2 nr} Encoder

More information

The cocycle lattice of binary matroids

The cocycle lattice of binary matroids Published in: Europ. J. Comb. 14 (1993), 241 250. The cocycle lattice of binary matroids László Lovász Eötvös University, Budapest, Hungary, H-1088 Princeton University, Princeton, NJ 08544 Ákos Seress*

More information

TOWARDS A SPLITTER THEOREM FOR INTERNALLY 4-CONNECTED BINARY MATROIDS VII

TOWARDS A SPLITTER THEOREM FOR INTERNALLY 4-CONNECTED BINARY MATROIDS VII TOWARDS A SPLITTER THEOREM FOR INTERNALLY 4-CONNECTED BINARY MATROIDS VII CAROLYN CHUN AND JAMES OXLEY Abstract. Let M be a 3-connected binary matroid; M is internally 4- connected if one side of every

More information

6.1 Main properties of Shannon entropy. Let X be a random variable taking values x in some alphabet with probabilities.

6.1 Main properties of Shannon entropy. Let X be a random variable taking values x in some alphabet with probabilities. Chapter 6 Quantum entropy There is a notion of entropy which quantifies the amount of uncertainty contained in an ensemble of Qbits. This is the von Neumann entropy that we introduce in this chapter. In

More information

Non-isomorphic Distribution Supports for Calculating Entropic Vectors

Non-isomorphic Distribution Supports for Calculating Entropic Vectors Non-isomorphic Distribution Supports for Calculating Entropic Vectors Yunshu Liu & John MacLaren Walsh Adaptive Signal Processing and Information Theory Research Group Department of Electrical and Computer

More information

TOWARDS A SPLITTER THEOREM FOR INTERNALLY 4-CONNECTED BINARY MATROIDS II

TOWARDS A SPLITTER THEOREM FOR INTERNALLY 4-CONNECTED BINARY MATROIDS II TOWARDS A SPLITTER THEOREM FOR INTERNALLY 4-CONNECTED BINARY MATROIDS II CAROLYN CHUN, DILLON MAYHEW, AND JAMES OXLEY Abstract. Let M and N be internally 4-connected binary matroids such that M has a proper

More information

Linear Algebra Lecture Notes-I

Linear Algebra Lecture Notes-I Linear Algebra Lecture Notes-I Vikas Bist Department of Mathematics Panjab University, Chandigarh-6004 email: bistvikas@gmail.com Last revised on February 9, 208 This text is based on the lectures delivered

More information

Polynomiality of Linear Programming

Polynomiality of Linear Programming Chapter 10 Polynomiality of Linear Programming In the previous section, we presented the Simplex Method. This method turns out to be very efficient for solving linear programmes in practice. While it is

More information

5 Mutual Information and Channel Capacity

5 Mutual Information and Channel Capacity 5 Mutual Information and Channel Capacity In Section 2, we have seen the use of a quantity called entropy to measure the amount of randomness in a random variable. In this section, we introduce several

More information

Amobile satellite communication system, like Motorola s

Amobile satellite communication system, like Motorola s I TRANSACTIONS ON INFORMATION THORY, VOL. 45, NO. 4, MAY 1999 1111 Distributed Source Coding for Satellite Communications Raymond W. Yeung, Senior Member, I, Zhen Zhang, Senior Member, I Abstract Inspired

More information

EE229B - Final Project. Capacity-Approaching Low-Density Parity-Check Codes

EE229B - Final Project. Capacity-Approaching Low-Density Parity-Check Codes EE229B - Final Project Capacity-Approaching Low-Density Parity-Check Codes Pierre Garrigues EECS department, UC Berkeley garrigue@eecs.berkeley.edu May 13, 2005 Abstract The class of low-density parity-check

More information

2014 IEEE International Symposium on Information Theory. Two-unicast is hard. David N.C. Tse

2014 IEEE International Symposium on Information Theory. Two-unicast is hard. David N.C. Tse Two-unicast is hard Sudeep Kamath ECE Department, University of California, San Diego, CA, USA sukamath@ucsd.edu David N.C. Tse EECS Department, University of California, Berkeley, CA, USA dtse@eecs.berkeley.edu

More information

Ideal Hierarchical Secret Sharing Schemes

Ideal Hierarchical Secret Sharing Schemes Ideal Hierarchical Secret Sharing Schemes Oriol Farràs Carles Padró June 30, 2011 Abstract Hierarchical secret sharing is among the most natural generalizations of threshold secret sharing, and it has

More information

EE376A: Homeworks #4 Solutions Due on Thursday, February 22, 2018 Please submit on Gradescope. Start every question on a new page.

EE376A: Homeworks #4 Solutions Due on Thursday, February 22, 2018 Please submit on Gradescope. Start every question on a new page. EE376A: Homeworks #4 Solutions Due on Thursday, February 22, 28 Please submit on Gradescope. Start every question on a new page.. Maximum Differential Entropy (a) Show that among all distributions supported

More information

Capacity of AWGN channels

Capacity of AWGN channels Chapter 3 Capacity of AWGN channels In this chapter we prove that the capacity of an AWGN channel with bandwidth W and signal-tonoise ratio SNR is W log 2 (1+SNR) bits per second (b/s). The proof that

More information

Introduction to Information Theory. Uncertainty. Entropy. Surprisal. Joint entropy. Conditional entropy. Mutual information.

Introduction to Information Theory. Uncertainty. Entropy. Surprisal. Joint entropy. Conditional entropy. Mutual information. L65 Dept. of Linguistics, Indiana University Fall 205 Information theory answers two fundamental questions in communication theory: What is the ultimate data compression? What is the transmission rate

More information

Dept. of Linguistics, Indiana University Fall 2015

Dept. of Linguistics, Indiana University Fall 2015 L645 Dept. of Linguistics, Indiana University Fall 2015 1 / 28 Information theory answers two fundamental questions in communication theory: What is the ultimate data compression? What is the transmission

More information

TUTTE POLYNOMIALS OF q-cones

TUTTE POLYNOMIALS OF q-cones TUTTE POLYNOMIALS OF q-cones JOSEPH E. BONIN AND HONGXUN QIN ABSTRACT. We derive a formula for the Tutte polynomial t(g ; x, y) of a q-cone G of a GF (q)-representable geometry G in terms of t(g; x, y).

More information

RESEARCH ARTICLE. An extension of the polytope of doubly stochastic matrices

RESEARCH ARTICLE. An extension of the polytope of doubly stochastic matrices Linear and Multilinear Algebra Vol. 00, No. 00, Month 200x, 1 15 RESEARCH ARTICLE An extension of the polytope of doubly stochastic matrices Richard A. Brualdi a and Geir Dahl b a Department of Mathematics,

More information

Only Intervals Preserve the Invertibility of Arithmetic Operations

Only Intervals Preserve the Invertibility of Arithmetic Operations Only Intervals Preserve the Invertibility of Arithmetic Operations Olga Kosheleva 1 and Vladik Kreinovich 2 1 Department of Electrical and Computer Engineering 2 Department of Computer Science University

More information

Chapter 1. Preliminaries

Chapter 1. Preliminaries Introduction This dissertation is a reading of chapter 4 in part I of the book : Integer and Combinatorial Optimization by George L. Nemhauser & Laurence A. Wolsey. The chapter elaborates links between

More information

On the Polarization Levels of Automorphic-Symmetric Channels

On the Polarization Levels of Automorphic-Symmetric Channels On the Polarization Levels of Automorphic-Symmetric Channels Rajai Nasser Email: rajai.nasser@alumni.epfl.ch arxiv:8.05203v2 [cs.it] 5 Nov 208 Abstract It is known that if an Abelian group operation is

More information

An Extended Fano s Inequality for the Finite Blocklength Coding

An Extended Fano s Inequality for the Finite Blocklength Coding An Extended Fano s Inequality for the Finite Bloclength Coding Yunquan Dong, Pingyi Fan {dongyq8@mails,fpy@mail}.tsinghua.edu.cn Department of Electronic Engineering, Tsinghua University, Beijing, P.R.

More information