Mutual Information, Synergy and Some Curious Phenomena for Simple Channels

I. Kontoyiannis
Div. of Applied Mathematics & Dept. of Computer Science, Brown University, Providence, RI 02912, USA
Email: yiannis@dam.brown.edu

B. Lucena
Division of Computer Science, UC Berkeley, Soda Hall, Berkeley, CA 94720, USA
Email: lucena@cs.berkeley.edu

Abstract. Suppose we are allowed to observe two equally noisy versions of some signal X, where the level of the noise is fixed. We are given a choice: We can either observe two independent noisy versions of X, or two correlated ones. We show that, contrary to what classical statistical intuition suggests, it is often the case that correlated data is more valuable than independent data. We investigate this phenomenon in a variety of contexts, give numerous examples for standard families of channels, and present general sufficient conditions for deciding this dilemma. One of these conditions draws an interesting connection with the information-theoretic notion of synergy, which has received a lot of attention in the neuroscience literature recently.

I. INTRODUCTION

The following examples motivate much of our discussion.

A broadcast channel. Consider the problem of communicating a message to two remote receivers. We are given two options. Either send the message to two independent intermediaries and have each of them relay the message to one of the receivers, or send the message to only one intermediary and have her re-send the message to the two receivers in two independent transmissions. Assuming that the two receivers are allowed to cooperate, which option is more efficient? Although intuition perhaps suggests that it is preferable to use two independent intermediaries, so that we are not stuck with the noise incurred in the first transmission, in many cases this intuition turns out to be wrong.

A sampling problem. Suppose we want to test for the presence of a disease in a certain population, and we do so by randomly selecting and testing members of the population. Assuming the test is imperfect (there is a certain chance we might get a false positive or a false negative), is it better to test one person twice, or two people, once each? Again, although it may seem more reasonable to test two people independently, we find that the opposite is often true.

These questions are formalized as follows; see Fig. 1. We have two sets of conditional distributions, or channels, P1 = (P1(y|x)) and P2 = (P2(z|y)), on the same alphabet A.

SCENARIO 1. INDEPENDENT OBSERVATIONS. Suppose X has a given distribution on A, let Y1, Y2 be conditionally independent given X, each with distribution P1(y|X), and let Z1 and Z2 be distributed according to P2(z|Y1) and P2(z|Y2), independently of the remaining variables. In this case, Z1 and Z2 are two (conditionally) independent, noisy observations of the variable of interest X.

SCENARIO 2. CORRELATED OBSERVATIONS. Given X as before, let Y be distributed according to P1(y|X), and let W1, W2 be conditionally independent of X and of one another given Y, each of them distributed according to P2(z|Y). In this case the two noisy observations W1 and W2 are typically not conditionally independent given X.

Fig. 1. Independent observations (left) vs. correlated observations (right).

The joint distribution of (X, Z1) is of course the same as the joint distribution of (X, W1), so there is no difference in observing Z1 or W1. But the joint distribution of (X, Z1, Z2) is different from that of (X, W1, W2), so the amount of information we obtain from the second observation is different in these two scenarios.
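The two scenarios are easy to set up numerically. The following short sketch is ours and not part of the paper: the helper names (mutual_info, scenario_joints, compare) are our own choices, channels are passed as row-stochastic matrices with P1[x, y] = P1(y|x) and P2[y, z] = P2(z|y), and all mutual informations are returned in nats.

import numpy as np

def mutual_info(joint):
    """I between the row variable and the column variable of a 2-D pmf, in nats."""
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log(joint[mask] / (pa @ pb)[mask])))

def scenario_joints(px, P1, P2):
    """Joint pmfs of (X, obs1, obs2) under the two scenarios, each of shape (m, m, m)."""
    pz_given_x = P1 @ P2                                   # law of a single Z given X
    # Scenario 1: Z1, Z2 conditionally independent given X
    joint_indep = px[:, None, None] * pz_given_x[:, :, None] * pz_given_x[:, None, :]
    # Scenario 2: one Y drawn from P1(.|X), then two independent uses of P2 on the same Y
    joint_corr = np.einsum('x,xy,yz,yw->xzw', px, P1, P2, P2)
    return joint_indep, joint_corr

def compare(px, P1, P2):
    """Return the pair (I(X;Z1,Z2), I(X;W1,W2)) in nats."""
    px, P1, P2 = (np.asarray(a, float) for a in (px, P1, P2))
    ji, jc = scenario_joints(px, P1, P2)
    m = len(px)
    return mutual_info(ji.reshape(m, m * m)), mutual_info(jc.reshape(m, m * m))

# quick demo: X uniform and both channels equal to a BSC(0.2)
bsc02 = np.array([[0.8, 0.2], [0.2, 0.8]])
print(compare([0.5, 0.5], bsc02, bsc02))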
In this note we consider the following question: For a given pair of channels P1 and P2 and for a fixed source distribution, we can choose between the two independent observations (Z1, Z2) or the two correlated observations (W1, W2). If our goal is to maximize the mutual information between X and our observations, which scenario should we pick?

At first glance, it is perhaps natural to expect that the two (conditionally) independent observations would always be better. The first goal of this work is to show that this intuition is often incorrect. We exhibit simple and very natural examples for which correlated observations provide an advantage over independent ones.

Our second goal is to investigate the underlying reasons for this phenomenon. For a variety of channels of interest we derive simple sufficient conditions which guarantee that one or the other scenario is preferable. Moreover, in attempting to understand the surprising phenomenon where the second scenario is better (that is, when the second correlated observation is more valuable than the second independent observation), we draw a connection with the statistical phenomenon of synergy identified in neurobiological studies of the brain.

Of course one could choose a figure of merit other than mutual information: we could ask which of the two scenarios yields a channel with a higher capacity, or we could compare the different probabilities of error when estimating X based on the two different types of observations. We pursue these and other related questions in a longer version of this work.

Throughout the paper, when we say that correlated observations are better than independent ones we mean that I(X; W1, W2) > I(X; Z1, Z2). All logarithms are natural logarithms with base e, and all familiar information-theoretic quantities are therefore expressed in nats. All proofs are omitted here; complete arguments will be presented in the longer version of this paper.

In Section II we look at the simplest case of binary channels. In Section III we consider a more general class of finite-alphabet channels called Potts channels, and in Section IV we offer some general results valid for arbitrary channels. There we introduce the concept of synergy, we discuss its relevance in this setting, and we show that the existence of synergy is a sufficient condition for our surprising phenomenon, namely for the correlated observations (W1, W2) to carry more information about X than the independent observations (Z1, Z2). In Section V we give general results on the presence or absence of synergy for Gaussian channels.

Note that various related issues have been examined in the literature. The idea of synergy implicitly appears in [6][3][4][9]; for connections with neuroscience see [1][8]; the relationship of some related statistical issues with von Neumann's problem of computation in noisy circuits is examined in [2]; some analogous problems to those treated here for large tree networks are considered in [7].

II. BINARY CHANNELS

Example 1. Suppose P1 is the Z-channel with parameter δ ∈ (0, 1), denoted Z(δ), and P2 is the binary symmetric channel with parameter ε ∈ (0, 1/2), denoted BSC(ε); see Figure 2.

Fig. 2. The Z(δ) channel and the BSC(ε) channel.

This setting can be interpreted as a simple model for the sampling problem described in the Introduction: The root variable X is equal to 1 if a disease is present in a certain population, and it is 0 otherwise. Suppose our prior understanding is that X is Bernoulli(p), i.e., the prior probability that the disease is not present is (1 − p), and the prior probability that it is present is p, in which case a proportion (1 − δ) > 0 of the people are infected. If the disease is not present then we will certainly pick a healthy individual; otherwise the probability that we will pick an infected individual is (1 − δ). Moreover, we suppose that in testing the selected individuals, the probability of either a false positive or a false negative is ε. With fairly realistic choices of the parameters p, δ and ε, direct calculation shows that testing one individual twice is more valuable than testing two people, once each, so that correlated observations are better than independent ones, or, formally, I(X; W1, W2) > I(X; Z1, Z2). In fact, as we show next, this phenomenon occurs for a wide range of parameter values.

Proposition 1. (Binary Asymmetric Channels) Suppose that X is Bernoulli(p), P1 is the Z(δ) channel and P2 is the BSC(ε) channel, where the parameters are in the range p ∈ (0, 1), δ ∈ (0, 1), ε ∈ (0, 1/2). For all p ∈ (0, 1):
(i) For all ε ∈ (0, 1/2) there exists δ small enough such that independent observations are better, i.e., there exists δ0 such that δ ∈ (0, δ0) implies I(X; W1, W2) < I(X; Z1, Z2).
(ii) For all ε ∈ (0, 1/2) there exists δ large enough so that correlated observations are better, i.e., there exists δ1 such that δ ∈ (δ1, 1) implies I(X; W1, W2) > I(X; Z1, Z2).
(iii) For any δ ∈ (0, 1) there exists ε small enough such that independent observations are better, i.e., there exists ε0 such that ε < ε0 implies I(X; W1, W2) < I(X; Z1, Z2).
(iv) For δ > 2p/(2p + 1), correlated observations are better for large enough ε, and for δ < 2p/(2p + 1), independent observations are better for large enough ε: for δ > 2p/(2p + 1) there exists ε1 such that ε > ε1 implies I(X; W1, W2) > I(X; Z1, Z2), whereas for δ < 2p/(2p + 1) there exists ε1 such that ε > ε1 implies I(X; W1, W2) < I(X; Z1, Z2).

The three diagrams in Figure 3 show numerical examples that illustrate the result of the proposition quantitatively.

Fig. 3. Lighter color indicates regions in the δ-ε plane where the correlated observations are more informative, for three different values of p.
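As an illustration of Proposition 1, the following sketch (ours, reusing the compare() helper from the sketch in the Introduction) evaluates both scenarios for the Z(δ)/BSC(ε) pair. The particular parameter values below are illustrative choices of ours, not the ones used in the text.

import numpy as np

def z_channel(delta):
    # X = 0 -> 0 with probability 1;  X = 1 -> 1 w.p. 1 - delta, 0 w.p. delta
    return np.array([[1.0, 0.0], [delta, 1.0 - delta]])

def bsc(eps):
    return np.array([[1.0 - eps, eps], [eps, 1.0 - eps]])

p = 0.5
px = np.array([1.0 - p, p])                    # P(X = 1) = p

# (ii): delta large enough -> correlated observations win
I_ind, I_corr = compare(px, z_channel(0.99), bsc(0.1))
print(I_corr > I_ind)                          # True for these values

# (iii): eps small enough -> independent observations win
I_ind, I_corr = compare(px, z_channel(0.99), bsc(0.001))
print(I_corr > I_ind)                          # False for these values

# (iv): for eps close to 1/2 the boundary sits near delta = 2p/(2p+1)
eps, d_star = 0.49, 2 * p / (2 * p + 1)
for delta in (d_star - 0.05, d_star + 0.05):
    I_ind, I_corr = compare(px, z_channel(delta), bsc(eps))
    print(delta, I_corr - I_ind)               # the sign flips across d_star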

The above proposition says that, in the particular case where P1 is Z(δ) and P2 is BSC(ε), correlated observations are better than independent ones only when δ is large enough, that is, when the first channel is highly non-symmetric. At the other extreme, in the following proposition we show that when P1 and P2 are both BSCs, then correlated observations are never better than independent ones.

Proposition 2. (Binary Symmetric Channels) Suppose that X is Bernoulli(p), and P1 and P2 are BSCs with parameters ε1 and ε2, respectively. Then for any choice of the parameters p ∈ (0, 1), ε1 ∈ (0, 1/2), ε2 ∈ (0, 1/2), independent observations are always at least as good as correlated ones, i.e., I(X; Z1, Z2) ≥ I(X; W1, W2).

Roughly speaking, Proposition 1 says that the surprising phenomenon (correlated observations being better than independent ones) only occurs when the first channel is highly non-symmetric, and Proposition 2 says that it never happens for symmetric channels. This may seem to suggest that the surprise is caused by this lack of symmetry but, as we show in the following section, when the alphabet is not binary we can still get the surprise with perfectly symmetric channels.

III. THE POTTS CHANNEL

A natural generalization of the BSC to a general finite alphabet A with m ≥ 3 elements is the Potts(β) channel, defined by the conditional probabilities W(i|i) = β for all i ∈ A, and W(j|i) = (1 − β)/(m − 1) for all i ≠ j in A, where the parameter β is in (0, 1); see Figure 4.

Fig. 4. The Potts channel with parameter β on an alphabet of size m.

Proposition 3. (Potts Channels) If the root X is uniformly distributed and both P1 and P2 are Potts channels with the same parameter β, then there exist 0 < β1 < β2 < 1 such that:
(i) For all β > β2, independent observations are better than correlated ones, i.e., I(X; Z1, Z2) > I(X; W1, W2).
(ii) For all β < β1, correlated observations are better than independent ones, i.e., I(X; Z1, Z2) < I(X; W1, W2).
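A quick scan (again ours, reusing compare() from the sketch in the Introduction) illustrates the two-threshold behaviour asserted in Proposition 3 for one small alphabet size.

import numpy as np

def potts(beta, m):
    """Potts channel on m symbols: keep the symbol w.p. beta, else uniform over the others."""
    off = (1.0 - beta) / (m - 1)
    return np.full((m, m), off) + (beta - off) * np.eye(m)

m = 3
px = np.full(m, 1.0 / m)                     # uniform root, as in Proposition 3
for beta in np.linspace(0.05, 0.95, 19):
    I_ind, I_corr = compare(px, potts(beta, m), potts(beta, m))
    # positive difference: correlated better; negative: independent better
    print(round(float(beta), 2), I_corr - I_ind)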
IV. ARBITRARY CHANNELS AND SYNERGY

Next we investigate general conditions under which the surprising phenomenon of getting more information from correlated observations than from independent ones occurs. Since the distributions of (X, Z1) and of (X, W1) are the same, the amount of information obtained from the first sample is the same in both scenarios, I(X; Z1) = I(X; W1). So the surprise will occur only if the second correlated sample is more valuable than the second independent one, namely if I(X; W2|W1) > I(X; Z2|Z1). But since Z1 and Z2 are conditionally independent given X, we always have I(X; Z2|Z1) ≤ I(X; Z2) = I(X; W2), and therefore a sufficient condition for the surprise is that I(X; W2|W1) > I(X; W2). This says that the information I(X; W2|W1) we obtain about X from the second observation W2, given W1, is greater than the information I(X; W2) = I(X; W1) obtained from the first observation; in other words, W2 not only contains no redundant information, but it somehow collaborates with W1 in order to give extra information about X. This idea is formalized in the following definition.

Definition 1. Consider three jointly distributed random variables (X, V1, V2). We define the synergy between them as

S(X, V1, V2) = I(X; V2|V1) − I(X; V2),

and we say that the random variables X, V1, V2 are synergetic whenever S(X, V1, V2) > 0.

The concept and the term synergy as defined here are borrowed from the neuroscience literature; see, e.g., [8]. There, X represents some kind of sensory stimulus, for example an image, and the observations V1, V2 represent the responses of two neurons in the brain, for example in the primary visual cortex.

In [8] the amount of synergy between V1 and V2 is defined as the amount of information (V1, V2) jointly provide about X, minus the sum of the amounts of information they provide individually:

S(X, V1, V2) = I(X; V1, V2) − I(X; V1) − I(X; V2).   (1)

A simple application of the chain rule for mutual information shows that this is equivalent to our definition. Similarly, by the chain rule we also have that the synergy is circularly symmetric in its three arguments. Alternatively, the synergy can be expressed in terms of entropies as

S(X, V1, V2) = −H(X, V1, V2) + H(X, V1) + H(X, V2) + H(V1, V2) − H(X) − H(V1) − H(V2).   (2)

Proposition 4. (General Channels and Synergy) Let X have an arbitrary distribution, and P1, P2 be two arbitrary finite-alphabet channels. If the synergy S(X, W1, W2) is positive, then the correlated observations are better than the independent ones, I(X; W1, W2) > I(X; Z1, Z2). More generally, the correlated observations are better than the independent ones if and only if S(X, W1, W2) > −I(Z1; Z2).

Although the statement above is given for finite-alphabet channels, the same result holds for channels on arbitrary alphabets. Motivated by Proposition 4, we now look at three simple examples, and determine conditions under which synergy does or does not occur.
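The synergy of formula (1) is straightforward to evaluate numerically for finite alphabets. The following sketch is ours (the names mi2 and synergy are not from the paper); as a quick check it reproduces the maximal-synergy configuration of Example 2 below, for which the synergy equals log 2.

import numpy as np

def mi2(joint):
    """Mutual information (nats) between the row and column variables of a 2-D pmf."""
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log(joint[mask] / (pa @ pb)[mask])))

def synergy(p):
    """S(X,V1,V2) = I(X;V1,V2) - I(X;V1) - I(X;V2) for a pmf p of shape (|X|,|V1|,|V2|)."""
    nx = p.shape[0]
    return (mi2(p.reshape(nx, -1))       # I(X; V1,V2)
            - mi2(p.sum(axis=2))         # I(X; V1)
            - mi2(p.sum(axis=1)))        # I(X; V2)

# check: X and V1 independent fair coins, V2 = X xor V1 (the configuration of Example 2 below)
p = np.zeros((2, 2, 2))
for x in (0, 1):
    for v1 in (0, 1):
        p[x, v1, x ^ v1] = 0.25
print(synergy(p), np.log(2))             # both equal log 2 = 0.6931...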

Example 2. Maximal Binary Synergy. Consider three binary variables (X, V1, V2), where we think of V1 and V2 as noisy observations of X, and we assume that (X, V1) has the same distribution as (X, V2). This covers all the scenarios considered in Section II. Under what conditions is the synergy S(X, V1, V2) maximized? We have

S(X, V1, V2) = I(X; V2|V1) − I(X; V2) = H(V2) − H(V2|X, V1) − I(V1; V2) − I(X; V2) ≤ H(V2) ≤ log 2,

with equality if and only if H(V2) = log 2 and H(V2|X, V1) = I(V1; V2) = I(X; V2) = 0, that is, if and only if the three variables are pairwise independent, they all have Bernoulli(1/2) distributions, and any one of them is a deterministic function of the other two. This can be realized in essentially only one way: X and V1 are independent Bernoulli(1/2) random variables, and V2 is their sum modulo 2. In that case the maximal synergy is also intuitively obvious: The first observation is independent of X and hence entirely useless, but the second one is maximally useful (given the first), as it tells us the value of X exactly.

Example 3. Frustration in a Simple Spin System. Matsuda in [5] presented a simple example which exhibits an interesting connection between the notion of synergy and the phenomenon of frustration in a physical system. In our notation, let X, V1, V2 denote three random variables with values in A = {+1, −1}. Physically, these represent the directions of the spins of three different particles. Assume that their joint distribution is given by the Gibbs measure

Pr(X = x, V1 = v1, V2 = v2) = (1/Z) e^{α(x v1 + x v2 + v1 v2)},

where Z = Z(α) = 2e^{3α} + 6e^{−α} is the normalizing constant, and α is a parameter related to the temperature of the system and the strength of the interaction between the particles. When α is positive, the preferred states of the system (the triplets (x, v1, v2) that have higher probability) are those in which each pair of particles has the same spin, namely x = v1 = v2 = +1 and x = v1 = v2 = −1. Similarly, when α is negative the preferred states are those in which the spins in each pair are different; but this is of course impossible to achieve for all three pairs simultaneously. In physics, this phenomenon, where the preferred local configurations are incompatible with the global state of the system, is referred to as frustration. A cumbersome but straightforward calculation shows that the synergy S(X, V1, V2) can be calculated explicitly to be

S(X, V1, V2) = 2 log(Z(α)/8) + (6(e^{3α} + e^{−α})/Z(α)) log( 2e^{α}/(e^{3α} + e^{−α}) ),

which is plotted as a function of α in Figure 5. We observe that the synergy is positive exactly when the system is frustrated, i.e., when α is negative.

Fig. 5. The synergy of the spin system in Example 3 as a function of α.
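The closed-form expression above can be cross-checked against a direct evaluation of formula (1). The sketch below is ours and reuses the synergy() helper from the previous sketch; numerically, the two columns agree and are positive exactly when α < 0.

import numpy as np

def spin_synergy(alpha):
    """S(X,V1,V2) for the Gibbs measure of Example 3, evaluated from formula (1)."""
    vals = (+1, -1)
    p = np.array([[[np.exp(alpha * (x*v1 + x*v2 + v1*v2)) for v2 in vals]
                   for v1 in vals] for x in vals])
    p /= p.sum()
    return synergy(p)

def spin_synergy_formula(alpha):
    """Closed-form expression for S(X,V1,V2) in Example 3."""
    Z = 2*np.exp(3*alpha) + 6*np.exp(-alpha)
    A = np.exp(3*alpha) + np.exp(-alpha)
    return 2*np.log(Z/8) + (6*A/Z) * np.log(2*np.exp(alpha)/A)

for alpha in (-1.0, -0.5, -0.1, 0.0, 0.1, 0.5, 1.0):
    print(alpha, spin_synergy(alpha), spin_synergy_formula(alpha))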
Example 4. Gaussian Additive Noise Channels. Suppose X is a Gaussian signal and V1, V2 are observations obtained through additive Gaussian noise channels. Specifically, we assume that the total noise power between X and each Vi is fixed at some value N, but that we can control the degree of dependence between the two observations:

V1 = X + r Z0 + s Z1,
V2 = X + r Z0 + s Z2,

where Z0, Z1, Z2 are independent N(0, N) variables which are also independent of X, which is N(0, 1); the parameter r ∈ [0, 1] is in our control, and s is chosen so that the total noise power stays constant, i.e., r² + s² = 1. See Figure 6.

Fig. 6. The additive Gaussian noise setting of Example 4.

How should we choose the parameter r in order to maximize the information I(X; V1, V2)? Taking r = 0 corresponds to independent observations, taking 0 < r < 1 gives correlated observations as in the second scenario, and the extreme case r = 1 gives the same observation twice, V1 = V2. The covariance matrix of (X, V1, V2) is easily read off the above description, and the mutual information I(X; V1, V2) can be calculated explicitly to be

I(X; V1, V2) = (1/2) log[ 1 + 2/(N(1 + r²)) ],

which is obviously decreasing in r. Therefore the maximum is achieved at r = 0, meaning that here independent observations are always better. Proposition 4 then implies that we never have positive synergy for any r.
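The closed form in Example 4 can be verified directly from the covariance matrix. The sketch below is ours; gaussian_mi computes the mutual information between two groups of jointly Gaussian coordinates from determinants of sub-covariance matrices.

import numpy as np

def gaussian_mi(K, idx_a, idx_b):
    """I(A;B) in nats for jointly Gaussian variables with covariance matrix K."""
    a, b = list(idx_a), list(idx_b)
    det = np.linalg.det
    return 0.5 * np.log(det(K[np.ix_(a, a)]) * det(K[np.ix_(b, b)]) / det(K[np.ix_(a + b, a + b)]))

N = 1.0
for r in (0.0, 0.25, 0.5, 0.75, 0.99):
    # order of coordinates: X, V1, V2, with V_i = X + r*Z0 + s*Z_i and r^2 + s^2 = 1
    K = np.array([[1.0, 1.0,            1.0           ],
                  [1.0, 1.0 + N,        1.0 + r**2 * N],
                  [1.0, 1.0 + r**2 * N, 1.0 + N       ]])
    closed_form = 0.5 * np.log(1.0 + 2.0 / (N * (1.0 + r**2)))
    print(r, gaussian_mi(K, [0], [1, 2]), closed_form)   # the two agree and decrease in r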

V. GENERAL GAUSSIAN CHANNELS

Example 5. The General Symmetric Gaussian Case. Suppose (X, V1, V2) are arbitrary (jointly) Gaussian random variables such that (X, V1) and (X, V2) have the same distribution. When is the synergy S(X, V1, V2) positive? As we will see, the synergy is positive if and only if the correlation between the two observations is either negative, or positive but small enough.

Without loss of generality we may take all three random variables to have zero mean. In the most general case, the covariance matrix of (X, V1, V2) can be written in the form

K = [ σ²  ρ1  ρ1 ]
    [ ρ1  τ²  ρ2 ]
    [ ρ1  ρ2  τ² ]

for arbitrary positive variances σ², τ², and for ρ1 ∈ (−στ, στ), ρ2 ∈ (−τ², τ²). In order to ensure that K is positive definite (so that it can be a legitimate covariance matrix), we also need to restrict the parameter values so that det(K) > 0, which reduces to the relation

τ² + ρ2 > 2ρ1²/σ².   (3)

Using the expression in (2), where the entropies are now interpreted as differential entropies, the synergy can be evaluated by a straightforward calculation,

S(X, V1, V2) = (1/2) log{ (τ² + ρ2)(σ²τ² − ρ1²)² / ( σ²τ⁴ [σ²(τ² + ρ2) − 2ρ1²] ) }.

Solving the inequality S(X, V1, V2) > 0, we obtain that we have synergy if and only if

ρ2 < ρ1²τ² / (2σ²τ² − ρ1²),   (4)

which means that we have synergy if and only if ρ2 is negative (but still not too negative, subject to the constraint (3)) or positive but small enough according to (4).

Example 6. A Gaussian Example with Asymmetric Observations. We take X and the observations V1, V2 to be jointly Gaussian, all with unit variances and zero means. We assume that the correlation between the two observations remains fixed, but we let the correlation between X and each Vi vary in such a way that it is split between the two: E(XV1) = λρ and E(XV2) = (1 − λ)ρ. We also assume that λ ∈ (0, 1) is a parameter we can control, that ρ ∈ (0, 1) is fixed, and we define ρ̄ > 0 by ρ² + ρ̄² = 1; the fixed correlation between the two observations is E(V1V2) = ρ̄. By symmetry, we can restrict attention to the range λ ∈ (0, 1/2]. Our question here is to see whether the asymmetry in the two observations (corresponding to values of λ different from 1/2) increases the synergy or not.

Before proceeding with the answer we look at the two extreme points. For λ = 0 we see that X and V1 are independent N(0, 1) variables, and their joint distribution with V2 can be described by V2 = ρX + ρ̄V1. In this case the mutual information I(X; V2) is some easy-to-calculate finite number, whereas the conditional mutual information I(X; V2|V1) is infinite, because X and V2 are deterministically related given V1. Therefore the synergy S(X, V1, V2) = I(X; V2|V1) − I(X; V2) = ∞.

At the other extreme, when λ = 1/2, we have a symmetric distribution as in the previous example, with σ = τ = 1, ρ1 = ρ/2 and ρ2 = ρ̄. Therefore, here we only have synergy when ρ2 = ρ̄ is negative or small enough, i.e., when the correlation between the two observations is either negative or small enough. Specifically, substituting the above parameter values in condition (4), we see that we have synergy if and only if ρ̄ satisfies ρ̄³ + ρ̄² + 7ρ̄ − 1 < 0, i.e., if and only if 0 < ρ̄ < 0.13968...

The fact that the synergy is infinite for λ = 0 and reduces to a reasonable value (which may or may not be positive) at λ = 1/2 suggests that perhaps the asymmetry in some way helps the synergy, and that the synergy may in fact be decreasing with λ. As it turns out, this is almost true:

Proposition 5. (Asymmetric Gaussian Observations) Suppose that (X, V1, V2) are jointly Gaussian as described above. Let ε0 = 1 − √(4√5 − 8) ≈ 0.028.
(i) If ρ ∈ (0, 1 − ε0), then the synergy is decreasing in λ for all λ ∈ (0, 1/2).
(ii) If ρ ∈ (1 − ε0, 1), then the synergy is decreasing for λ ∈ (0, λ*) and increasing for λ ∈ (λ*, 1/2), where

λ* = λ*(ρ) = (1/2)[ 1 − √(1 − 4ρ̄/ρ²) ].
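The behaviour described in Example 6 and Proposition 5 is easy to explore numerically. The sketch below is ours: asym_synergy evaluates S(X, V1, V2) directly from the covariance matrix, and ρ = 0.99 is an illustrative choice of ours lying above the critical correlation of Proposition 5.

import numpy as np

def asym_synergy(lam, rho):
    """S(X,V1,V2) for the unit-variance Gaussian triple of Example 6:
    E[XV1] = lam*rho, E[XV2] = (1-lam)*rho, E[V1V2] = sqrt(1 - rho**2)."""
    rb = np.sqrt(1.0 - rho**2)
    K = np.array([[1.0,           lam*rho,  (1.0-lam)*rho],
                  [lam*rho,       1.0,      rb           ],
                  [(1.0-lam)*rho, rb,       1.0          ]])
    det = np.linalg.det
    i_x_v1v2 = 0.5 * np.log(det(K[1:, 1:]) / det(K))        # determinant of the X-block is 1
    i_x_v1 = -0.5 * np.log(1.0 - (lam*rho)**2)
    i_x_v2 = -0.5 * np.log(1.0 - ((1.0-lam)*rho)**2)
    return i_x_v1v2 - i_x_v1 - i_x_v2

rho = 0.99
lam_star = 0.5 * (1.0 - np.sqrt(1.0 - 4.0*np.sqrt(1.0 - rho**2)/rho**2))
for lam in np.linspace(0.05, 0.5, 10):
    print(round(float(lam), 2), asym_synergy(lam, rho))      # decreases down to about lam_star, then increases
print("lam* =", lam_star)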
ACKNOWLEDGMENTS

I.K. was supported in part by a Sloan Foundation Research Fellowship and by NSF grant #-73378-CCR. B.L. was supported in part by an NSF Mathematical Sciences Postdoctoral Research Fellowship.

REFERENCES

[1] N. Brenner, S.P. Strong, R. Koberle, W. Bialek, and R.R. de Ruyter van Steveninck. Synergy in a neural code. Neural Comput., 12(7):1531–1552, 2000.
[2] W. Evans, C. Kenyon, Y. Peres, and L.J. Schulman. Broadcasting on trees and the Ising model. Ann. Appl. Probab., 10(2):410–433, 2000.
[3] R.M. Fano. Transmission of Information: A Statistical Theory of Communications. The M.I.T. Press, Cambridge, Mass., 1961.
[4] T.S. Han. Nonnegative entropy measures of multivariate symmetric correlations. Inform. and Control, 36(2):133–156, 1978.
[5] H. Matsuda. Physical nature of higher-order mutual information: Intrinsic correlations and frustration. Phys. Rev. E, 62(3, part A):3096–3102, 2000.
[6] W.J. McGill. Multivariate information transmission. Trans. I.R.E., PGIT-4:93–111, 1954.
[7] E. Mossel. Reconstruction on trees: beating the second eigenvalue. Ann. Appl. Probab., 11(1):285–300, 2001.
[8] E. Schneidman, W. Bialek, and M.J. Berry II. Synergy, redundancy, and independence in population codes. Journal of Neuroscience, 23(37):11539–11553, 2003.
[9] R.W. Yeung. A new outlook on Shannon's information measures. IEEE Trans. Inform. Theory, 37(3, part 1):466–474, 1991.