An Infeasibility Result for the Multiterminal Source-Coding Problem


An Infeasibility Result for the Multiterminal Source-Coding Problem

Aaron B. Wagner and Venkat Anantharam, November 22, 2005

Abstract

We prove a new outer bound on the rate-distortion region for the multiterminal source-coding problem. This bound subsumes the best outer bound in the literature and improves upon it strictly in some cases. The improved bound enables us to obtain a new, conclusive result for the binary erasure version of the CEO problem. The bound recovers many of the converse results that have been established for special cases of the problem, including the recent one for the Gaussian version of the CEO problem.

(This research was supported by DARPA under Grants F and N C-8062, under Grant N from the Office of Naval Research, and under Grant ECS from the National Science Foundation. A. B. Wagner: Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, and School of Electrical and Computer Engineering, Cornell University; email: wagner@ece.cornell.edu. V. Anantharam: Department of Electrical Engineering and Computer Sciences, University of California, Berkeley; email: ananth@eecs.berkeley.edu.)

1 Introduction

In their lauded paper [1], David Slepian and Jack K. Wolf characterize the information rates needed to losslessly communicate two correlated, memoryless information sources when these sources are encoded separately. Their well-known result states that two discrete sources Y_1 and Y_2 can be losslessly reproduced if

R_1 > H(Y_1 | Y_2),
R_2 > H(Y_2 | Y_1),
R_1 + R_2 > H(Y_1, Y_2),

where R_1 is the rate of the encoder observing Y_1 and R_2 is the rate of the encoder observing Y_2. Conversely, lossless reproduction is not possible if (R_1, R_2) lies outside the closure of this region. See Cover and Thomas [2, Section 14.4] or Csiszár and Körner [3, Section 3.1] for precise statements of the result and modern proofs. This result is naturally viewed as a multi-source generalization of

the classical result of Shannon [4], which says that, loosely speaking, a discrete memoryless source with known law can be losslessly reproduced if and only if the data rate exceeds the entropy of the source. Shannon too studied a generalization of this result, albeit in a different direction. He studied the problem of reproducing a source imperfectly, subject to a minimum fidelity constraint, and showed that the required rate is given by the well-known rate-distortion formula [4, 5]. One of the central problems of Shannon theory is to understand the limits of source coding for models that combine the two generalizations. That is, we seek to determine the rates required to reproduce two correlated sources, each subject to a fidelity constraint, when the sources are encoded separately (see Fig. 1). Determining the set of achievable rates and distortions for this setup is often called the multiterminal source-coding problem, even though this name suggests a more elaborate network topology. This problem has been unsolved for some time.

The model we consider in this paper is slightly more general and is depicted in Fig. 2. Beyond considering an arbitrary number of encoders, L, we also allow for a hidden source, Y_0, which is not directly observed by any encoder or the decoder, and a side-information source, Y_{L+1}, which is observed by the decoder but not by any encoder. We also permit arbitrary functions of the sources to be reproduced, in addition to, or in place of, the sources themselves. We will therefore use Z_1, Z_2, etc., to denote the instantaneous estimates instead of Ŷ_1, Ŷ_2, etc., as before. In this paper, we will refer to this more general problem as the multiterminal source-coding problem.

Figure 1: Separate encoding of correlated sources. [Encoders 1 and 2 observe Y_1 and Y_2 and communicate at rates R_1 and R_2 to a decoder that outputs Ŷ_1 and Ŷ_2.]

Figure 2: A general model. [Encoders 1 through L observe Y_1, ..., Y_L and communicate at rates R_1, ..., R_L to a decoder that also observes the side information Y_{L+1} and outputs the estimates Z_1, ..., Z_K; the hidden source Y_0 is observed by no one.]
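As a quick illustration of the Slepian-Wolf conditions quoted in the introduction, the region can be checked numerically for any joint pmf of (Y_1, Y_2). The sketch below is ours, not part of the original paper; rates are in nats, matching the paper's convention of natural logarithms, and the helper names are our own.

```python
import math

def sw_entropies(p):
    """Given a joint pmf p[(y1, y2)], return (H(Y1|Y2), H(Y2|Y1), H(Y1,Y2)) in nats."""
    p1, p2 = {}, {}
    for (a, b), v in p.items():
        p1[a] = p1.get(a, 0.0) + v
        p2[b] = p2.get(b, 0.0) + v
    hj = -sum(v * math.log(v) for v in p.values() if v > 0)      # H(Y1, Y2)
    h1 = -sum(v * math.log(v) for v in p1.values() if v > 0)     # H(Y1)
    h2 = -sum(v * math.log(v) for v in p2.values() if v > 0)     # H(Y2)
    return hj - h2, hj - h1, hj

def sw_achievable(p, r1, r2):
    """Check the (open) Slepian-Wolf region: R1 > H(Y1|Y2), R2 > H(Y2|Y1), R1+R2 > H(Y1,Y2)."""
    c1, c2, cj = sw_entropies(p)
    return r1 > c1 and r2 > c2 and r1 + r2 > cj

# Example: doubly symmetric binary source with crossover probability 0.1.
p = {(0, 0): 0.45, (1, 1): 0.45, (0, 1): 0.05, (1, 0): 0.05}
```

For this source, H(Y_1 | Y_2) is the binary entropy of the crossover probability, so (log 2, log 2) is comfortably achievable while (0.1, 0.1) is not.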

One might doubt the wisdom of embellishing the model when even the basic form shown in Fig. 1 is unsolved. But one of the contributions of this paper is to show that, far from obscuring the problem, the added generality actually illuminates it. Of course, the more general problem is also unsolved. Many special cases have been solved, however. For these, the reader is referred to the classical papers of Slepian and Wolf [1], mentioned earlier; Wyner [6]; Ahlswede and Körner [7]; Wyner and Ziv [8]; Körner and Marton [9]; and Gel'fand and Pinsker [10]; and to the more recent papers of Berger and Yeung [11]; Gastpar [12]; Oohama [13]; and Prabhakaran, Tse, and Ramchandran [14]. While all of these papers contain conclusive results, these results are established using coding theorems that are tailored to the special cases under consideration.

The solutions to these solved special cases suggest a coding technique for the general model [15, 16]. The idea is this. Each encoder first quantizes its observation as in single-user rate-distortion theory. The quantized processes are then losslessly communicated to the decoder using the binning scheme of Cover [17]. The decoder uses the quantized processes to produce the desired estimates. The set of rate-distortion vectors that can be achieved using this scheme is described in Section 3. This inner bound to the rate-distortion region is tight in all of the special cases listed above except that of Körner and Marton [9]. Indeed, the Körner-Marton problem seems to require a custom coding technique that relies on the problem's unique structure. This suggests that the multiterminal source-coding problem may not have a classical single-letter solution. We attack this problem, therefore, by proving single-letter inner and outer bounds on the rate-distortion region. The best inner bound in the literature has just been described. The best outer bound, which is due to Berger [15] and Tung [16], is described in Section 3.
In light of the result of Körner and Marton, it is clear that the two bounds must not coincide in all cases. This gap cannot be entirely attributed to the inner bound, however, as there are instances of the problem that can be solved from first principles for which the Berger-Tung outer bound is strictly bigger than the true rate-distortion region (see Section 3.1 of this paper). Our aim is to provide an improved outer bound for the problem. We prove such a bound in the next section, following a precise formulation of the problem. We show that our bound is contained in (i.e., subsumes) the Berger-Tung outer bound in Section 3. In that section, we also provide several examples for which the containment is strict. One example is the binary erasure version of the CEO problem, the general version of which was introduced by Berger, Zhang, and Viswanathan [18]. The CEO problem is a special case of the multiterminal source-coding problem in which the observed processes Y_1, ..., Y_L are conditionally independent given the hidden process Y_0 and in which the decoder (the CEO) is only interested in estimating the hidden process.¹ Berger, Zhang, and Viswanathan characterize

¹This definition is not as restrictive as it might seem. Indeed, any instance of the multiterminal source-coding problem with a single distortion constraint can be transformed into

the tradeoff between sum rate and Hamming distortion in the high-rate and many-encoder limit. Gel'fand and Pinsker [10] had earlier found the rate region in the lossless reproduction case. We consider the problem in which Y_0 is binary and uniform, and the encoders observe Y_0 through independent binary erasure channels. The decoder reproduces Y_0 subject to a constraint on the erasure distortion (see Section 3.2 or Cover and Thomas [2, p. 370]). For this problem, we show that our outer bound is tight in the sum rate for any number of users. In contrast, the Berger-Tung outer bound contains points whose sum rate is strictly smaller than the optimum. In our view, this result is of interest in its own right. The binary erasure CEO problem arises naturally in sensor networks in which the sensors occasionally sleep to conserve energy. This application is described in Section 3.2. The result also provides an example for which the binning-based coding scheme mentioned earlier is optimum. Finally, this is one of relatively few conclusive results for the multiterminal source-coding problem in general, and the CEO problem in particular. These problems are considered sufficiently difficult that it is worth reporting solutions to special cases.

One of the few other conclusive results available is for the Gaussian version of the CEO problem, which was first studied by Viswanathan and Berger [19]. Here the encoders observe a hidden Gaussian source through independent Gaussian additive-noise channels. The distortion measure is expected squared error. The rate-distortion region for this problem was recently found by Oohama [20, 13] and independently by Prabhakaran, Tse, and Ramchandran [14]. We show that the converse result of these four authors can be recovered from our single-letter outer bound, while the Berger-Tung outer bound contains points that lie outside the true rate-distortion region. The converse results used to solve all of the other special cases mentioned so far are also consequences of our bound.
This is discussed in Section 4. Our outer bound therefore serves to unify most of what is known about the nonexistence of multiuser source codes. This unification is noteworthy in the case of Oohama [13] and Prabhakaran, Tse, and Ramchandran [14] because the connection between their remarkable converse result and the classical discrete results in this area is not immediately apparent. As we will see, subject to some technical caveats, most of the key results in multiterminal source coding can be recovered by combining the general inner bound described earlier with the outer bound described next.

(Footnote 1, continued: an instance of the CEO problem without changing the rate-distortion region by lumping Y_1, ..., Y_L into Y_0 and redefining the distortion measure as needed. Nonetheless, it defines a useful special case.)

2 Formulation and Main Result

We work exclusively in discrete time. We use uppercase letters to denote random variables and vectors, lowercase letters to denote their realizations, and script letters to denote their ranges. Let {Y_0^n(t), Y_1^n(t), ..., Y_L^n(t), Y_{L+1}^n(t)}_{t=1}^n

Figure 3: Notation for the encoding and decoding rules. [Encoder l maps its observation Y_l^n to the message f_l^(n)(Y_l^n); the decoder observes Y_{L+1}^n and forms Z_k^n = φ_k^(n)(Y_{L+1}^n, f_1^(n)(Y_1^n), ..., f_L^(n)(Y_L^n)) for k = 1, ..., K.]

be a vector-valued, finite-alphabet memoryless source. For A ⊆ {1, ..., L}, we denote (Y_l^n(t) : l ∈ A) by Y_A^n(t). If A = {1, ..., L}, we write this simply as Y^n(t). In this context, the set A^c should be interpreted as {1, ..., L}\A rather than {0, ..., L+1}\A. When A = {l}, we shall write Y_l^n(t) and Y_{l^c}^n(t) in place of Y_{{l}}^n(t) and Y_{{l}^c}^n(t), respectively. Also, we use Y_l^n(t_1 : t_2) to denote {Y_l^n(t)}_{t=t_1}^{t_2}, Y_l^n to denote Y_l^n(1 : n), and Y_l^n(t^c) to denote (Y_l^n(1), ..., Y_l^n(t-1), Y_l^n(t+1), ..., Y_l^n(n)). Similar notation will be used for other vectors that appear later.

The notation for the encoding and decoding rules is shown in Fig. 3. For each l in {1, ..., L}, encoder l observes Y_l^n, then employs a mapping

f_l^(n) : 𝒴_l^n → {1, ..., M_l^(n)}

to convey information about it to the decoder. The decoder observes Y_{L+1}^n and uses it and the received messages to estimate K functions of the vector-valued source according to the mappings

φ_k^(n) : 𝒴_{L+1}^n × Π_{l=1}^L {1, ..., M_l^(n)} → 𝒵_k^n

for k = 1, ..., K. We assume that K distortion measures

d_k : Π_{l=0}^{L+1} 𝒴_l × 𝒵_k → R_+

are given. We mention at this point that while the generality of this setup will be useful later when studying examples, it is not needed to appreciate the bounding technique itself. The reader is welcome to focus on the basic model shown in Fig. 1 for that purpose.

Definition 1 The rate-distortion vector (R, D) = (R_1, R_2, ..., R_L, D_1, D_2, ..., D_K) is achievable if there exists a block length n, encoders f_1^(n), ..., f_L^(n), and a decoder (φ_1^(n), ..., φ_K^(n))

such that

R_l ≥ (1/n) log M_l^(n) for all l, and

D_k ≥ (1/n) Σ_{t=1}^n E[ d_k( Y_0^n(t), Y^n(t), Y_{L+1}^n(t), Z_k^n(t) ) ] for all k.   (1)

Let RD be the set of achievable rate-distortion vectors. Its closure, cl RD, is called the rate-distortion region. We will sometimes be concerned with projections of the rate-distortion region. We denote these by, for example, RD | {R_1 = 0}, meaning

{ (R_2, ..., R_L, D_1, ..., D_K) : (0, R_2, ..., R_L, D_1, ..., D_K) ∈ RD }.

In this paper, we view lossless compression as a limit of lossy compression with the distortion tending to zero. More precisely, if we wish to reproduce Y_1 losslessly, we will set, say, Z_1 = Y_1 with d_1 equal to the Hamming distance, and then examine RD | {D_1 = 0}. This convention and Definition 1 together yield a notion of lossless compression that is weaker than the one traditionally used. It is common instead to require that, for all sufficiently large block lengths, there exist a code for which the probability of correctly reproducing the entire vector Y_1^n is arbitrarily close to 1. But a weaker notion is desirable here since we are proving an outer bound, or converse, result.

To state our result, let Y_0, ..., Y_{L+1} be generic random variables with the distribution of the source at a single time. Let Γ_o denote the set of finite-alphabet random variables γ = (U_1, ..., U_L, Z_1, ..., Z_K, W, T) satisfying

(i) (W, T) is independent of (Y_0, Y, Y_{L+1}),

(ii) U_l → (Y_l, W, T) → (Y_0, Y_{l^c}, Y_{L+1}, U_{l^c}) for all l, shorthand for: U_l, (Y_l, W, T), and (Y_0, Y_{l^c}, Y_{L+1}, U_{l^c}) form a Markov chain in this order, and

(iii) (Y_0, Y, W) → (U, Y_{L+1}, T) → Z.

It is straightforward to verify that Γ_o is precisely the set of finite-alphabet random variables (U_1, ..., U_L, Z_1, ..., Z_K, W, T) whose joint distribution with (Y_0, Y, Y_{L+1}) factors as

p(y_0, y, y_{L+1}, u, z, w, t) = p(y_0, y, y_{L+1}) p(w, t) [ Π_{l=1}^L p(u_l | y_l, w, t) ] p(z | u, y_{L+1}, t).

This description is helpful in that it suggests a parametrization of the space Γ_o. Let χ denote the set of finite-alphabet random variables X with the property that Y_1, ..., Y_L are conditionally independent given (X, Y_{L+1}).
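The equivalence between conditions (i)-(iii) and the factorization can be checked numerically on small alphabets: build a joint distribution through the factorization and verify that the Markov conditions hold as vanishing conditional mutual informations. The following sketch is ours, not from the paper; it takes L = 2, binary alphabets, one estimate Z, constant Y_{L+1} and T (therefore omitted), and randomly chosen component pmfs.

```python
import itertools
import math
import random

random.seed(1)

def rand_pmf(n):
    v = [random.random() for _ in range(n)]
    s = sum(v)
    return [x / s for x in v]

# Component distributions of the factorization.
p_src = rand_pmf(8)                                              # p(y0, y1, y2)
p_w = rand_pmf(2)                                                # p(w), independent of the source
p_u1 = {(y, w): rand_pmf(2) for y in (0, 1) for w in (0, 1)}     # p(u1 | y1, w)
p_u2 = {(y, w): rand_pmf(2) for y in (0, 1) for w in (0, 1)}     # p(u2 | y2, w)
p_z = {(u1, u2): rand_pmf(2) for u1 in (0, 1) for u2 in (0, 1)}  # p(z | u1, u2)

# Joint distribution p(y0, y1, y2, w, u1, u2, z) built via the factorization.
joint = {}
for i, (y0, y1, y2) in enumerate(itertools.product((0, 1), repeat=3)):
    for w, u1, u2, z in itertools.product((0, 1), repeat=4):
        pr = (p_src[i] * p_w[w] * p_u1[(y1, w)][u1]
              * p_u2[(y2, w)][u2] * p_z[(u1, u2)][z])
        joint[(y0, y1, y2, w, u1, u2, z)] = pr

def marg(idx):
    m = {}
    for k, v in joint.items():
        key = tuple(k[i] for i in idx)
        m[key] = m.get(key, 0.0) + v
    return m

def cmi(a, b, c):
    """I(K_a; K_b | K_c), where a, b, c are tuples of coordinate indices."""
    pabc, pac, pbc, pc = marg(a + b + c), marg(a + c), marg(b + c), marg(c)
    total = 0.0
    for k, v in pabc.items():
        if v <= 0:
            continue
        ka, kb, kc = k[:len(a)], k[len(a):len(a) + len(b)], k[len(a) + len(b):]
        total += v * math.log(v * pc[kc] / (pac[ka + kc] * pbc[kb + kc]))
    return total

# Coordinates: 0:y0  1:y1  2:y2  3:w  4:u1  5:u2  6:z
cond_i = cmi((3,), (0, 1, 2), ())          # condition (i):   I(W; Y0, Y1, Y2) = 0
cond_ii = cmi((4,), (0, 2, 5), (1, 3))     # condition (ii):  I(U1; Y0, Y2, U2 | Y1, W) = 0
cond_iii = cmi((0, 1, 2, 3), (6,), (4, 5)) # condition (iii): I(Y0, Y, W; Z | U1, U2) = 0
```

All three quantities are zero up to floating-point error, as the factorization predicts; perturbing, say, p_u1 to depend on y0 would make cond_ii strictly positive.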
Note that χ is nonempty since it contains, e.g., X = (Y_1, ..., Y_L).

(Footnote 2: All logarithms and exponentiations in this paper have base e.)

There are many ways of coupling a given X in χ and γ in Γ_o. In this paper, we shall only consider the unique coupling for which X → (Y_0, Y, Y_{L+1}) → γ, which we call the Markov coupling. Whenever the joint distribution of X, (Y_0, Y, Y_{L+1}), and γ arises, we assume that this coupling is in effect. It is evident from the definition of χ that there is considerable latitude in choosing how X depends on Y_0. This is because the sole constraint on the choice of X only depends on the joint distribution of X and (Y, Y_{L+1}). But as the following definition makes clear, this freedom is inconsequential since our outer bound only depends on the distributions of (Y_0, Y, Y_{L+1}, γ) and (X, Y, Y_{L+1}, γ) separately.

Definition 2 Let

RD_o(X, γ) = { (R, D) : Σ_{l∈A} R_l ≥ I( X ; U_A | U_{A^c}, Y_{L+1}, T ) + Σ_{l∈A} I( Y_l ; U_l | X, Y_{L+1}, W, T ) for all A ⊆ {1, ..., L}, A ≠ ∅, and D_k ≥ E[ d_k( Y_0, Y, Y_{L+1}, Z_k ) ] for all k }.

Then define

RD_o = ∩_{X ∈ χ} ∪_{γ ∈ Γ_o} RD_o(X, γ).

The first theorem is our main result.

Theorem 1 The rate-distortion region is contained in cl RD_o. In fact, RD ⊆ RD_o.

Proof. It suffices to show the second statement. Suppose (R, D) is achievable. Let f_1^(n), ..., f_L^(n) be encoders and (φ_1^(n), ..., φ_K^(n)) a decoder satisfying (1). Take any X in χ and augment the sample space to include X^n so that (X^n(t), Y_0^n(t), Y^n(t), Y_{L+1}^n(t)) is independent over t. Next let T be uniformly distributed over {1, ..., n}, independent of X^n, Y_0^n, Y^n, and Y_{L+1}^n. Then define

X = X^n(T),
Y_l = Y_l^n(T) for each l in {0, ..., L+1},
U_l = ( f_l^(n)(Y_l^n), X^n(1 : T-1), Y_{L+1}^n(T^c) ) for each l,
Z_k = Z_k^n(T) for each k,
W = ( X^n(T^c), Y_{L+1}^n(T^c) ).

It can be verified that γ = (U, Z, W, T) is in Γ_o and that, together with Y_0, Y, Y_{L+1}, and X, it satisfies the Markov coupling. It suffices to show that (R, D) is in RD_o(X, γ). First, note that (1) implies

D_k ≥ E[ d_k( Y_0^n(T), Y^n(T), Y_{L+1}^n(T), Z_k^n(T) ) ] for all k,

i.e.,

D_k ≥ E[ d_k( Y_0, Y, Y_{L+1}, Z_k ) ] for all k.

Second, let A ⊆ {1, ..., L}. Then by the cardinality bound on entropy,

n Σ_{l∈A} R_l ≥ H( ( f_l^(n)(Y_l^n) )_{l∈A} ).

Since conditioning reduces entropy, this implies

n Σ_{l∈A} R_l ≥ H( ( f_l^(n)(Y_l^n) )_{l∈A} | ( f_l^(n)(Y_l^n) )_{l∈A^c}, Y_{L+1}^n )
= I( X^n, Y_A^n ; ( f_l^(n)(Y_l^n) )_{l∈A} | ( f_l^(n)(Y_l^n) )_{l∈A^c}, Y_{L+1}^n ).   (2)

By the chain rule for mutual information,

I( X^n, Y_A^n ; ( f_l^(n)(Y_l^n) )_{l∈A} | ( f_l^(n)(Y_l^n) )_{l∈A^c}, Y_{L+1}^n )
= I( X^n ; ( f_l^(n)(Y_l^n) )_{l∈A} | ( f_l^(n)(Y_l^n) )_{l∈A^c}, Y_{L+1}^n )
+ I( Y_A^n ; ( f_l^(n)(Y_l^n) )_{l∈A} | ( f_l^(n)(Y_l^n) )_{l∈A^c}, X^n, Y_{L+1}^n ).   (3)

Applying the chain rule again gives

I( X^n ; ( f_l^(n)(Y_l^n) )_{l∈A} | ( f_l^(n)(Y_l^n) )_{l∈A^c}, Y_{L+1}^n )
= Σ_{t=1}^n I( X^n(t) ; ( f_l^(n)(Y_l^n) )_{l∈A} | ( f_l^(n)(Y_l^n) )_{l∈A^c}, X^n(1 : t-1), Y_{L+1}^n ).

Consider next the second term on the right-hand side of (3). Since X ∈ χ,

I( Y_A^n ; ( f_l^(n)(Y_l^n) )_{l∈A} | ( f_l^(n)(Y_l^n) )_{l∈A^c}, X^n, Y_{L+1}^n ) = Σ_{l∈A} I( Y_l^n ; f_l^(n)(Y_l^n) | X^n, Y_{L+1}^n ).

Applying the chain rule once more gives

I( Y_l^n ; f_l^(n)(Y_l^n) | X^n, Y_{L+1}^n ) = Σ_{t=1}^n I( Y_l^n(t) ; f_l^(n)(Y_l^n) | X^n, Y_l^n(1 : t-1), Y_{L+1}^n ).

But

I( Y_l^n(t) ; f_l^(n)(Y_l^n) | X^n, Y_l^n(1 : t-1), Y_{L+1}^n ) + I( Y_l^n(t) ; Y_l^n(1 : t-1) | X^n, Y_{L+1}^n )
= I( Y_l^n(t) ; f_l^(n)(Y_l^n) | X^n, Y_{L+1}^n ) + I( Y_l^n(t) ; Y_l^n(1 : t-1) | f_l^(n)(Y_l^n), X^n, Y_{L+1}^n ),

and the second term on the left-hand side is zero. Thus

I( Y_l^n(t) ; f_l^(n)(Y_l^n) | X^n, Y_l^n(1 : t-1), Y_{L+1}^n ) ≥ I( Y_l^n(t) ; f_l^(n)(Y_l^n) | X^n, Y_{L+1}^n ).

Substituting the results of these various calculations into (2) gives

Σ_{l∈A} R_l ≥ (1/n) Σ_{t=1}^n [ I( X^n(t) ; ( f_l^(n)(Y_l^n) )_{l∈A} | ( f_l^(n)(Y_l^n) )_{l∈A^c}, X^n(1 : t-1), Y_{L+1}^n )
+ Σ_{l∈A} I( Y_l^n(t) ; f_l^(n)(Y_l^n) | X^n(t), X^n(t^c), Y_{L+1}^n(t), Y_{L+1}^n(t^c) ) ].   (4)

If A^c is nonempty, this can be rewritten as

Σ_{l∈A} R_l ≥ I( X^n(T) ; U_A | U_{A^c}, Y_{L+1}^n(T), T ) + Σ_{l∈A} I( Y_l^n(T) ; U_l | X^n(T), X^n(T^c), Y_{L+1}^n(T), Y_{L+1}^n(T^c), T )
= I( X ; U_A | U_{A^c}, Y_{L+1}, T ) + Σ_{l∈A} I( Y_l ; U_l | X, Y_{L+1}, W, T ).

The case A = {1, ..., L} is handled separately. In this case, observe that

I( X^n(t) ; ( f_l^(n)(Y_l^n) )_{l∈A} | X^n(1 : t-1), Y_{L+1}^n )
= I( X^n(t) ; ( f_l^(n)(Y_l^n) )_{l∈A} | X^n(1 : t-1), Y_{L+1}^n ) + I( X^n(t) ; X^n(1 : t-1), Y_{L+1}^n(t^c) | Y_{L+1}^n(t) )
= I( X^n(t) ; ( f_l^(n)(Y_l^n) )_{l∈A}, X^n(1 : t-1), Y_{L+1}^n(t^c) | Y_{L+1}^n(t) ),

where the second mutual information on the middle line is zero because the source is independent over time. Substituting this into (4) and proceeding as in the A^c ≠ ∅ case completes the proof.

It is worth noting that the proof uses classical techniques. Most of the manipulations in the latter part of the proof can be viewed as versions of the chain rule for mutual information. Since this chain rule holds in abstract spaces [21, (3.6.6)], the proof can be readily extended to more general alphabets. The key step in the proof is the introduction of X^n in (2). Unlike the other auxiliary random variables, X^n does not represent a component of the code. Rather, it is used to aid the analysis by inducing conditional independence among the messages sent by the encoders. This technique of augmenting the source to induce conditional independence was pioneered by Ozarow [22], who used it to solve the Gaussian two-descriptions problem. Wang and Viswanath [23] used it to determine the sum rate of the Gaussian vector multiple-descriptions problem with individual and central decoders. It was also used by Wagner, Tavildar, and Viswanath [24] to solve the Gaussian two-terminal source-coding problem. A step that is similar to (2) appeared in Gel'fand and Pinsker [10] and in later papers on the Gaussian CEO problem [13, 14], although in these works X^n is part of the source, so no augmentation is involved. The significance of conditional independence has long been known in the related field of distributed detection (e.g., [25]). Given the similarity between distributed detection and the multiterminal source-coding problem, one expects conditional independence to play a significant role here as well.
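The effect of such an augmentation can be seen in a two-line computation. In the sketch below (ours, with an arbitrarily chosen crossover probability), Y_1 and Y_2 are observations of a hidden fair bit X through independent binary symmetric channels: the observations are dependent, yet conditioning on X makes them independent, which is exactly the structure the proof engineers via X^n.

```python
import math
from itertools import product

eps = 0.1  # crossover probability (arbitrary choice for illustration)

# Joint pmf of (X, Y1, Y2): X is a fair bit, Y_l = X xor N_l, N_l ~ Bernoulli(eps), independent.
joint = {}
for x, n1, n2 in product((0, 1), repeat=3):
    pr = 0.5 * (eps if n1 else 1 - eps) * (eps if n2 else 1 - eps)
    key = (x, x ^ n1, x ^ n2)
    joint[key] = joint.get(key, 0.0) + pr

def marg(idx):
    m = {}
    for k, v in joint.items():
        key = tuple(k[i] for i in idx)
        m[key] = m.get(key, 0.0) + v
    return m

def mi_y1_y2():
    """I(Y1; Y2) in nats: strictly positive, so the observations are dependent."""
    p12, p1, p2 = marg((1, 2)), marg((1,)), marg((2,))
    return sum(v * math.log(v / (p1[(a,)] * p2[(b,)])) for (a, b), v in p12.items())

def cmi_y1_y2_given_x():
    """I(Y1; Y2 | X) in nats: zero, conditioning on the hidden bit decouples them."""
    p012, p01, p02, p0 = marg((0, 1, 2)), marg((0, 1)), marg((0, 2)), marg((0,))
    return sum(v * math.log(v * p0[(x,)] / (p01[(x, a)] * p02[(x, b)]))
               for (x, a, b), v in p012.items())
```

Here I(Y_1; Y_2) equals log 2 minus the binary entropy of 2·eps·(1−eps), while I(Y_1; Y_2 | X) vanishes.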
Indeed, most conclusive results for the multiterminal source-coding problem require a conditional independence assumption [10, 18, 12, 13, 14]. The motivation for introducing X^n is that it allows one to apply the approach used in these works to problems that lack conditional independence.

We do not consider the problem of computing RD_o in this paper. Note that we have not specified the alphabet sizes of the auxiliary random variables U, W, and T. As such, the outer bound provided by Theorem 1 is not computable [3, p. 259] in the present form. One might question the utility of an outer bound that cannot be computed. The remainder of the paper, however, will show that the bound is still useful as a theoretical tool. In addition, cardinality

bounds might be found later, although obtaining such bounds appears to be more difficult in this case than for related bounds. It should be mentioned that the time-sharing variable T is unnecessary; it can be absorbed into the other variables. We have included it to ease the comparison with existing inner and outer bounds, to which we turn next.

3 Relation to Existing Bounds

The coding scheme described in the introduction gives rise to the following inner bound on the rate-distortion region.

Definition 3 Let Γ^BT_i denote the set of finite-alphabet random variables γ = (U_1, ..., U_L, Z_1, ..., Z_K, T) satisfying

(i′) T is independent of (Y_0, Y, Y_{L+1}),

(ii′) U_l → (Y_l, T) → (Y_0, Y_{l^c}, Y_{L+1}, U_{l^c}) for all l, and

(iii′) (Y_0, Y) → (U, Y_{L+1}, T) → Z.

Then define

RD^BT_i(γ) = { (R, D) : Σ_{l∈A} R_l ≥ I( Y_A ; U_A | U_{A^c}, Y_{L+1}, T ) for all A, and D_k ≥ E[ d_k( Y_0, Y, Y_{L+1}, Z_k ) ] for all k }.

Finally, let

RD^BT_i = ∪_{γ ∈ Γ^BT_i} RD^BT_i(γ).

Proposition 1 ([15, 16]) RD^BT_i ⊆ RD.

In Appendix F we show that RD^BT_i is in fact closed. We call RD^BT_i the Berger-Tung [15, 16] inner bound since, although these authors prove a bound that is less general than the one given here, their proof can be extended to prove Proposition 1. See Chen et al. [26] or Gastpar [12] for recent sketches of the proof that accommodate some of the generalizations included here.

To understand the difference between RD^BT_i and RD_o, suppose that (U, Z, W, T)

is in Γ_o and W is deterministic. Then (U, Z, T) is in Γ^BT_i, and for all A ⊆ {1, ..., L} and all X ∈ χ,

I( X ; U_A | U_{A^c}, Y_{L+1}, T ) + Σ_{l∈A} I( Y_l ; U_l | X, Y_{L+1}, W, T )
= I( X ; U_A | U_{A^c}, Y_{L+1}, T ) + I( Y_A ; U_A | U_{A^c}, X, Y_{L+1}, W, T )
= I( X ; U_A | U_{A^c}, Y_{L+1}, T ) + I( Y_A ; U_A | U_{A^c}, X, Y_{L+1}, T )
= I( X, Y_A ; U_A | U_{A^c}, Y_{L+1}, T )
= I( Y_A ; U_A | U_{A^c}, Y_{L+1}, T ).

Thus

RD_o(X, (U, Z, W, T)) = RD^BT_i((U, Z, T)).   (5)

Conversely, if (U, Z, T) is in Γ^BT_i, then for any deterministic W, (U, Z, W, T) is in Γ_o and (5) holds for any X. It follows that RD^BT_i is equal to RD_o with W restricted to be deterministic in the definition of Γ_o. In particular, to obtain coincident inner and outer bounds, it suffices to show that restricting W to be deterministic in the definition of Γ_o does not reduce RD_o. We will see later how this can be accomplished in several examples. Of course, it is not possible for the problem solved by Körner and Marton [9], since they show that the inner bound is not tight in that case.

The best outer bound in the literature is the following.

Definition 4 Let Γ^BT_o denote the set of finite-alphabet random variables γ = (U, Z, T) satisfying

(i′) T is independent of (Y_0, Y, Y_{L+1}),

(ii′) U_l → (Y_l, T) → (Y_0, Y_{l^c}, Y_{L+1}) for all l, and

(iii′) (Y_0, Y) → (U, Y_{L+1}, T) → Z.

Then let

RD^BT_o(γ) = { (R, D) : Σ_{l∈A} R_l ≥ I( Y ; U_A | U_{A^c}, Y_{L+1}, T ) for all A, and D_k ≥ E[ d_k( Y_0, Y, Y_{L+1}, Z_k ) ] for all k }.

Finally, let

RD^BT_o = ∪_{γ ∈ Γ^BT_o} RD^BT_o(γ).

Proposition 2 ([15, 16]) RD ⊆ RD^BT_o.

As with the inner bound, Berger [15] and Tung [16] prove the result for a model that is more restrictive than the one considered here, but their proof can be extended to this setup (cf. [26, 12]). The difference between RD^BT_i and RD^BT_o is that condition (ii′) has been weakened in the latter. We next show that the Berger-Tung outer bound is subsumed by the one in the previous section.

Proposition 3 RD_o ⊆ RD^BT_o.

Proof. First observe that for any (U, Z, W, T) in Γ_o,

U_l → (Y_l, W, T) → (Y_0, Y_{l^c}, Y_{L+1}) for each l.

Since (Y_0, Y_{l^c}, Y_{L+1}) → (Y_l, T) → W, it follows that

U_l → (Y_l, T) → (Y_0, Y_{l^c}, Y_{L+1}) for each l.

Thus (U, Z, T) is in Γ^BT_o and, in particular,

RD_o(Y, (U, Z, W, T)) = RD^BT_o((U, Z, T)).

It follows that

RD_o ⊆ ∪_{γ ∈ Γ_o} RD_o(Y, γ) ⊆ ∪_{γ ∈ Γ^BT_o} RD^BT_o(γ) = RD^BT_o.

The proof reveals that RD_o improves upon RD^BT_o in two ways. The first is that RD_o allows for optimization over X, while RD^BT_o effectively requires the choice X = Y. The second is that Γ_o is smaller than Γ^BT_o, in the sense that if (U, Z, W, T) is in Γ_o then (U, Z, T) is in Γ^BT_o. The balance of this section is devoted to showing that these improvements make the containment in Proposition 3 strict in some cases. As the reader will see, the former difference is entirely responsible for the gap that we expose between the two bounds in our examples. We hasten to add, however, that the latter improvement is not an empty one, in that Anantharam and Borkar [27] have shown that there can exist a (U, Z, T) in Γ^BT_o with the property that there does not exist a W such that (U, Z, W, T) is in Γ_o. It is interesting to note that the Anantharam-Borkar example arose independently of this work in the context of distributed stochastic control.

We will exhibit three examples for which RD^BT_o strictly contains RD_o. The first is rather contrived and can be solved from first principles. It is included to illustrate the difference between the two bounds.

3.1 Toy Example

Let Y_11, Y_12, Y_21, and Y_22 be independent and identically distributed (i.i.d.)
random variables, uniformly distributed over {0, 1}. Consider two encoders (L = 2) with Y_1 = (Y_11, Y_12) and Y_2 = (Y_21, Y_22) (there is no hidden source

or side information in this example). We have a single distortion constraint (K = 1) with 𝒵_1 = {0, 1}^2 and

d_1(Y, Z) = 0 if Z = (Y_11, Y_21) or Z = (Y_12, Y_22); 1 otherwise.

In words, the decoder attempts to guess either the first or the second coordinate of both encoders' observations. It incurs a distortion of zero if it correctly guesses the same coordinate of the two sources and one otherwise. Note that the decoder need not declare which coordinate it is attempting to guess.

Proposition 4 For this problem,

RD_o | {D_1 = 0} ⊆ { (R_1, R_2) : R_1 ≥ log 2, R_2 ≥ log 2 }.

Proof. Suppose (R_1, R_2, ε) is in RD_o, and ε ≤ 1/2. Observe that since Y_1 and Y_2 are independent, deterministic random variables are in χ. Thus there exists γ in Γ_o such that

ε ≥ E[ d_1(Y, Z_1) ],
R_1 ≥ I( Y_1 ; U_1 | W, T ),
R_2 ≥ I( Y_2 ; U_2 | W, T ).

By condition (ii) defining Γ_o,

U_1 → (Y_1, W, T) → (Y_2, U_2).   (6)

Since Y_2 is independent of (Y_1, W, T) in this example, Y_2 must be independent of (Y_1, U_1, W, T). Thus

I( Y_1 ; U_1 | W, T ) = I( Y_1 ; U_1 | W, T, Y_21 ≠ Y_22 ).   (7)

Likewise, Y_1 is independent of (Y_2, U_2, W, T), and hence, given (W, T), Y_1 is independent of (Y_2, U_2). This observation combined with (6) implies

(Y_1, U_1) → (W, T) → (Y_2, U_2).

In particular, Y_1 → (U_1, W, T) → (Y_2, U_2). By condition (iii) defining Γ_o,

Y_1 → (Y_2, U_1, U_2, W, T) → (Y_2, Z_1).

These last two chains imply that Y_1 → (U_1, W, T) → (Y_2, Z_1). Thus, conditioned on (W, T) and the event {Y_21 ≠ Y_22}, we have Y_1 → U_1 → (Y_2, Z_1). It follows that

I( Y_1 ; U_1 | W, T, Y_21 ≠ Y_22 )
≥ I( Y_1 ; Y_2, Z_1 | W, T, Y_21 ≠ Y_22 )
= I( Y_1 ; Y_2, Z_1, d_1(Y, Z_1) | W, T, Y_21 ≠ Y_22 ) − I( Y_1 ; d_1(Y, Z_1) | W, T, Y_2, Z_1, Y_21 ≠ Y_22 )
≥ H( Y_1 | W, T, Y_21 ≠ Y_22 ) − H( Y_1 | Y_2, Z_1, d_1(Y, Z_1), W, T, Y_21 ≠ Y_22 ) − H( d_1(Y, Z_1) | W, T, Y_2, Z_1, Y_21 ≠ Y_22 ),

since d_1(Y, Z_1) is a function of Y and Z_1. Next, observe that on the events {Y_21 ≠ Y_22} and {d_1(Y, Z_1) = 0}, Y_2 and Z_1 together must reveal one of the two bits of Y_1. Thus

H( Y_1 | Y_2, Z_1, d_1(Y, Z_1) = 0, W, T, Y_21 ≠ Y_22 ) ≤ log 2.

Continuing our chain of inequalities,

I( Y_1 ; U_1 | W, T, Y_21 ≠ Y_22 )
≥ 2 log 2 − log 2 · Pr( d_1(Y, Z_1) = 0 | Y_21 ≠ Y_22 ) − (2 log 2) Pr( d_1(Y, Z_1) = 1 | Y_21 ≠ Y_22 ) − H( d_1(Y, Z_1) | W, T, Y_2, Z_1, Y_21 ≠ Y_22 )
≥ log 2 − (2 log 2) Pr( d_1(Y, Z_1) = 1 | Y_21 ≠ Y_22 ) − H( d_1(Y, Z_1) | Y_21 ≠ Y_22 ).   (8)

Now

(1/2) H( d_1(Y, Z_1) | Y_21 ≠ Y_22 )
≤ (1/2) H( d_1(Y, Z_1) | Y_21 ≠ Y_22 ) + (1/2) H( d_1(Y, Z_1) | Y_21 = Y_22 )
= H( d_1(Y, Z_1) | 1{Y_21 = Y_22} )
≤ H( d_1(Y, Z_1) )
≤ h(ε),

where, here and throughout, h(·) is the binary entropy function with natural logarithms. We conclude that

H( d_1(Y, Z_1) | Y_21 ≠ Y_22 ) ≤ 2 h(ε).

Similarly,

Pr( d_1(Y, Z_1) = 1 | Y_21 ≠ Y_22 ) ≤ 2ε.

Substituting these two observations into (8) and recalling (7) yields

I( Y_1 ; U_1 | W, T ) ≥ log 2 − 4ε log 2 − 2 h(ε).

By symmetry, I( Y_2 ; U_2 | W, T ) must satisfy the same inequality. This implies the desired conclusion.

It is easy to see that the point (log 2, log 2, 0) is achievable. Using rate log 2, each encoder can send, say, the first coordinate of its observation. The decoder can then realize zero distortion by repeating the two bits it receives. This fact and the above proposition together imply

RD_o | {D_1 = 0} = RD | {D_1 = 0} = { (R_1, R_2) : R_1 ≥ log 2, R_2 ≥ log 2 }.

In particular, RD_o is tight in the zero-distortion limit. In contrast, we show next that the Berger-Tung outer bound is not.

Proposition 5 The point ((3/4) log 2, (3/4) log 2, 0) is contained in RD^BT_o.
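The mutual-information values used in the proof of Proposition 5 below (which takes W uniform over the two coordinates, U_l the W-th coordinate of Y_l, and Z_1 = (U_1, U_2)) can be verified by brute-force enumeration. This check is ours, not part of the paper; values are in nats.

```python
import math
from itertools import product

log2 = math.log(2)

# Joint pmf of (Y, U1, U2): Y = (Y11, Y12, Y21, Y22) i.i.d. fair bits,
# W uniform over {1, 2}, U1 = Y_{1W}, U2 = Y_{2W}; W is marginalized out.
joint = {}
for y11, y12, y21, y22, w in product((0, 1), (0, 1), (0, 1), (0, 1), (1, 2)):
    u1 = y11 if w == 1 else y12
    u2 = y21 if w == 1 else y22
    key = ((y11, y12, y21, y22), u1, u2)
    joint[key] = joint.get(key, 0.0) + 1.0 / 32

def marg(idx):
    m = {}
    for k, v in joint.items():
        key = tuple(k[i] for i in idx)
        m[key] = m.get(key, 0.0) + v
    return m

def mi(a, b):
    """Mutual information between coordinate tuples a and b of (Y, U1, U2)."""
    pab, pa, pb = marg(a + b), marg(a), marg(b)
    return sum(v * math.log(v / (pa[k[:len(a)]] * pb[k[len(a):]]))
               for k, v in pab.items() if v > 0)

i_both = mi((0,), (1, 2))          # I(Y; U1, U2) = (5/4) log 2
i_one = mi((0,), (1,))             # I(Y; U1)     = (1/2) log 2
i_cond = i_both - mi((0,), (2,))   # I(Y; U1 | U2) = I(Y; U1, U2) - I(Y; U2) = (3/4) log 2
```

The enumeration also confirms that (U_1, U_2) always equals one of the two coordinate pairs of Y, so the distortion is identically zero.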

Proof. Let the random variable W be uniformly distributed over {1, 2}, and let U_1 = Y_{1W} and U_2 = Y_{2W}; that is, each U_l is the W-th coordinate of Y_l. Let Z_1 = (U_1, U_2). It is straightforward to verify that (U_1, U_2, Z_1) is in Γ^BT_o (the time-sharing random variable T is unneeded and can be taken to be constant). Next note that E[d_1(Y, Z_1)] = 0. Finally, one can compute

I( Y ; U_1, U_2 ) = (5/4) log 2

and

I( Y ; U_1 ) = (1/2) log 2.

This implies that

I( Y ; U_1 | U_2 ) = I( Y ; U_2 | U_1 ) = (3/4) log 2 > (5/8) log 2,

so the point ((3/4) log 2, (3/4) log 2, 0) satisfies the two single-rate constraints with equality and the sum-rate constraint with slack. The conclusion follows.

3.2 Binary Erasure CEO Problem

Here Y_0 is uniformly distributed over {1, −1}, and Y_l = N_l Y_0 for l in {1, ..., L}, where N_1, ..., N_L are i.i.d. with 0 < Pr(N_l = 0) = p < 1 and Pr(N_l = 1) = 1 − p. Let 𝒵_1 = {1, 0, −1}. We will assume that there is no side information and that the decoder is only interested in reproducing the hidden process Y_0. We measure the fidelity of its reproduction using a family of distortion measures, {d_λ}_{λ>0}, where

d_λ(Y_0, Y, Z) = 0 if Y_0 = Z; 1 if Z = 0; λ otherwise.

We are particularly interested in the large-λ limit. In this regime, d_λ approximates the erasure distortion measure [2, p. 370],

d_∞(Y_0, Y, Z) = 0 if Y_0 = Z; 1 if Z = 0; ∞ otherwise.

We use a finite approximation because an infinite distortion measure causes difficulties in the proof of the Berger-Tung inner bound.

This example is motivated by the following problem arising in energy-limited sensor networks. We seek to monitor a remote source, Y_0. To this end, we deploy an array of sensors, each of which is capable of observing the source with negligible probability of error. To lengthen the lifetime of the network, each sensor spends a fraction p of the time in a low-power sleep state. We assume that the sensors cycle between the awake and sleep states independently of each

other and on a faster time scale than the sampling; at each discrete time, each sensor sleeps with probability p, independently of the other sensors and the past. Sensors do not make any observations while they are asleep, resulting in erasures. We permit the coding process to introduce additional erasures, but not errors, yielding the erasure distortion measure. What sum rate is required in order for the decoder to reproduce a fraction 1 − D of the Y_0^n variables while almost never making an error? Of course, D must satisfy D ≥ p^L. Define

R(D, λ) = inf { Σ_{l=1}^L R_l : (R_1, ..., R_L, D) ∈ RD(λ) },

where RD(λ) is the rate-distortion region when the distortion measure is d_λ. We define R_o(D, λ) and R^BT_i(D, λ) analogously. In Appendix A, we show that if p^L ≤ D ≤ 1, then

lim_{λ→∞} R^BT_i(D, λ) ≤ (1 − D) log 2 + L [ h( D^{1/L} ) − (1 − p) h( (D^{1/L} − p) / (1 − p) ) ].   (9)

In Appendix B, we show that the quantity on the right-hand side is also a lower bound to lim_{λ→∞} R_o(D, λ). Hence it must equal lim_{λ→∞} R(D, λ). That is, the improved outer bound and the Berger-Tung inner bound together yield a conclusive result for the sum rate of the binary erasure CEO problem. Evidently this problem was previously unsolved. In Appendix C, we show that RD^BT_o contains points with a strictly smaller sum rate in general. Fig. 4 shows the correct sum rate for p = 0.5 and several values of L.

3.3 Gaussian CEO Problem [19, 28, 13, 14]

We turn to a continuous example. Here Y_0, ..., Y_L are jointly Gaussian and Y_1, ..., Y_L are conditionally independent given Y_0. For l ≥ 1, let us write Y_l = Y_0 + N_l, where Y_0, N_1, ..., N_L are mutually independent and E[N_l^2] = σ_l^2 > 0 for all l. We will denote the variance of Y_0 by σ_0^2. Again there is no side information, and the decoder is only interested in reproducing the hidden process Y_0:

d_1(Y_0, Y, Z) = (Y_0 − Z)^2.

The rate-distortion region for this problem was recently found by Oohama [20, 13] and independently by Prabhakaran, Tse, and Ramchandran [14]. The two proofs are nearly the same, and build on earlier work of Oohama [28].
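Returning briefly to the binary erasure CEO problem, the sum-rate expression on the right-hand side of (9) is easy to evaluate numerically. The sketch below is ours, not from the paper; it computes the expression in nats, with the per-encoder quantity D^{1/L} playing the role of each encoder's effective erasure probability.

```python
import math

def hb(x):
    """Binary entropy in nats, with hb(0) = hb(1) = 0."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log(x) - (1 - x) * math.log(1 - x)

def erasure_ceo_sum_rate(D, p, L):
    """Right-hand side of (9): the optimal sum rate (nats per sample) of the
    binary erasure CEO problem with sleep probability p, L encoders, and
    tolerable erasure fraction D, for p**L <= D <= 1."""
    if not (p ** L <= D <= 1.0):
        raise ValueError("need p**L <= D <= 1")
    q = D ** (1.0 / L)  # effective per-encoder erasure probability
    return (1 - D) * math.log(2) + L * (hb(q) - (1 - p) * hb((q - p) / (1 - p)))
```

For instance, with p = 1/2 and L = 2, tolerating D = 1 costs nothing, while demanding the minimum D = p^L = 1/4 (no erasures beyond the sensors' own) costs (3/4) log 2 + 2 log 2 nats per sample; the rate decreases as the erasure tolerance grows, in line with Fig. 4.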
The primary contribution is the converse result, which makes heavy use of the entropy power inequality [2, Theorem 16.6.3]. The Berger-Tung inner bound is used for achievability. It is straightforward to extend Theorem 1 to this continuous setting. A statement of the continuous version is given in Appendix D, where we also use the techniques of Oohama [13] and Prabhakaran, Tse, and Ramchandran [14] to prove the following.

Figure 4: Sum rate (in bits per sample) for the binary erasure CEO problem with p = 1/2, plotted against the tolerable fraction of erasures D for several values of L (L = 1, 2, 3, ...).

Proposition 6 For the Gaussian CEO problem,

RD_o ⊆ { (R_1, ..., R_L, D) ∈ R_+^{L+1} : there exists (r_1, ..., r_L) ∈ R_+^L such that, for all A,

Σ_{l∈A} R_l ≥ (1/2) log^+ [ 1 / ( D ( 1/σ_0^2 + Σ_{l∈A^c} (1 − exp(−2 r_l)) / σ_l^2 ) ) ] + Σ_{l∈A} r_l },   (10)

where log^+ x = max(log x, 0).

Since this expression equals RD [13], we conclude that RD_o is tight in this example. It also follows that the converse result of Oohama [13] and Prabhakaran, Tse, and Ramchandran [14] is a consequence of the outer bound provided in this paper. This does not imply, however, that the task of proving the converse result is made any easier by our bound. In fact, comparing Appendix D to the original works shows that proving Proposition 6 is as formidable a task as proving the converse result unaided. But this is still an improvement over the Berger-Tung outer bound, the closure of which, as we show in Appendix E, contains points outside the rate-distortion region. We end this section by mentioning that Oohama's [13] converse is actually more general than the result described here, in that Oohama permits one of the encoders to make noise-free observations (i.e., σ_l^2 = 0). Comparing Oohama's proof to Appendix D shows that the outer bound supplied in this paper also recovers this more general result.
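In the symmetric case (σ_l^2 = σ^2 for all l), the sum-rate bound implied by (10) is easy to evaluate: reading the constraint for A = ∅, the log^+ forces 1/D ≤ 1/σ_0^2 + Σ_l (1 − exp(−2 r_l))/σ_l^2, and the A = {1, ..., L} constraint is then smallest when that inequality is tight with a common value of r. The sketch below is ours, not from the paper, and it assumes that the symmetric choice of r is the optimizing one; all rates are in nats.

```python
import math

def gaussian_ceo_sum_rate(var0, var_noise, L, D):
    """Minimum sum rate (nats per sample) implied by (10) in the symmetric
    case sigma_l^2 = var_noise for all l, assuming a common r_l = r is
    optimal. Feasible distortions satisfy 1/(1/var0 + L/var_noise) <= D."""
    if D >= var0:
        return 0.0  # the trivial estimate Z = 0 already meets the constraint
    # Make the feasibility constraint tight:
    # 1/D = 1/var0 + L * (1 - exp(-2 r)) / var_noise.
    arg = 1.0 - var_noise * (1.0 / D - 1.0 / var0) / L
    if arg <= 0.0:
        raise ValueError("distortion D unachievable for these parameters")
    r = -0.5 * math.log(arg)
    # Evaluate the A = {1, ..., L} constraint at this r.
    return L * r + 0.5 * math.log(var0 / D)
```

As a sanity check, with σ_0^2 = σ^2 = 1, L = 2, and D = 1/2 the bound evaluates to (3/2) log 2, and as the observation noise vanishes it approaches the centralized rate (1/2) log(σ_0^2 / D).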

4 Recovery of Discrete Converse Results

Having seen that the new outer bound recovers the converse of Oohama [31] and Prabhakaran, Tse, and Ramchandran [14] for the Gaussian CEO problem, we show in this final section that it also recovers the converse results for the discrete problems of Slepian and Wolf [1], Wyner [6], Ahlswede and Körner [7], Wyner and Ziv [8], Gelfand and Pinsker [10], Berger and Yeung [11], and Gastpar [12]. The outer bound also recovers the converse result for the problem studied by Körner and Marton [9], although the proof of this fact is not as interesting. We shall therefore focus on the others.

To recover these converse results, we shall use the following conclusive result for a special case of the problem. Suppose that there exists a function $g$ on $\mathcal{Y}_0$ such that $Y_1, \ldots, Y_L$ are conditionally independent given $(g(Y_0), Y_{L+1})$. Also let $Z_1 = Y_0$ and

$$d_1(Y_0, \mathbf{Y}, Y_{L+1}, Z_1) = \begin{cases} 0 & \text{if } Z_1 = Y_0 \\ 1 & \text{otherwise.} \end{cases}$$

We make no other assumptions about the problem. We would like to characterize the set $RD \cap \{D_1 = 0\}$. In words, conditioned on the side information and some function of the hidden variable, the observations are independent, and the hidden variable must be reproduced losslessly. Note that $RD \cap \{D_1 = 0\}$ will be empty unless $H(Y_0 | \mathbf{Y}, Y_{L+1}) = 0$. Gelfand and Pinsker [10] refer to this condition as completeness of observations.

Proposition 7 For this problem,

$$RD_o \cap \{D_1 = 0\} = RD_{BT\,i} \cap \{D_1 = 0\} = RD \cap \{D_1 = 0\} \qquad (11)$$
$$= \bigcup_{\gamma \in \Gamma_{BT\,i}} RD_{BT\,i}(\gamma) \cap \{D_1 = 0\}. \qquad (12)$$

Proof. To show (11), it suffices to show that $RD_o \cap \{D_1 = 0\}$ is contained in $RD_{BT\,i} \cap \{D_1 = 0\}$. Suppose $(\mathbf{R}, \epsilon, D_2, \ldots, D_K)$ is a point in $RD_o$ and $\epsilon \le 1/2$. By choosing $X = g(Y_0)$ in Definition 2, we see that there exists $(\mathbf{U}, Z_1, W, T)$ in $\Gamma_o$ such that

$$\Pr(Z_1 \ne Y_0) \le \epsilon,$$
$$E[d_k(Y_0, \mathbf{Y}, Y_{L+1}, Z_k)] \le D_k \quad \text{for all } k \ge 2,$$

and for all $A$,

$$\sum_{\ell \in A} R_\ell \ge I(g(Y_0); U_A | U_{A^c}, Y_{L+1}, T) + \sum_{\ell \in A} I(Y_\ell; U_\ell | g(Y_0), Y_{L+1}, W, T).$$

Now

$$I(g(Y_0); U_A | U_{A^c}, Y_{L+1}, T) = H(g(Y_0) | U_{A^c}, Y_{L+1}, T) - H(g(Y_0) | \mathbf{U}, Y_{L+1}, T)$$
$$\ge H(g(Y_0) | U_{A^c}, Y_{L+1}, W, T) - H(g(Y_0) | g(Z_1)),$$

where we have used the fact that $g(Y_0) \to Y_0 \to (\mathbf{U}, Y_{L+1}, T) \to Z_1 \to g(Z_1)$ is a Markov chain. By Fano's inequality [3, Lemma 1.3.8],

$$H(g(Y_0) | g(Z_1)) \le h(\epsilon) + \epsilon \log(|\mathcal{Y}_0|).$$

Thus

$$I(g(Y_0); U_A | U_{A^c}, Y_{L+1}, T) \ge H(g(Y_0) | U_{A^c}, Y_{L+1}, W, T) - h(\epsilon) - \epsilon \log(|\mathcal{Y}_0|)$$
$$\ge I(g(Y_0); U_A | U_{A^c}, Y_{L+1}, W, T) - h(\epsilon) - \epsilon \log(|\mathcal{Y}_0|).$$

It follows that

$$\sum_{\ell \in A} \big[ R_\ell + h(\epsilon) + \epsilon \log(|\mathcal{Y}_0|) \big] \ge I(g(Y_0); U_A | U_{A^c}, Y_{L+1}, W, T) + \sum_{\ell \in A} I(Y_\ell; U_\ell | g(Y_0), Y_{L+1}, W, T)$$
$$= I(g(Y_0); U_A | U_{A^c}, Y_{L+1}, W, T) + I(Y_A; U_A | U_{A^c}, g(Y_0), Y_{L+1}, W, T)$$
$$= I(g(Y_0), Y_A; U_A | U_{A^c}, Y_{L+1}, W, T)$$
$$\ge I(Y_A; U_A | U_{A^c}, Y_{L+1}, W, T).$$

If we now define $\tilde{T} = (W, T)$, it is evident that $(\mathbf{U}, Z_1, \tilde{T})$ is in $\Gamma_{BT\,i}$ and the point

$$\big(R_1 + h(\epsilon) + \epsilon \log(|\mathcal{Y}_0|), \ldots, R_L + h(\epsilon) + \epsilon \log(|\mathcal{Y}_0|), \epsilon, D_2, \ldots, D_K\big)$$

is in $RD_{BT\,i}$. This implies that $RD_o \cap \{D_1 = 0\} \subseteq RD_{BT\,i} \cap \{D_1 = 0\}$, which proves (11). To prove (12), it suffices to show that $RD_{BT\,i}$ is closed. This is shown in Appendix F.

The differences between this result and that of Gelfand and Pinsker [10] are numerous but minor. The most visible differences are that Gelfand and Pinsker's model does not allow for side information at the decoder or distortion constraints beyond the one on $Y_0$. Indeed, the region given here reduces to theirs when these extensions are ignored. Thus this result seems to be a generalization of theirs, albeit a trivial one since their proof can be modified to handle these extensions. A closer comparison, however, reveals that they define the rate region more stringently than we do here. Thus, our result does not recover theirs, strictly speaking, although it does recover the converse component of their result since our definitions are weaker.
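The only slack in this argument is the Fano penalty $h(\epsilon) + \epsilon \log(|\mathcal{Y}_0|)$ added to each rate, and it vanishes as the error probability $\epsilon$ does, which is what lets the argument close at $D_1 = 0$. A quick numerical illustration of our own (the alphabet size 4 is an arbitrary assumption; natural logarithms):

```python
import math

def fano_slack(eps, alphabet_size):
    """Per-rate penalty h(eps) + eps*log|Y0| from the proof of Proposition 7."""
    if eps in (0.0, 1.0):
        h = 0.0
    else:
        h = -eps * math.log(eps) - (1 - eps) * math.log(1 - eps)
    return h + eps * math.log(alphabet_size)

# The penalty shrinks to zero as the decoding-error probability does.
slacks = [fano_slack(10.0 ** -k, alphabet_size=4) for k in range(1, 6)]
```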

The reason for including side information and additional distortion constraints in the model is that they enable us to also recover the converse results for the other problems mentioned earlier. For instance, Gastpar [12] considers the problem of reproducing the observations individually, subject to separate distortion constraints, under the assumption that the decoder is provided with side information that makes the observations conditionally independent. His converse result can be recovered by setting $Y_0 = Y_{L+1}$. It is easily verified that, under this condition, our region coincides with his. The classical Wyner-Ziv problem [8] can be viewed as Gastpar's problem with a single encoder ($L = 1$). So that converse result is recovered too.

Berger and Yeung [11] solve the two-encoder problem in which the observations are to be reproduced individually, with at least one of the two being reproduced losslessly. In our notation, this corresponds to setting $L = 2$ and $Y_1 = Y_0$. Note that our conditional independence assumption necessarily holds in this case. To see that under these assumptions, our region reduces to theirs, suppose $(R_1, R_2, D_2) \in RD_{BT\,i}(\gamma) \cap \{D_1 = 0\}$ for some $\gamma \in \Gamma_{BT\,i}$. Then

$$R_1 \ge I(Y_1; U_1 | U_2, T) = H(Y_1 | U_2, T). \qquad (13)$$

Also,

$$R_2 \ge I(Y_2; U_2 | U_1, T) \ge I(Y_2; U_2 | U_1, Y_1, T) = I(Y_2; U_2 | Y_1, T), \qquad (14)$$

where we have used the fact that $U_2 \to (Y_2, U_1, T) \to Y_1$ is a Markov chain (see Cover and Thomas [2, p. 33]). Finally,

$$R_1 + R_2 \ge I(Y_1, Y_2; U_1, U_2 | T) = I(Y_1; U_1, U_2 | T) + I(Y_2; U_1, U_2 | Y_1, T)$$
$$\ge H(Y_1) + I(Y_2; U_2 | Y_1, T). \qquad (15)$$

It is now evident that the two regions are identical (cf. [11, p. 230]). Thus the converse result of Berger and Yeung is a consequence of the outer bound provided here.

The classical problem of source coding with side information [6, 7] can be viewed as a special case of the Berger-Yeung problem in which $D_2$ exceeds the maximum value of $d_2$, the distortion measure for $Y_2$. Berger and Yeung demonstrate how, under this assumption, the region described above reduces to the one given by Wyner [6] and Ahlswede and Körner [7].
Ipso facto, the converse result for this problem is also recovered.

This paper ends the way it began, with the result of Slepian and Wolf [1]. Here the aim is to losslessly reproduce all of the observations. For two encoders ($L = 2$), this can be viewed as a special case of the problem of Berger and

Yeung. These authors show how the region described in Eqs. (13)-(15) reduces to the one given at the beginning of the paper. The result for more than two encoders can be viewed as a special case of Proposition 7 in which $Y_0 = \mathbf{Y}$. In this case, if $\mathbf{R} \in RD_o \cap \{D_1 = 0\}$, then for any $A$,

$$\sum_{\ell \in A} R_\ell \ge I(Y_A; U_A | U_{A^c}, T) = H(Y_A | U_{A^c}, T) \ge H(Y_A | Y_{A^c}, T),$$

since $Y_A \to (Y_{A^c}, T) \to (U_{A^c}, T)$ is a Markov chain. Now $\mathbf{Y}$ is independent of $T$, so

$$\sum_{\ell \in A} R_\ell \ge H(Y_A | Y_{A^c}),$$

which is the well-known rate region for this problem. Thus the converse of Slepian and Wolf is also recovered. For this result, as with the others, our outer bound dispenses with the need to prove a custom converse coding theorem. In fact, Proposition 7 can be viewed as unifying all of the results in this discussion, assuming one is willing to ignore the discrepancies in the definition of the rate-distortion region mentioned earlier.

A Sum-Rate Achievability for the Binary Erasure CEO Problem

Showing that a particular rate-distortion vector is achievable using the Berger-Tung inner bound is mostly a matter of finding the proper test channels $Y_\ell \to U_\ell$ for the encoders. To prove (9), we use binary erasure test channels that are identically distributed across the encoders. In this appendix and the next two, the notation is drawn from Section 3.2.

Lemma 1 For any $p^L \le D \le 1$,

$$\lim_{\lambda \to \infty} R_{BT\,i}(D, \lambda) \le (1 - D)\log 2 + L\left[ h\big(D^{1/L}\big) - (1-p)\, h\!\left(\frac{D^{1/L} - p}{1-p}\right) \right].$$

Proof. Fix $D$ and let $\tilde{N}_1, \ldots, \tilde{N}_L$ be i.i.d., independent of $Y_0, \ldots, Y_L$, with

$$\Pr(\tilde{N}_\ell = 0) = \frac{D^{1/L} - p}{1 - p} \qquad \Pr(\tilde{N}_\ell = 1) = \frac{1 - D^{1/L}}{1 - p}.$$

For $\ell$ in $\{1, \ldots, L\}$, let $U_\ell = Y_\ell \tilde{N}_\ell$. Then let

$$Z_1 = \mathrm{sgn}\Big( \sum_{\ell=1}^{L} U_\ell \Big) := \begin{cases} -1 & \text{if } \sum_{\ell=1}^{L} U_\ell < 0 \\ 0 & \text{if } \sum_{\ell=1}^{L} U_\ell = 0 \\ 1 & \text{otherwise.} \end{cases}$$

Then for all $\lambda$,

$$E[d_\lambda(Y_0, \mathbf{Y}, Z_1)] = \Pr(U_\ell = 0 \text{ for all } \ell) = [\Pr(U_1 = 0)]^L = D.$$

Thus $(\mathbf{R}, D)$ is contained in $RD_{BT\,i}$ for all $\lambda$ if for all $A \subseteq \{1, \ldots, L\}$,

$$\sum_{\ell \in A} R_\ell \ge I(Y_A; U_A | U_{A^c}). \qquad (16)$$

The rate vectors satisfying this collection of inequalities are known to form a contrapolymatroid [29, 26]. As such, there exist rate vectors $\mathbf{R}$ satisfying (16) such that

$$\sum_{\ell=1}^{L} R_\ell = I(\mathbf{Y}; \mathbf{U}).$$

In particular, this holds for any vertex of (16) [29, 26]. Now

$$I(\mathbf{Y}; \mathbf{U}) = I(Y_0, \mathbf{Y}; \mathbf{U}) = I(Y_0; \mathbf{U}) + I(\mathbf{Y}; \mathbf{U} | Y_0) = I(Y_0; \mathbf{U}) + \sum_{\ell=1}^{L} I(Y_\ell; U_\ell | Y_0).$$

But $I(Y_0; \mathbf{U}) = (1 - D)\log 2$ and

$$I(Y_\ell; U_\ell | Y_0) = H(U_\ell | Y_0) - H(U_\ell | Y_\ell) = h\big(D^{1/L}\big) - (1-p)\, h\!\left(\frac{D^{1/L} - p}{1-p}\right).$$

Then for any $\lambda$, there exist vectors $(\mathbf{R}, D)$ in $RD_{BT\,i}(\lambda)$ such that

$$\sum_{\ell=1}^{L} R_\ell = (1 - D)\log 2 + L\left[ h\big(D^{1/L}\big) - (1-p)\, h\!\left(\frac{D^{1/L} - p}{1-p}\right) \right].$$

The conclusion follows.

B Sum-Rate Converse for the Binary Erasure CEO Problem

We evaluate the outer bound's sum-rate constraint for the binary erasure CEO problem via a sequence of lemmas. Throughout this appendix, $g(\cdot)$ will denote the function on $[p, \infty)$ defined by

$$g(x) = \begin{cases} h(x) - (1-p)\, h\!\left(\dfrac{x - p}{1-p}\right) & p \le x \le 1 \\ 0 & x > 1. \end{cases}$$

We begin by proving several facts about $g(\cdot)$. For this, the following calculations are useful.

Lemma 2 For all $x$ in $(\log p, 0]$,

$$e^x \log(e^x - p) - x e^x \le -p \qquad (17)$$

and

$$e^x \log(e^x - p) - e^x (x + 1) + \frac{e^{2x}}{e^x - p} \ge 0. \qquad (18)$$

Proof. It is well known that

$$\log\left(\frac{1}{1 - z}\right) \ge z \qquad \text{for all } z < 1.$$

Replacing $z$ with $p e^{-x}$ and rearranging yields (17). To see (18), note that (17) implies that the first derivative of

$$(e^x - p)\log(e^x - p) - (e^x - p)(x + 1) + e^x \qquad (19)$$

is nonpositive on $(\log p, 0]$. Since the function in (19) is nonnegative at $x = 0$, it follows that

$$(e^x - p)\log(e^x - p) - (e^x - p)(x + 1) + e^x \ge 0 \qquad (20)$$

for all $x$ in $(\log p, 0]$. One can now obtain (18) by multiplying both sides by $e^x$ and dividing both sides by $(e^x - p)$.

Lemma 3 The function $g(e^x)$ is nonincreasing and convex as a function of $x$ on $[\log p, \infty)$.

Proof. The first derivative of $g(e^x)$ on $(\log p, 0)$ is

$$e^x \log(e^x - p) - x e^x.$$

This observation, the first conclusion of Lemma 2, and the continuity of $g(\cdot)$ together imply that $g(e^x)$ is nonincreasing on $[\log p, 0]$. Since $g(e^x)$ is constant on $[0, \infty)$, it follows that $g(e^x)$ is nonincreasing on $[\log p, \infty)$. The second derivative of $g(e^x)$ on $(\log p, 0)$ is

$$e^x \log(e^x - p) - e^x (x + 1) + \frac{e^{2x}}{e^x - p}.$$

This observation, the second conclusion of Lemma 2, and the continuity of $g(\cdot)$ together imply that $g(e^x)$ is convex on $[\log p, 0]$. Since $g(e^x)$ is nonincreasing on $[\log p, \infty)$ and constant on $[0, \infty)$, it follows that $g(e^x)$ is convex on $[\log p, \infty)$.

Corollary 1 The function $g(y^{1/L})$ is nonincreasing and convex in $y$ on $[p^L, \infty)$.

Proof. $g(y^{1/L}) = g(e^x)$ with $x = (1/L)\log y$, and $g(e^x)$ is convex and nonincreasing while $(1/L)\log(\cdot)$ is concave and nondecreasing.

The next lemma is central to our evaluation of the outer bound's sum rate. Note that condition (i) in the hypothesis implies that $\Pr(Y_0 Z_1 < 0) = 0$. That is, the reproduction $Z_1$ is never in error (although it may be an erasure).
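The monotonicity and convexity established in Lemma 3 and Corollary 1 are the engine of the converse, and $g$ also expresses the conclusive sum rate $(1 - D)\log 2 + L\, g(D^{1/L})$ of Eq. (9) plotted in Fig. 4. The grid-based spot-check below is our own sketch, not a proof (natural logarithms throughout):

```python
import math

def h(x):
    """Binary entropy in nats."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log(x) - (1 - x) * math.log(1 - x)

def g(x, p):
    """The function g from Appendix B, defined on [p, infinity)."""
    if x > 1.0:
        return 0.0
    return h(x) - (1 - p) * h((x - p) / (1 - p))

def sum_rate(D, p, L):
    """Optimal sum rate (1 - D) log 2 + L * g(D^(1/L)), in nats."""
    return (1 - D) * math.log(2) + L * g(D ** (1.0 / L), p)

# Spot-check Lemma 3 on a grid: x -> g(e^x) is nonincreasing and convex.
p = 0.5
xs = [math.log(p) + k * (-math.log(p)) / 200 for k in range(1, 201)]
vals = [g(math.exp(x), p) for x in xs]
diffs = [b - a for a, b in zip(vals, vals[1:])]
assert all(d <= 1e-12 for d in diffs)                         # nonincreasing
assert all(b >= a - 1e-12 for a, b in zip(diffs, diffs[1:]))  # convex
```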

Lemma 4 Suppose $p^L \le D \le 1$ and $(\mathbf{U}, Z_1)$ is such that

(i) $E[d_\lambda(Y_0, Z_1)] \le D$ for all $\lambda$,

(ii) $U_\ell \to Y_\ell \to (Y_0, Y_{\ell^c}, U_{\ell^c})$ for all $\ell$, and

(iii) $(Y_0, \mathbf{Y}) \to \mathbf{U} \to Z_1$.

Then

$$\frac{1}{L} \sum_{\ell=1}^{L} I(Y_\ell; U_\ell | Y_0) \ge g\big(D^{1/L}\big).$$

Proof. For each encoder $\ell$, let

$$A_{\ell,+} = \{ u_\ell \in \mathcal{U}_\ell : \Pr(U_\ell = u_\ell | Y_0 = 1) > 0 \}$$
$$A_{\ell,-} = \{ u_\ell \in \mathcal{U}_\ell : \Pr(U_\ell = u_\ell | Y_0 = -1) > 0 \}.$$

Then define

$$\tilde{U}_\ell = \begin{cases} 1 & \text{if } U_\ell \in A_{\ell,+} \setminus A_{\ell,-} \\ -1 & \text{if } U_\ell \in A_{\ell,-} \setminus A_{\ell,+} \\ 0 & \text{otherwise.} \end{cases}$$

Finally, let

$$\delta_{\ell,+} = \Pr(\tilde{U}_\ell = 0 | Y_\ell = 1) \qquad \delta_{\ell,-} = \Pr(\tilde{U}_\ell = 0 | Y_\ell = -1).$$

Then

$$\frac{1}{L} \sum_{\ell=1}^{L} I(Y_\ell; U_\ell | Y_0) \ge \frac{1}{L} \sum_{\ell=1}^{L} I(Y_\ell; \tilde{U}_\ell | Y_0) = \frac{1}{L} \sum_{\ell=1}^{L} \big[ H(\tilde{U}_\ell | Y_0) - H(\tilde{U}_\ell | Y_\ell) \big]$$
$$\ge \frac{1}{L} \sum_{\ell=1}^{L} \Big[ \frac{1}{2} h\big(p + (1-p)\delta_{\ell,+}\big) + \frac{1}{2} h\big(p + (1-p)\delta_{\ell,-}\big) - \frac{1}{2}(1-p)\, h(\delta_{\ell,+}) - \frac{1}{2}(1-p)\, h(\delta_{\ell,-}) \Big].$$

Since $Y_0 Z_1 \ge 0$ a.s., on the event $Z_1 = 1$ we must have $Y_0 = 1$ and hence $U_\ell \in A_{\ell,+}$ for all $\ell$. In addition, the condition $Y_0 \to \mathbf{U} \to Z_1$ dictates that when $Z_1 = 1$ we must have $U_\ell \in A_{\ell,+} \setminus A_{\ell,-}$ for some $\ell$, for otherwise we would have $\Pr(Y_0 = -1, Z_1 = 1) > 0$. All of this implies that $\mathrm{sgn}\big(\sum_{\ell=1}^{L} \tilde{U}_\ell\big) = 1$ on the event that $Z_1 = 1$. Similarly, $\mathrm{sgn}\big(\sum_{\ell=1}^{L} \tilde{U}_\ell\big) = -1$ on the event $Z_1 = -1$. Thus $\mathrm{sgn}\big(\sum_{\ell=1}^{L} \tilde{U}_\ell\big) = 0$ implies that $Z_1 = 0$, so

$$\Pr\Big( \mathrm{sgn}\Big(\sum_{\ell=1}^{L} \tilde{U}_\ell\Big) = 0 \Big) \le \Pr(Z_1 = 0) \le D.$$

This implies that

$$\frac{1}{2} \prod_{\ell=1}^{L} \big(p + (1-p)\delta_{\ell,+}\big) + \frac{1}{2} \prod_{\ell=1}^{L} \big(p + (1-p)\delta_{\ell,-}\big) \le D.$$

Thus

$$\frac{1}{L} \sum_{\ell=1}^{L} I(Y_\ell; U_\ell | Y_0) \ge \inf\Big\{ \frac{1}{L} \sum_{\ell=1}^{L} \Big[ \frac{1}{2} h\big(p + (1-p)\delta_{\ell,+}\big) - \frac{1}{2}(1-p)h(\delta_{\ell,+}) + \frac{1}{2} h\big(p + (1-p)\delta_{\ell,-}\big) - \frac{1}{2}(1-p)h(\delta_{\ell,-}) \Big] :$$
$$\delta_{\ell,+}, \delta_{\ell,-} \in [0, 1] \text{ for all } \ell \text{ and } \frac{1}{2} \prod_{\ell=1}^{L} \big(p + (1-p)\delta_{\ell,+}\big) + \frac{1}{2} \prod_{\ell=1}^{L} \big(p + (1-p)\delta_{\ell,-}\big) \le D \Big\}.$$

This optimization problem is not convex, but if we change variables to

$$\beta_{\ell,+} = \log\big(p + (1-p)\delta_{\ell,+}\big), \qquad \beta_{\ell,-} = \log\big(p + (1-p)\delta_{\ell,-}\big),$$

then it can be rewritten as

$$\inf\Big\{ \frac{1}{L} \sum_{\ell=1}^{L} \frac{1}{2} \Big[ h\big(e^{\beta_{\ell,+}}\big) - (1-p)\, h\Big(\frac{e^{\beta_{\ell,+}} - p}{1-p}\Big) + h\big(e^{\beta_{\ell,-}}\big) - (1-p)\, h\Big(\frac{e^{\beta_{\ell,-}} - p}{1-p}\Big) \Big] :$$
$$\beta_{\ell,+}, \beta_{\ell,-} \in [\log p, 0] \text{ for all } \ell \text{ and } \frac{1}{2} \exp\Big(\sum_{\ell=1}^{L} \beta_{\ell,+}\Big) + \frac{1}{2} \exp\Big(\sum_{\ell=1}^{L} \beta_{\ell,-}\Big) \le D \Big\}$$
$$= \inf\Big\{ \frac{1}{L} \sum_{\ell=1}^{L} \frac{1}{2} \Big[ g\big(e^{\beta_{\ell,+}}\big) + g\big(e^{\beta_{\ell,-}}\big) \Big] : \beta_{\ell,+}, \beta_{\ell,-} \in [\log p, 0] \text{ for all } \ell \text{ and } \frac{1}{2} \exp\Big(\sum_{\ell=1}^{L} \beta_{\ell,+}\Big) + \frac{1}{2} \exp\Big(\sum_{\ell=1}^{L} \beta_{\ell,-}\Big) \le D \Big\},$$

which is convex by Lemma 3. Thus we may assume without loss of optimality that

$$\beta_{1,+} = \beta_{2,+} = \cdots = \beta_{L,+} =: \beta_+$$

and

$$\beta_{1,-} = \beta_{2,-} = \cdots = \beta_{L,-} =: \beta_-.$$

This gives

$$\frac{1}{L} \sum_{\ell=1}^{L} I(Y_\ell; U_\ell | Y_0) \ge \inf\Big\{ \frac{1}{2} g\big(e^{\beta_+}\big) + \frac{1}{2} g\big(e^{\beta_-}\big) : \beta_+, \beta_- \in [\log p, 0] : \frac{1}{2} e^{L\beta_+} + \frac{1}{2} e^{L\beta_-} \le D \Big\}$$
$$\ge \inf\Big\{ g\big(e^{\beta}\big) : \beta \in [\log p, 0] : e^{L\beta} \le D \Big\} = g\big(D^{1/L}\big),$$

by Lemma 3.

The quantity $I(Y_\ell; U_\ell | Y_0)$ can be interpreted as the amount of information that the $\ell$th encoder sends about its observation noise.³ Lemma 4 then says that if a fraction $D$ of the output symbols is allowed to be erased and no errors are allowed, then the amount of information that the average encoder must send about its observation noise is at least $g(D^{1/L})$.

We would like to extend this last assertion to allow few decoding errors instead of none. To this end, we will employ the following cardinality bound on the alphabet sizes of the auxiliary random variables $U_1, \ldots, U_L$.

Lemma 5 Let $(\mathbf{U}, Z_1)$ be such that

(i) $U_\ell \to Y_\ell \to (Y_0, Y_{\ell^c}, U_{\ell^c})$ for all $\ell$, and

(ii) $(Y_0, \mathbf{Y}) \to \mathbf{U} \to Z_1$.

Then for any $\lambda$, there exist alternate random variables $\tilde{\mathbf{U}}$ and $\tilde{Z}_1$ also satisfying (i) and (ii) such that

$$E[d_\lambda(Y_0, \tilde{Z}_1)] \le E[d_\lambda(Y_0, Z_1)],$$
$$I(Y_\ell; \tilde{U}_\ell | Y_0) = I(Y_\ell; U_\ell | Y_0) \quad \text{for all } \ell, \text{ and}$$
$$|\tilde{\mathcal{U}}_\ell| \le |\mathcal{Y}_\ell| + 1 \quad \text{for all } \ell.$$

See Wyner and Ziv [8, Theorem A2] or Csiszár and Körner [3, Theorem 3.4.6] for proofs of similar results.

The next lemma is the desired extension of Lemma 4.

Lemma 6 Suppose $p^L \le D \le 1$ and $(\mathbf{U}, Z_1)$ is such that

(i) $E[d_\lambda(Y_0, Z_1)] \le D$,

(ii) $U_\ell \to Y_\ell \to (Y_0, Y_{\ell^c}, U_{\ell^c})$ for all $\ell$, and

³This terminology is due to Prabhakaran, Tse, and Ramchandran.

(iii) $(Y_0, \mathbf{Y}) \to \mathbf{U} \to Z_1$.

If

$$\frac{32L}{p(1-p)} \left(\frac{2D}{\lambda}\right)^{1/L} \le \delta \le \frac{1}{2},$$

then

$$\frac{1}{L} \sum_{\ell=1}^{L} I(Y_\ell; U_\ell | Y_0) \ge g\big((D + \delta)^{1/L}\big) + 2\delta \log\frac{\delta}{5}.$$

Proof. By Lemma 5, we may assume that $\mathcal{U}_\ell = \{1, \ldots, 4\}$ for each $\ell$. We may also assume that $Z_1$ is a deterministic function of $\mathbf{U}$: $Z_1 = \phi(\mathbf{U})$. Define

$$A_{\ell,+} = \Big\{ u_\ell \in \mathcal{U}_\ell : \exists u_{\ell^c} : \phi(\mathbf{u}) = 1 \text{ and } \Pr(U_\ell = u_\ell | Y_0 = -1) = \min_{j \in \{1,\ldots,L\}} \Pr(U_j = u_j | Y_0 = -1) \Big\}$$
$$A_{\ell,-} = \Big\{ u_\ell \in \mathcal{U}_\ell : \exists u_{\ell^c} : \phi(\mathbf{u}) = -1 \text{ and } \Pr(U_\ell = u_\ell | Y_0 = 1) = \min_{j \in \{1,\ldots,L\}} \Pr(U_j = u_j | Y_0 = 1) \Big\}.$$

We now define random variables $(\tilde{\mathbf{U}}, \tilde{Z}_1)$ to replace $(\mathbf{U}, Z_1)$. The replacements will be close to the originals in distribution but will have the property that $\Pr(Y_0 \tilde{Z}_1 < 0) = 0$. That is, $\tilde{Z}_1$ will never be in error. Set $\tilde{\mathcal{U}}_\ell = \{1, \ldots, 5\}$ for each $\ell$, and let

$$\Pr(\tilde{U}_\ell = i | Y_\ell = 1) = \begin{cases} 0 & \text{if } i \in A_{\ell,-} \\ \Pr(U_\ell \in A_{\ell,-} | Y_\ell = 1) & \text{if } i = 5 \\ \Pr(U_\ell = i | Y_\ell = 1) & \text{otherwise} \end{cases}$$

$$\Pr(\tilde{U}_\ell = i | Y_\ell = -1) = \begin{cases} 0 & \text{if } i \in A_{\ell,+} \\ \Pr(U_\ell \in A_{\ell,+} | Y_\ell = -1) & \text{if } i = 5 \\ \Pr(U_\ell = i | Y_\ell = -1) & \text{otherwise} \end{cases}$$

$$\Pr(\tilde{U}_\ell = i | Y_\ell = 0) = \begin{cases} 0 & \text{if } i \in A_{\ell,+} \cup A_{\ell,-} \\ \Pr(U_\ell \in A_{\ell,+} \cup A_{\ell,-} | Y_\ell = 0) & \text{if } i = 5 \\ \Pr(U_\ell = i | Y_\ell = 0) & \text{otherwise.} \end{cases}$$

Then define

$$\tilde{Z}_1 = \tilde{\phi}(\tilde{\mathbf{U}}) := \begin{cases} \phi(\tilde{\mathbf{U}}) & \text{if } \tilde{u}_\ell \le 4 \text{ for all } \ell \\ 0 & \text{otherwise.} \end{cases}$$

There is a natural way of coupling $\tilde{\mathbf{U}}$ to $\mathbf{U}$ such that if $\tilde{U}_\ell$ is in $\{1, \ldots, 4\}$ then $\tilde{U}_\ell = U_\ell$. With this coupling in mind, it is evident that

$$E[d_\lambda(Y_0, \tilde{Z}_1)] = E\big[d_\lambda(Y_0, \tilde{Z}_1)\,\mathbf{1}\big(\max(\tilde{U}_1, \ldots, \tilde{U}_L) \le 4\big)\big] + E\big[d_\lambda(Y_0, \tilde{Z}_1)\,\mathbf{1}\big(\max(\tilde{U}_1, \ldots, \tilde{U}_L) = 5\big)\big]$$
$$\le E\big[d_\lambda(Y_0, Z_1)\,\mathbf{1}\big(\max(\tilde{U}_1, \ldots, \tilde{U}_L) \le 4\big)\big] + \Pr\big(\max(\tilde{U}_1, \ldots, \tilde{U}_L) = 5\big)$$
$$\le D + \Pr\big(\max(\tilde{U}_1, \ldots, \tilde{U}_L) = 5\big).$$

Now for any $\ell$ in $\{1, \ldots, L\}$,

$$\Pr(\tilde{U}_\ell = 5) = \frac{1-p}{2}\Pr(U_\ell \in A_{\ell,-} | Y_\ell = 1) + \frac{1-p}{2}\Pr(U_\ell \in A_{\ell,+} | Y_\ell = -1) + p\Pr(U_\ell \in A_{\ell,+} \cup A_{\ell,-} | Y_\ell = 0).$$

By the union bound, this is upper bounded by

$$\Big[ \frac{1-p}{2}\Pr(U_\ell \in A_{\ell,-} | Y_\ell = 1) + p\Pr(U_\ell \in A_{\ell,-} | Y_\ell = 0) \Big] + \Big[ \frac{1-p}{2}\Pr(U_\ell \in A_{\ell,+} | Y_\ell = -1) + p\Pr(U_\ell \in A_{\ell,+} | Y_\ell = 0) \Big].$$

Since $U_\ell \to Y_\ell \to Y_0$,

$$\Pr(\tilde{U}_\ell = 5) \le \Big[ \frac{1-p}{2}\Pr(U_\ell \in A_{\ell,-} | Y_\ell = 1, Y_0 = 1) + p\Pr(U_\ell \in A_{\ell,-} | Y_\ell = 0, Y_0 = 1) \Big]$$
$$+ \Big[ \frac{1-p}{2}\Pr(U_\ell \in A_{\ell,+} | Y_\ell = -1, Y_0 = -1) + p\Pr(U_\ell \in A_{\ell,+} | Y_\ell = 0, Y_0 = -1) \Big]$$
$$\le \Pr(U_\ell \in A_{\ell,-} | Y_0 = 1) + \Pr(U_\ell \in A_{\ell,+} | Y_0 = -1).$$

But

$$\sum_{\mathbf{u} : \phi(\mathbf{u}) = 1} \prod_{j=1}^{L} \Pr(U_j = u_j | Y_0 = -1) \le \frac{2D}{\lambda},$$

which implies

$$\sum_{\mathbf{u} : \phi(\mathbf{u}) = 1,\; u_\ell \in A_{\ell,+}} \prod_{j=1}^{L} \Pr(U_j = u_j | Y_0 = -1) \le \frac{2D}{\lambda}. \qquad (21)$$

By the definition of $A_{\ell,+}$, for each $u_\ell \in A_{\ell,+}$, there exists at least one $u_{\ell^c}$ such that $\phi(\mathbf{u}) = 1$ and

$$\prod_{j=1}^{L} \Pr(U_j = u_j | Y_0 = -1) \ge \Pr(U_\ell = u_\ell | Y_0 = -1)^L.$$

Together with (21), this implies

$$\sum_{u_\ell \in A_{\ell,+}} \Pr(U_\ell = u_\ell | Y_0 = -1)^L \le \frac{2D}{\lambda}.$$

Applying Hölder's inequality [30, p. 2] gives

$$\Pr(U_\ell \in A_{\ell,+} | Y_0 = -1) = \sum_{u_\ell \in A_{\ell,+}} \Pr(U_\ell = u_\ell | Y_0 = -1) \le |A_{\ell,+}|^{(L-1)/L} \Big( \sum_{u_\ell \in A_{\ell,+}} \Pr(U_\ell = u_\ell | Y_0 = -1)^L \Big)^{1/L} \le 4\left(\frac{2D}{\lambda}\right)^{1/L}.$$

Likewise,

$$\Pr(U_\ell \in A_{\ell,-} | Y_0 = 1) \le 4\left(\frac{2D}{\lambda}\right)^{1/L}.$$

Thus

$$\Pr(\tilde{U}_\ell = 5) \le 8\left(\frac{2D}{\lambda}\right)^{1/L}.$$

By the union bound, it follows that

$$\Pr\big(\max(\tilde{U}_1, \ldots, \tilde{U}_L) = 5\big) \le 8L\left(\frac{2D}{\lambda}\right)^{1/L},$$

and therefore

$$E[d_\lambda(Y_0, \tilde{Z}_1)] \le D + 8L\left(\frac{2D}{\lambda}\right)^{1/L} \le D + \delta.$$

Note that $\tilde{Z}_1 = 1$ only if $\tilde{U}_\ell$ is in $A_{\ell,+}$ for some $\ell$, and

$$\Pr(\tilde{U}_\ell \in A_{\ell,+} | Y_0 = -1) = 0 \quad \text{for all } \ell.$$

Thus $\Pr(Y_0 = -1, \tilde{Z}_1 = 1) = 0$ and similarly, $\Pr(Y_0 = 1, \tilde{Z}_1 = -1) = 0$. It follows from Lemma 4 that

$$\frac{1}{L} \sum_{\ell=1}^{L} I(Y_\ell; \tilde{U}_\ell | Y_0) \ge g\big((D + \delta)^{1/L}\big). \qquad (22)$$

The remainder of the proof is devoted to showing that $I(Y_\ell; \tilde{U}_\ell | Y_0)$ is close to $I(Y_\ell; U_\ell | Y_0)$. For this we use the decomposition

$$I(Y_\ell; \tilde{U}_\ell | Y_0) = H(\tilde{U}_\ell | Y_0) - H(\tilde{U}_\ell | Y_\ell).$$

Observe that

$$\Pr(\tilde{U}_\ell = 5 | Y_\ell = 0) \le \frac{1}{p}\Pr(\tilde{U}_\ell = 5) \le \frac{8}{p}\left(\frac{2D}{\lambda}\right)^{1/L}.$$

Thus $\Pr(\tilde{U}_\ell = 5 | Y_\ell = 0) \le \delta/2$. Similarly, $\Pr(\tilde{U}_\ell = 5 | Y_\ell = 1) \le \delta/2$ and $\Pr(\tilde{U}_\ell = 5 | Y_\ell = -1) \le \delta/2$. Therefore if we view $U_\ell$ as a random variable on $\{1, \ldots, 5\}$, for any $i$ in $\{-1, 0, 1\}$,

$$\sum_{j=1}^{5} \big| \Pr(\tilde{U}_\ell = j | Y_\ell = i) - \Pr(U_\ell = j | Y_\ell = i) \big| = 2\Pr(\tilde{U}_\ell = 5 | Y_\ell = i) \le \delta.$$

A standard result on the continuity of entropy [3, Lemma 1.2.7] now implies that (recall $\delta \le 1/2$)

$$\big| H(\tilde{U}_\ell | Y_\ell = i) - H(U_\ell | Y_\ell = i) \big| \le -\delta \log\frac{\delta}{5},$$

so

$$\big| H(\tilde{U}_\ell | Y_\ell) - H(U_\ell | Y_\ell) \big| \le -\delta \log\frac{\delta}{5}.$$

Likewise, for any $i$ in $\{-1, 1\}$,

$$\Pr(\tilde{U}_\ell = 5 | Y_0 = i) \le 2\Pr(\tilde{U}_\ell = 5) \le 16\left(\frac{2D}{\lambda}\right)^{1/L}.$$

Thus $\Pr(\tilde{U}_\ell = 5 | Y_0 = i) \le \delta/2$, as before. It follows that

$$\big| H(\tilde{U}_\ell | Y_0) - H(U_\ell | Y_0) \big| \le -\delta \log\frac{\delta}{5},$$

so

$$\big| I(Y_\ell; \tilde{U}_\ell | Y_0) - I(Y_\ell; U_\ell | Y_0) \big| \le -2\delta \log\frac{\delta}{5}.$$

Combining this with (22) yields

$$\frac{1}{L} \sum_{\ell=1}^{L} I(Y_\ell; U_\ell | Y_0) \ge g\big((D + \delta)^{1/L}\big) + 2\delta \log\frac{\delta}{5}.$$

We are now in a position to prove the main result of this appendix.

Lemma 7 For any $p^L \le D \le 1$,

$$\lim_{\lambda \to \infty} R_o(D, \lambda) \ge (1 - D)\log 2 + L\, g\big(D^{1/L}\big).$$

Proof. Fix $p^L \le D \le 1$ and $\delta \in (0, 1/2]$, and suppose $\lambda$ satisfies

$$\lambda \ge \max\left( \frac{4D^2}{\delta^2},\; 4\left[\frac{32L}{\delta\, p(1-p)}\right]^{2L} \right). \qquad (23)$$

By taking $X = Y_0$ in the definition of $RD_o(\lambda)$, it follows that there exist $\mathbf{R}$ in $\mathbb{R}_+^L$ and $\gamma$ in $\Gamma_o$ such that

$$D + \delta \ge E[d_\lambda(Y_0, Z_1)]$$

and

$$R_o(D, \lambda) + \delta \ge \sum_{\ell=1}^{L} R_\ell \ge I(Y_0; \mathbf{U} | T) + \sum_{\ell=1}^{L} I(Y_\ell; U_\ell | Y_0, W, T). \qquad (24)$$

For each possible realization $(w, t)$ of $(W, T)$, let

$$D_{w,t} = E[d_\lambda(Y_0, Z_1) | W = w, T = t].$$

Let $S = \{(w, t) : D_{w,t} \le \sqrt{\lambda}\}$. Then by Markov's inequality,

$$\Pr((W, T) \notin S) \le \frac{D + \delta}{\sqrt{\lambda}} \le \delta. \qquad (25)$$

In particular, $\Pr((W, T) \in S) > 0$. Also, for any $(w, t) \in S$,

$$\frac{32L}{p(1-p)}\left(\frac{2 D_{w,t}}{\lambda}\right)^{1/L} \le \delta$$

by (23). Thus, by Lemma 6, if $(w, t) \in S$,

$$\frac{1}{L} \sum_{\ell=1}^{L} I(Y_\ell; U_\ell | Y_0, W = w, T = t) \ge g\big((D_{w,t} + \delta)^{1/L}\big) + 2\delta \log\frac{\delta}{5}.$$

By averaging over $(w, t) \in S$ and invoking Corollary 1, we obtain

$$\sum_{(w,t) \in S} \frac{\Pr(W = w, T = t)}{\Pr((W, T) \in S)} \cdot \frac{1}{L} \sum_{\ell=1}^{L} I(Y_\ell; U_\ell | Y_0, W = w, T = t) \ge g\big((D + \delta)^{1/L}\big) + 2\delta \log\frac{\delta}{5}.$$

From (25), it follows that

$$\sum_{\ell=1}^{L} I(Y_\ell; U_\ell | Y_0, W, T) \ge L(1 - \delta)\Big[ g\big((D + \delta)^{1/L}\big) + 2\delta \log\frac{\delta}{5} \Big]. \qquad (26)$$

Now by the data processing inequality,

$$I(Y_0; \mathbf{U} | T) = I(Y_0; \mathbf{U}, T) \ge I(Y_0; Z_1).$$

Let $\epsilon = \mathbf{1}(Y_0 Z_1 = -1)$. Continuing,

$$I(Y_0; \mathbf{U} | T) \ge H(Y_0) - H(Y_0 | Z_1) = \log 2 - H(Y_0, \epsilon | Z_1) = \log 2 - H(\epsilon | Z_1) - H(Y_0 | \epsilon, Z_1)$$
$$\ge \log 2 - h\big((D + \delta)/\lambda\big) - \Pr(Z_1 = 0)\log 2 \ge (1 - D - \delta)\log 2 - h(\delta).$$

Substituting this and (26) into (24) yields

$$R_o(D, \lambda) \ge (1 - D - \delta)\log 2 - h(\delta) + L(1 - \delta)\Big[ g\big((D + \delta)^{1/L}\big) + 2\delta \log\frac{\delta}{5} \Big] - \delta.$$

The proof is terminated by letting $\lambda \to \infty$ and then $\delta \to 0$.
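Lemmas 1 and 7 together pin the sum rate at $(1 - D)\log 2 + L\, g(D^{1/L})$. The achievability half rests on the erasing test channels in the proof of Lemma 1, whose two key properties, erasure probability exactly $D$ and no sign errors, can be confirmed by simulation. The sketch below is our own, not from the paper.

```python
import random

def simulate(p, D, L, n=200_000, seed=1):
    """Simulate the test channel of Lemma 1: Y_l = Y_0 * N_l with
    Pr(N_l = 0) = p; U_l = Y_l * Ntilde_l with
    Pr(Ntilde_l = 1) = (1 - D**(1/L)) / (1 - p); Z_1 = sgn(sum of U_l).
    Returns (empirical erasure rate, number of sign errors)."""
    rng = random.Random(seed)
    q = (1.0 - D ** (1.0 / L)) / (1.0 - p)  # Pr(Ntilde_l = 1)
    erasures = errors = 0
    for _ in range(n):
        y0 = rng.choice((-1, 1))
        total = 0
        for _ in range(L):
            y = y0 if rng.random() > p else 0  # sensor awake w.p. 1 - p
            u = y if rng.random() < q else 0   # test-channel erasure
            total += u
        z = (total > 0) - (total < 0)          # sgn
        if z == 0:
            erasures += 1
        elif z != y0:
            errors += 1
    return erasures / n, errors

rate, errs = simulate(p=0.5, D=0.4, L=3)
```

Errors are structurally impossible here: every nonzero $U_\ell$ equals $Y_0$, so the sign of the sum can only be $Y_0$ or zero.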

33 From (25, it foows that L = [ ( I(Y ; U Y 0, W, T L( δ g (D + δ /L + 2δ og δ ]. (26 5 Now by the data processing inequaity, Let ε = (Y 0 Z =. Continuing, I(Y 0 ; U T = I(Y 0 ; U, T I(Y 0 ; Z. I(Y 0 ; U T H(Y 0 H(Y 0 Z = og 2 H(Y 0, ε Z = og 2 H(ε Z H(Y 0 ε, Z og 2 h(d/λ Pr(Z = 0 og 2 ( D og 2 h(δ. Substituting this and (26 into (24 yieds R o (D, λ ( D og 2 h(δ + L( δ [ ( g (D + δ /L + 2δ og δ ] δ. 5 The proof is terminated by etting λ and then δ 0. C The Berger-Tung Outer Bound is Loose for the Binary Erasure CEO Probem We wi show numericay that for one instance of the binary erasure CEO probem, RD BT o contains points with a stricty superoptima sum rate. Let L = 2 and p = /2. Let W and W 2 be {0, }-vaued random variabes with the joint distribution [ ] /5 2/5, 2/5 0 i.e., Pr(W = 0, W 2 = 0 = 5 Pr(W =, W 2 = 0 = Pr(W = 0, W 2 = = 2 5. We assume that (W, W 2 is independent of (Y 0, Y, Y 2. Let U = Y W for in {, 2}, and et Z = sgn(u + U 2. Since Y can be written as Y = Y 0 N where N and N 2 are i.i.d. with Pr(N = 0 = Pr(N = = /2 (reca the 33


SUPPLEMENTARY MATERIAL TO INNOVATED SCALABLE EFFICIENT ESTIMATION IN ULTRA-LARGE GAUSSIAN GRAPHICAL MODELS ISEE 1 SUPPLEMENTARY MATERIAL TO INNOVATED SCALABLE EFFICIENT ESTIMATION IN ULTRA-LARGE GAUSSIAN GRAPHICAL MODELS By Yingying Fan and Jinchi Lv University of Southern Caifornia This Suppementary Materia

More information

A. Distribution of the test statistic

A. Distribution of the test statistic A. Distribution of the test statistic In the sequentia test, we first compute the test statistic from a mini-batch of size m. If a decision cannot be made with this statistic, we keep increasing the mini-batch

More information

Maximizing Sum Rate and Minimizing MSE on Multiuser Downlink: Optimality, Fast Algorithms and Equivalence via Max-min SIR

Maximizing Sum Rate and Minimizing MSE on Multiuser Downlink: Optimality, Fast Algorithms and Equivalence via Max-min SIR 1 Maximizing Sum Rate and Minimizing MSE on Mutiuser Downink: Optimaity, Fast Agorithms and Equivaence via Max-min SIR Chee Wei Tan 1,2, Mung Chiang 2 and R. Srikant 3 1 Caifornia Institute of Technoogy,

More information

Homework 5 Solutions

Homework 5 Solutions Stat 310B/Math 230B Theory of Probabiity Homework 5 Soutions Andrea Montanari Due on 2/19/2014 Exercise [5.3.20] 1. We caim that n 2 [ E[h F n ] = 2 n i=1 A i,n h(u)du ] I Ai,n (t). (1) Indeed, integrabiity

More information

Completion. is dense in H. If V is complete, then U(V) = H.

Completion. is dense in H. If V is complete, then U(V) = H. Competion Theorem 1 (Competion) If ( V V ) is any inner product space then there exists a Hibert space ( H H ) and a map U : V H such that (i) U is 1 1 (ii) U is inear (iii) UxUy H xy V for a xy V (iv)

More information

Age of Information: The Gamma Awakening

Age of Information: The Gamma Awakening Age of Information: The Gamma Awakening Eie Najm and Rajai Nasser LTHI, EPFL, Lausanne, Switzerand Emai: {eie.najm, rajai.nasser}@epf.ch arxiv:604.086v [cs.it] 5 Apr 06 Abstract Status update systems is

More information

Lecture Note 3: Stationary Iterative Methods

Lecture Note 3: Stationary Iterative Methods MATH 5330: Computationa Methods of Linear Agebra Lecture Note 3: Stationary Iterative Methods Xianyi Zeng Department of Mathematica Sciences, UTEP Stationary Iterative Methods The Gaussian eimination (or

More information

Delay Asymptotics with Retransmissions and Fixed Rate Codes over Erasure Channels

Delay Asymptotics with Retransmissions and Fixed Rate Codes over Erasure Channels Deay Asymptotics with Retransmissions and Fixed Rate Codes over Erasure Channes Jian Tan, Yang Yang, Ness B. Shroff, Hesham E Gama Department of Eectrica and Computer Engineering The Ohio State University,

More information

Coupling of LWR and phase transition models at boundary

Coupling of LWR and phase transition models at boundary Couping of LW and phase transition modes at boundary Mauro Garaveo Dipartimento di Matematica e Appicazioni, Università di Miano Bicocca, via. Cozzi 53, 20125 Miano Itay. Benedetto Piccoi Department of

More information

Manipulation in Financial Markets and the Implications for Debt Financing

Manipulation in Financial Markets and the Implications for Debt Financing Manipuation in Financia Markets and the Impications for Debt Financing Leonid Spesivtsev This paper studies the situation when the firm is in financia distress and faces bankruptcy or debt restructuring.

More information

FOURIER SERIES ON ANY INTERVAL

FOURIER SERIES ON ANY INTERVAL FOURIER SERIES ON ANY INTERVAL Overview We have spent considerabe time earning how to compute Fourier series for functions that have a period of 2p on the interva (-p,p). We have aso seen how Fourier series

More information

Integrality ratio for Group Steiner Trees and Directed Steiner Trees

Integrality ratio for Group Steiner Trees and Directed Steiner Trees Integraity ratio for Group Steiner Trees and Directed Steiner Trees Eran Haperin Guy Kortsarz Robert Krauthgamer Aravind Srinivasan Nan Wang Abstract The natura reaxation for the Group Steiner Tree probem,

More information

Uniformly Reweighted Belief Propagation: A Factor Graph Approach

Uniformly Reweighted Belief Propagation: A Factor Graph Approach Uniformy Reweighted Beief Propagation: A Factor Graph Approach Henk Wymeersch Chamers University of Technoogy Gothenburg, Sweden henkw@chamers.se Federico Penna Poitecnico di Torino Torino, Itay federico.penna@poito.it

More information

arxiv: v1 [math.co] 17 Dec 2018

arxiv: v1 [math.co] 17 Dec 2018 On the Extrema Maximum Agreement Subtree Probem arxiv:1812.06951v1 [math.o] 17 Dec 2018 Aexey Markin Department of omputer Science, Iowa State University, USA amarkin@iastate.edu Abstract Given two phyogenetic

More information

Algorithms to solve massively under-defined systems of multivariate quadratic equations

Algorithms to solve massively under-defined systems of multivariate quadratic equations Agorithms to sove massivey under-defined systems of mutivariate quadratic equations Yasufumi Hashimoto Abstract It is we known that the probem to sove a set of randomy chosen mutivariate quadratic equations

More information

Biometrics Unit, 337 Warren Hall Cornell University, Ithaca, NY and. B. L. Raktoe

Biometrics Unit, 337 Warren Hall Cornell University, Ithaca, NY and. B. L. Raktoe NONISCMORPHIC CCMPLETE SETS OF ORTHOGONAL F-SQ.UARES, HADAMARD MATRICES, AND DECCMPOSITIONS OF A 2 4 DESIGN S. J. Schwager and w. T. Federer Biometrics Unit, 337 Warren Ha Corne University, Ithaca, NY

More information

MONOCHROMATIC LOOSE PATHS IN MULTICOLORED k-uniform CLIQUES

MONOCHROMATIC LOOSE PATHS IN MULTICOLORED k-uniform CLIQUES MONOCHROMATIC LOOSE PATHS IN MULTICOLORED k-uniform CLIQUES ANDRZEJ DUDEK AND ANDRZEJ RUCIŃSKI Abstract. For positive integers k and, a k-uniform hypergraph is caed a oose path of ength, and denoted by

More information

c 2007 Society for Industrial and Applied Mathematics

c 2007 Society for Industrial and Applied Mathematics SIAM REVIEW Vo. 49,No. 1,pp. 111 1 c 7 Society for Industria and Appied Mathematics Domino Waves C. J. Efthimiou M. D. Johnson Abstract. Motivated by a proposa of Daykin [Probem 71-19*, SIAM Rev., 13 (1971),

More information

On Non-Optimally Expanding Sets in Grassmann Graphs

On Non-Optimally Expanding Sets in Grassmann Graphs ectronic Cooquium on Computationa Compexity, Report No. 94 (07) On Non-Optimay xpanding Sets in Grassmann Graphs Irit Dinur Subhash Khot Guy Kinder Dor Minzer Mui Safra Abstract The paper investigates

More information

Stochastic Variational Inference with Gradient Linearization

Stochastic Variational Inference with Gradient Linearization Stochastic Variationa Inference with Gradient Linearization Suppementa Materia Tobias Pötz * Anne S Wannenwetsch Stefan Roth Department of Computer Science, TU Darmstadt Preface In this suppementa materia,

More information

arxiv: v1 [cs.db] 25 Jun 2013

arxiv: v1 [cs.db] 25 Jun 2013 Communication Steps for Parae Query Processing Pau Beame, Paraschos Koutris and Dan Suciu {beame,pkoutris,suciu}@cs.washington.edu University of Washington arxiv:1306.5972v1 [cs.db] 25 Jun 2013 June 26,

More information

An Algorithm for Pruning Redundant Modules in Min-Max Modular Network

An Algorithm for Pruning Redundant Modules in Min-Max Modular Network An Agorithm for Pruning Redundant Modues in Min-Max Moduar Network Hui-Cheng Lian and Bao-Liang Lu Department of Computer Science and Engineering, Shanghai Jiao Tong University 1954 Hua Shan Rd., Shanghai

More information

Cryptanalysis of PKP: A New Approach

Cryptanalysis of PKP: A New Approach Cryptanaysis of PKP: A New Approach Éiane Jaumes and Antoine Joux DCSSI 18, rue du Dr. Zamenhoff F-92131 Issy-es-Mx Cedex France eiane.jaumes@wanadoo.fr Antoine.Joux@ens.fr Abstract. Quite recenty, in

More information

On the estimation of multiple random integrals and U-statistics

On the estimation of multiple random integrals and U-statistics Péter Major On the estimation of mutipe random integras and U-statistics Lecture Note January 9, 2014 Springer Contents 1 Introduction................................................... 1 2 Motivation

More information

Primal and dual active-set methods for convex quadratic programming

Primal and dual active-set methods for convex quadratic programming Math. Program., Ser. A 216) 159:469 58 DOI 1.17/s117-15-966-2 FULL LENGTH PAPER Prima and dua active-set methods for convex quadratic programming Anders Forsgren 1 Phiip E. Gi 2 Eizabeth Wong 2 Received:

More information

Bayesian Learning. You hear a which which could equally be Thanks or Tanks, which would you go with?

Bayesian Learning. You hear a which which could equally be Thanks or Tanks, which would you go with? Bayesian Learning A powerfu and growing approach in machine earning We use it in our own decision making a the time You hear a which which coud equay be Thanks or Tanks, which woud you go with? Combine

More information

Week 6 Lectures, Math 6451, Tanveer

Week 6 Lectures, Math 6451, Tanveer Fourier Series Week 6 Lectures, Math 645, Tanveer In the context of separation of variabe to find soutions of PDEs, we encountered or and in other cases f(x = f(x = a 0 + f(x = a 0 + b n sin nπx { a n

More information

Stochastic Complement Analysis of Multi-Server Threshold Queues. with Hysteresis. Abstract

Stochastic Complement Analysis of Multi-Server Threshold Queues. with Hysteresis. Abstract Stochastic Compement Anaysis of Muti-Server Threshod Queues with Hysteresis John C.S. Lui The Dept. of Computer Science & Engineering The Chinese University of Hong Kong Leana Goubchik Dept. of Computer

More information

More Scattering: the Partial Wave Expansion

More Scattering: the Partial Wave Expansion More Scattering: the Partia Wave Expansion Michae Fower /7/8 Pane Waves and Partia Waves We are considering the soution to Schrödinger s equation for scattering of an incoming pane wave in the z-direction

More information

Multicasting Energy and Information Simultaneously

Multicasting Energy and Information Simultaneously Muticasting Energy and Information Simutaneousy Ting-Yi Wu, Anshoo Tandon, Lav R. Varshney, and Mehu Motani Sun Yat-Sen University, wutingyi@mai.sysu.edu.cn Nationa University of Singapore, {anshoo.tandon@gmai.com,

More information

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. 15, NO. 2, FEBRUARY

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. 15, NO. 2, FEBRUARY IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. 5, NO. 2, FEBRUARY 206 857 Optima Energy and Data Routing in Networks With Energy Cooperation Berk Gurakan, Student Member, IEEE, OmurOze,Member, IEEE,

More information

Discrete Techniques. Chapter Introduction

Discrete Techniques. Chapter Introduction Chapter 3 Discrete Techniques 3. Introduction In the previous two chapters we introduced Fourier transforms of continuous functions of the periodic and non-periodic (finite energy) type, we as various

More information

Pattern Frequency Sequences and Internal Zeros

Pattern Frequency Sequences and Internal Zeros Advances in Appied Mathematics 28, 395 420 (2002 doi:10.1006/aama.2001.0789, avaiabe onine at http://www.ideaibrary.com on Pattern Frequency Sequences and Interna Zeros Mikós Bóna Department of Mathematics,

More information

Discrete Techniques. Chapter Introduction

Discrete Techniques. Chapter Introduction Chapter 3 Discrete Techniques 3. Introduction In the previous two chapters we introduced Fourier transforms of continuous functions of the periodic and non-periodic (finite energy) type, as we as various

More information

Explicit overall risk minimization transductive bound

Explicit overall risk minimization transductive bound 1 Expicit overa risk minimization transductive bound Sergio Decherchi, Paoo Gastado, Sandro Ridea, Rodofo Zunino Dept. of Biophysica and Eectronic Engineering (DIBE), Genoa University Via Opera Pia 11a,

More information

STA 216 Project: Spline Approach to Discrete Survival Analysis

STA 216 Project: Spline Approach to Discrete Survival Analysis : Spine Approach to Discrete Surviva Anaysis November 4, 005 1 Introduction Athough continuous surviva anaysis differs much from the discrete surviva anaysis, there is certain ink between the two modeing

More information

Centralized Coded Caching of Correlated Contents

Centralized Coded Caching of Correlated Contents Centraized Coded Caching of Correated Contents Qianqian Yang and Deniz Gündüz Information Processing and Communications Lab Department of Eectrica and Eectronic Engineering Imperia Coege London arxiv:1711.03798v1

More information

On the Goal Value of a Boolean Function

On the Goal Value of a Boolean Function On the Goa Vaue of a Booean Function Eric Bach Dept. of CS University of Wisconsin 1210 W. Dayton St. Madison, WI 53706 Lisa Heerstein Dept of CSE NYU Schoo of Engineering 2 Metrotech Center, 10th Foor

More information

AST 418/518 Instrumentation and Statistics

AST 418/518 Instrumentation and Statistics AST 418/518 Instrumentation and Statistics Cass Website: http://ircamera.as.arizona.edu/astr_518 Cass Texts: Practica Statistics for Astronomers, J.V. Wa, and C.R. Jenkins, Second Edition. Measuring the

More information

ORTHOGONAL MULTI-WAVELETS FROM MATRIX FACTORIZATION

ORTHOGONAL MULTI-WAVELETS FROM MATRIX FACTORIZATION J. Korean Math. Soc. 46 2009, No. 2, pp. 281 294 ORHOGONAL MLI-WAVELES FROM MARIX FACORIZAION Hongying Xiao Abstract. Accuracy of the scaing function is very crucia in waveet theory, or correspondingy,

More information

A UNIVERSAL METRIC FOR THE CANONICAL BUNDLE OF A HOLOMORPHIC FAMILY OF PROJECTIVE ALGEBRAIC MANIFOLDS

A UNIVERSAL METRIC FOR THE CANONICAL BUNDLE OF A HOLOMORPHIC FAMILY OF PROJECTIVE ALGEBRAIC MANIFOLDS A UNIERSAL METRIC FOR THE CANONICAL BUNDLE OF A HOLOMORPHIC FAMILY OF PROJECTIE ALGEBRAIC MANIFOLDS DROR AROLIN Dedicated to M Saah Baouendi on the occasion of his 60th birthday 1 Introduction In his ceebrated

More information

B. Brown, M. Griebel, F.Y. Kuo and I.H. Sloan

B. Brown, M. Griebel, F.Y. Kuo and I.H. Sloan Wegeerstraße 6 53115 Bonn Germany phone +49 8 73-347 fax +49 8 73-757 www.ins.uni-bonn.de B. Brown, M. Griebe, F.Y. Kuo and I.H. Soan On the expected uniform error of geometric Brownian motion approximated

More information

Chemical Kinetics Part 2

Chemical Kinetics Part 2 Integrated Rate Laws Chemica Kinetics Part 2 The rate aw we have discussed thus far is the differentia rate aw. Let us consider the very simpe reaction: a A à products The differentia rate reates the rate

More information

Restricted weak type on maximal linear and multilinear integral maps.

Restricted weak type on maximal linear and multilinear integral maps. Restricted weak type on maxima inear and mutiinear integra maps. Oscar Basco Abstract It is shown that mutiinear operators of the form T (f 1,..., f k )(x) = R K(x, y n 1,..., y k )f 1 (y 1 )...f k (y

More information

12.2. Maxima and Minima. Introduction. Prerequisites. Learning Outcomes

12.2. Maxima and Minima. Introduction. Prerequisites. Learning Outcomes Maima and Minima 1. Introduction In this Section we anayse curves in the oca neighbourhood of a stationary point and, from this anaysis, deduce necessary conditions satisfied by oca maima and oca minima.

More information

Tight Bounds for Distributed Functional Monitoring

Tight Bounds for Distributed Functional Monitoring Tight Bounds for Distributed Functiona Monitoring David P. Woodruff IBM Amaden dpwoodru@us.ibm.com Qin Zhang IBM Amaden qinzhang@cse.ust.hk Abstract We resove severa fundamenta questions in the area of

More information

Pricing Multiple Products with the Multinomial Logit and Nested Logit Models: Concavity and Implications

Pricing Multiple Products with the Multinomial Logit and Nested Logit Models: Concavity and Implications Pricing Mutipe Products with the Mutinomia Logit and Nested Logit Modes: Concavity and Impications Hongmin Li Woonghee Tim Huh WP Carey Schoo of Business Arizona State University Tempe Arizona 85287 USA

More information

Improved Decoding of Reed-Solomon and Algebraic-Geometric Codes

Improved Decoding of Reed-Solomon and Algebraic-Geometric Codes Improved Decoding of Reed-Soomon and Agebraic-Geometric Codes Venkatesan Guruswami Madhu Sudan Abstract Given an error-correcting code over strings of ength n and an arbitrary input string aso of ength

More information

u(x) s.t. px w x 0 Denote the solution to this problem by ˆx(p, x). In order to obtain ˆx we may simply solve the standard problem max x 0

u(x) s.t. px w x 0 Denote the solution to this problem by ˆx(p, x). In order to obtain ˆx we may simply solve the standard problem max x 0 Bocconi University PhD in Economics - Microeconomics I Prof M Messner Probem Set 4 - Soution Probem : If an individua has an endowment instead of a monetary income his weath depends on price eves In particuar,

More information

LECTURE NOTES 9 TRACELESS SYMMETRIC TENSOR APPROACH TO LEGENDRE POLYNOMIALS AND SPHERICAL HARMONICS

LECTURE NOTES 9 TRACELESS SYMMETRIC TENSOR APPROACH TO LEGENDRE POLYNOMIALS AND SPHERICAL HARMONICS MASSACHUSETTS INSTITUTE OF TECHNOLOGY Physics Department Physics 8.07: Eectromagnetism II October 7, 202 Prof. Aan Guth LECTURE NOTES 9 TRACELESS SYMMETRIC TENSOR APPROACH TO LEGENDRE POLYNOMIALS AND SPHERICAL

More information

Coded Caching for Files with Distinct File Sizes

Coded Caching for Files with Distinct File Sizes Coded Caching for Fies with Distinct Fie Sizes Jinbei Zhang iaojun Lin Chih-Chun Wang inbing Wang Department of Eectronic Engineering Shanghai Jiao ong University China Schoo of Eectrica and Computer Engineering

More information

UNIFORM CONVERGENCE OF MULTIPLIER CONVERGENT SERIES

UNIFORM CONVERGENCE OF MULTIPLIER CONVERGENT SERIES royecciones Vo. 26, N o 1, pp. 27-35, May 2007. Universidad Catóica de Norte Antofagasta - Chie UNIFORM CONVERGENCE OF MULTILIER CONVERGENT SERIES CHARLES SWARTZ NEW MEXICO STATE UNIVERSITY Received :

More information

arxiv: v1 [quant-ph] 18 Nov 2014

arxiv: v1 [quant-ph] 18 Nov 2014 Overcoming efficiency constraints on bind quantum computation Caros A Pérez-Degado 1 and oseph F Fitzsimons1, 2, 1 Singapore University of Technoogy and Design, 20 Dover Drive, Singapore 138682 2 Centre

More information

Turbo Codes. Coding and Communication Laboratory. Dept. of Electrical Engineering, National Chung Hsing University

Turbo Codes. Coding and Communication Laboratory. Dept. of Electrical Engineering, National Chung Hsing University Turbo Codes Coding and Communication Laboratory Dept. of Eectrica Engineering, Nationa Chung Hsing University Turbo codes 1 Chapter 12: Turbo Codes 1. Introduction 2. Turbo code encoder 3. Design of intereaver

More information