On Function Computation with Privacy and Secrecy Constraints


Wenwen Tu and Lifeng Lai

Abstract: In this paper, the problem of function computation with privacy and secrecy constraints is considered. The considered model consists of three legitimate nodes (i.e., two transmitters, Alice and Bob, and a fusion center acting as the receiver) that observe correlated sources and are connected by noiseless public channels, and an eavesdropper, Eve, who has full access to the public channels and also has her own source observations. The fusion center would like to compute a function of the distributed sources to within a prefixed distortion level under a certain distortion metric. To facilitate the function computation, Alice and Bob send messages to the fusion center. Different from existing setups in function computation, we assume that there is a privacy constraint on the sources at Alice and Bob. In particular, Alice and Bob would like to enable the fusion center to compute the function, but at the same time they do not want the fusion center to learn too much information about their source observations. We introduce a quantity that precisely measures the privacy leakage to the fusion center. In addition to this privacy constraint, we also impose a secrecy constraint with respect to Eve and use the equivocation of the sources to measure it. Under this model, we study tradeoffs among message rates, privacy leakage, equivocation and distortion. We first consider a scenario involving only one transmitter (i.e., the source at Bob is empty) and fully characterize the corresponding regions in single-letter form. We then consider the more general case and provide both outer and inner bounds on the corresponding regions.

Index Terms: Function computation, privacy constraint, public discussion, rate distortion, secrecy constraint.

W. Tu and L. Lai are with the Department of Electrical and Computer Engineering, University of California, Davis, CA. The work of W. Tu and L. Lai was supported by the National Science Foundation CAREER Award under Grant CCF and by the National Science Foundation under Grant ECCS. {wwtu,lflai}@ucdavis.edu.

I. INTRODUCTION

Recently, the problem of designing schemes that enable communicating parties to compute functions of distributed sources has received significant attention [1]-[17]. One straightforward scheme for function computation is to ask each information source to send enough information (for example, using schemes from distributed source coding [18]-[22]) so that the function-computing parties can first recover all sources and then compute the functions of interest from the recovered sources. However, as shown in many existing works, full source recovery is not necessary in many scenarios [11]-[16]. As a result, information sources can reduce their transmitted message rates while still enabling the function-computing parties to compute the functions of interest. This can significantly reduce the resource requirements (in terms of energy, spectrum, etc.) and hence is very appealing for resource-constrained applications such as IoT, where the goal of communication is decision making (which requires function computation) rather than full source recovery [23]-[25].

In the basic function computation setup considered in [11], two terminals observe correlated sources and are allowed to exchange messages, while only one of them is required to compute a function of these distributed sources. [11] investigates the minimal message rates at which the function can be computed with a negligibly small error probability. It characterizes the message rate region and further provides an efficient method, by introducing the conditional characteristic graph, to characterize the optimal message rate for the case where only one terminal is allowed to send messages. This basic model has been extended to more complex scenarios in many interesting recent papers [12]-[16]. In particular, [12] studies the problem of two-terminal interactive distributed source coding for function computation at both terminals. In this model, the two terminals are allowed to exchange $t$ (a finite nonnegative integer) coded messages, and each terminal needs to compute a function to within a certain distortion level. [12] provides a single-letter characterization of the corresponding message rate region. Properties of the limit of the sum-rate-distortion function (i.e., the minimal value of the sum of message rates) as $t$ goes to infinity are further investigated in [14]. Furthermore, [15] studies the message rate region of function computation in a different setup. The model considered in [15] consists of three terminals; two of them (transmitters) are allowed to send messages to the third terminal, which needs to compute a function, but there is no interaction between the two transmitters. This model is further discussed in [16] by allowing an additional one-way discussion between the two transmitters.

[17] further generalizes this model to a more sophisticated scenario consisting of more terminals over a rooted multi-level directed tree. In this scenario, each terminal is allowed to transmit messages to its parent terminal and the function is computed at the root. [17] provides both outer and inner bounds on the message rate region, which recover the capacity regions of many function computation setups.

In this paper, we consider privacy and secrecy issues arising in the function computation setup. In the considered model, two terminals, Alice and Bob, are connected to a fusion center, and they observe correlated source sequences $X_1^n$, $X_2^n$, $Y^n$ respectively. The fusion center would like to compute a function of $(X_1^n, X_2^n, Y^n)$. To facilitate the function computation, Alice and Bob send messages $M_1$ and $M_2$ respectively to the fusion center. Different from the setups in [11]-[14], we assume that there is a privacy constraint on the sources at Alice and Bob. In particular, these terminals would like to assist the fusion center in computing the function but at the same time do not want the fusion center to learn too much information about their source observations. We use $\frac{1}{n} I(X_1^n, X_2^n; M_1, M_2 \mid Y^n)$ as our privacy measure. As this quantity equals $\frac{1}{n}[H(X_1^n, X_2^n \mid Y^n) - H(X_1^n, X_2^n \mid M_1, M_2, Y^n)]$, it measures the additional information about the sources $(X_1^n, X_2^n)$ that the fusion center learns from the transmitted messages. We would like to minimize this privacy leakage subject to the constraint that the fusion center can still compute the function of interest. In addition to this privacy constraint, we also have a secrecy constraint. In particular, there is an additional terminal, Eve, who observes $Z^n$, which is correlated with the source sequences, and we use the equivocation of the sources to measure the secrecy leakage to Eve. We would like to maximize this equivocation so that Eve's uncertainty about the sources is maximized.

For the function to be computed, we consider both lossless and lossy cases. In the lossless case, the fusion center is required to compute the function with a diminishingly small error probability. In the lossy case, we allow distortion in the computed function to within a certain distortion level measured by a given distortion metric. We would like to note that the lossless case in our model is not merely a special case of the lossy case with zero distortion. As will be clear in the sequel, the lossless case in our model has a more stringent constraint than setting the distortion to zero in the lossy case; thus, it deserves an independent study. We study the relationship among the message rates, the privacy leakage to the fusion center, the equivocation at Eve and the distortion.

To gain design insights, we first study an important special case where there is only one transmitter (by setting $X_2 = \emptyset$).

This case recovers the basic function computation problem [11] but with additional privacy and secrecy considerations. We fully characterize the regions of the involved parameters for both the lossless and the lossy function computation cases. The results demonstrate that there exist tradeoffs among these parameters. For example, given the distortion level, the message rate and privacy leakage can be simultaneously optimized, but the secrecy level of the sources at Eve may not be simultaneously maximized. In addition, we show that, even though the lossless case has a more stringent constraint than the lossy case with zero distortion, the result obtained for the lossless case is equivalent to that of this special case of the lossy case. Using the understanding gained from the single-transmitter case, we then extend the study to the scenario with two transmitters. We first derive both an outer bound and an inner bound on the corresponding region for the lossless case. These outer and inner bounds have the same form but with different ranges for the auxiliary random variables involved. The obtained results recover many existing results [15], [26], and show that there exist tradeoffs among the different parameters involved in the model. Furthermore, the techniques used in the lossless case are generalized to the lossy case, and we again provide both an outer bound and an inner bound on the corresponding region. Similar to the lossless case, these outer and inner bounds have the same form but with different ranges for the auxiliary random variables involved.

We now briefly review some recent interesting works addressing secrecy issues in function computation, and discuss the differences between our work and those works. [4] considers a multi-terminal source model for secure computation. Under this model, each of $m$ terminals observes a component of correlated sources, and a subset of these terminals is required to compute a function via public discussion. Based on the result in [27], it characterizes the secure computability of the function, i.e., when the computed value of the function is secure from an adversary who has full access to the public discussion. The main difference between [4] and our work is that [4] focuses on the secrecy of the computed function, while we care about both the privacy of the sources at the fusion center and the secrecy at Eve. In addition, in our work, we allow distortion. Furthermore, [4] allows nodes to conduct multiple rounds of interactive communication, while in our model only one-way discussion is allowed. The extension of our model to the interesting scenario with multiple rounds of discussion is left for future work. The secure function computation problem is further studied in [5], [6], [28], [29]. Another line of related work is the class of secure multi-party computation (SMC) problems.

In the SMC paradigm, researchers focus on creating protocols that allow communicating parties to jointly compute functions over distributed inputs while keeping these inputs private. Initially, the SMC problems were investigated under various computational assumptions [30]-[35]. Recently, there has been growing interest in designing information-theoretically secure protocols for SMC [36]-[41]. Typically, these protocols involve multiple rounds of interactive discussion, and some of them can work among untrusted communicating parties, who may deviate from the protocols. The presence of untrusted parties makes the SMC problem very challenging, and the general SMC problem is still under active investigation. In our model, the legitimate users are assumed to be trusted and there is no interaction between Alice and Bob, which simplifies the problem and enables us to obtain conclusive results.

The remainder of this paper is organized as follows. In Section II, we introduce the system model. In Section III, we consider the special case where there is only one transmitter. The scenario with two transmitters is analyzed in Section IV. In Section V, we provide proofs of the results presented in this paper. In Section VI, we offer our concluding remarks.

II. SYSTEM MODEL

As illustrated in Fig. 1, in the considered model, two legitimate terminals, Alice and Bob, are connected to the fusion center via two public noiseless channels in the presence of an eavesdropper, Eve, who has full access to the public channels; there is no link between Alice and Bob. Alice, Bob, the fusion center and Eve observe $n$-length correlated source sequences $X_1^n$, $X_2^n$, $Y^n$ and $Z^n$ respectively. These sequences are generated according to a given probability mass function (PMF) $P_{X_1X_2YZ}$:

$$\Pr\{X_1^n, X_2^n, Y^n, Z^n\} = \prod_{i=1}^{n} P_{X_1X_2YZ}(X_{1i}, X_{2i}, Y_i, Z_i), \qquad (1)$$

where $(X_1, X_2, Y, Z)$ take values in finite alphabets $(\mathcal{X}_1, \mathcal{X}_2, \mathcal{Y}, \mathcal{Z})$ respectively. The fusion center would like to compute a function $f(X_1^n, X_2^n, Y^n)$ that consists of componentwise functions of $\{X_{1i}, X_{2i}, Y_i\}_{i=1}^{n}$, i.e., $f(X_1^n, X_2^n, Y^n) := \{f(X_{1i}, X_{2i}, Y_i)\}_{i=1}^{n}$. We denote $f(X_1^n, X_2^n, Y^n)$ by $f$ in short and $f(X_{1i}, X_{2i}, Y_i)$ by $f_i$ for $i \in [1:n]$; thus, we may also write $f := f^n$.
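As a concrete illustration of the i.i.d. source model (1), here is a minimal Python sketch (ours, purely illustrative; the toy joint PMF `p` is an assumption, not from the paper) that draws the four correlated sequences componentwise.

```python
import random

# A minimal sketch of the i.i.d. source model (1): each tuple
# (X_{1i}, X_{2i}, Y_i, Z_i) is drawn independently from the joint PMF.
def draw_sources(p_joint, n, rng=random):
    symbols = list(p_joint.keys())
    weights = list(p_joint.values())
    draws = rng.choices(symbols, weights=weights, k=n)  # i.i.d. across i
    x1n, x2n, yn, zn = (list(t) for t in zip(*draws))
    return x1n, x2n, yn, zn

# Hypothetical joint PMF over (X1, X2, Y, Z), for illustration only.
p = {(0, 0, 0, 0): 0.4, (1, 1, 1, 0): 0.3, (0, 1, 0, 1): 0.3}
x1n, x2n, yn, zn = draw_sources(p, 10)
```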

Fig. 1. System model: The fusion center would like to compute a function $f$ of $(X_1^n, X_2^n, Y^n)$. Alice and Bob are connected to the fusion center via public noiseless channels, to which Eve has full access.

To facilitate the computation of $f$ at the fusion center, Alice and Bob send messages $M_1$ and $M_2$ to the fusion center via the public channels respectively. Here $M_1$ is a (possibly stochastic) function of the sequence $X_1^n$; similarly, $M_2$ is a (possibly stochastic) function of the sequence $X_2^n$. After receiving these messages, the fusion center computes an estimate $\hat{f}$ of $f$ as a function of $M_1$, $M_2$ and $Y^n$: $\hat{f} := \hat{f}(M_1, M_2, Y^n)$.

In the considered model, Alice and Bob have privacy constraints in the sense that they would like to minimize the privacy leakage to the fusion center and Eve about their observations while still enabling the fusion center to compute the function of interest. We use $\frac{1}{n} I(X_1^n, X_2^n; M_1, M_2 \mid Y^n)$ to measure the additional private information leakage about $(X_1^n, X_2^n)$ to the fusion center. As $I(X_1^n, X_2^n; M_1, M_2 \mid Y^n) = H(X_1^n, X_2^n \mid Y^n) - H(X_1^n, X_2^n \mid M_1, M_2, Y^n)$, this quantity measures the additional information about $(X_1^n, X_2^n)$ that the fusion center learns from $(M_1, M_2)$, and hence is the privacy price we pay in order to compute $f$. We use $\frac{1}{n} H(X_1^n, X_2^n \mid M_1, M_2, Z^n)$ to measure the equivocation of $(X_1^n, X_2^n)$ at Eve.

Definition 1. Given an arbitrary random variable alphabet $\mathcal{F}$ and its reconstruction alphabet $\hat{\mathcal{F}}$, a distortion measure is a mapping $d: \mathcal{F} \times \hat{\mathcal{F}} \to [0, \infty)$, and the distortion between given sequences $f^n$ and $\hat{f}^n$ is measured as $d(f^n, \hat{f}^n) = \sum_{i=1}^{n} d(f_i, \hat{f}_i)$.
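As a quick illustration of Definition 1, the sketch below (assuming Hamming distortion, i.e., $d(f, \hat{f}) = \mathbf{1}\{f \neq \hat{f}\}$, as an example of our choosing) accumulates the per-letter distortions over a sequence.

```python
# Illustrative only: sequence distortion from Definition 1 under an assumed
# Hamming per-letter distortion d(f, fhat) = 1 if f != fhat else 0.
def hamming(f, fhat):
    return 0.0 if f == fhat else 1.0

def sequence_distortion(fn, fhatn, d=hamming):
    # d(f^n, fhat^n) = sum_{i=1}^n d(f_i, fhat_i)
    return sum(d(a, b) for a, b in zip(fn, fhatn))

# Per-letter average, as used in the distortion constraint (2) below.
print(sequence_distortion([0, 1, 1, 0], [0, 1, 0, 0]) / 4)  # 0.25
```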

Definition 2. Given a per-letter distortion measure $d$, a tuple $(R_1, R_2, D, \Delta_1, \Delta_2)$ is said to be achievable if for every $\epsilon > 0$, there exist an $n(\epsilon) \in \mathbb{N}$ and a sequence of $(n, R_1, R_2, D, \Delta_1, \Delta_2)$ codes such that for all $n > n(\epsilon)$:

$$\frac{1}{n} \mathbb{E}[d(f, \hat{f})] \le D + \epsilon, \qquad (2)$$
$$\frac{1}{n} H(M_i) \le R_i + \epsilon, \quad i = 1, 2, \qquad (3)$$
$$\frac{1}{n} I(X_1^n, X_2^n; M_1, M_2 \mid Y^n) \le \Delta_1 + \epsilon, \qquad (4)$$
$$\frac{1}{n} H(X_1^n, X_2^n \mid M_1, M_2, Z^n) \ge \Delta_2 - \epsilon. \qquad (5)$$

Here, (2) indicates that the average distortion between the estimate $\hat{f}$ and the true value $f$ is no more than the given nonnegative parameter $D$, (3) bounds the transmitted message rates at Alice and Bob respectively, (4) requires that the extra privacy leakage of $(X_1^n, X_2^n)$ to the fusion center be no more than $\Delta_1$, and (5) requires that the joint equivocation of $(X_1^n, X_2^n)$ at Eve's side be at least $\Delta_2$.

In Definition 2, in the case when $D = 0$, we replace (2) with the following condition:

$$\Pr\{f \neq \hat{f}\} \le \epsilon, \qquad (6)$$

while keeping the inequalities (3)-(5) unchanged. For this case, we write the tuple $(n, R_1, R_2, D = 0, \Delta_1, \Delta_2)$ as $(n, R_1, R_2, \Delta_1, \Delta_2)$ in short. Obviously, the constraint defined by (6) is stricter than that defined by (2). We refer to the case when $D = 0$ with constraints defined by (3)-(6) as lossless function computation, and to the case with constraints defined by (2)-(5) as lossy function computation. The lossless function computation case can be viewed as a special case of the lossy function computation case, but with a stricter constraint; thus it deserves an independent investigation.

Definition 3. The set of all achievable tuples $(R_1, R_2, D, \Delta_1, \Delta_2)$ is defined as

$$\mathcal{S} := \{(R_1, R_2, D, \Delta_1, \Delta_2) \in \mathbb{R}_+^5 : (R_1, R_2, D, \Delta_1, \Delta_2) \text{ is achievable}\}.$$

Our goal is to characterize the region $\mathcal{S}$.
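The single-letter regions below are stated in terms of (conditional) mutual information. For readers who want to evaluate them numerically, here is a small self-contained helper (an illustrative sketch of ours; the dict-based PMF representation is our own convention, not the paper's) that computes $I(A; B \mid C)$ from a joint PMF.

```python
from math import log2

# Illustrative helper for evaluating the single-letter quantities in the
# theorems below, e.g. I(X; U | Y). A joint PMF is a dict mapping value
# tuples to probabilities; a, b, c are tuples of coordinate indices.
def marginal(pmf, idx):
    out = {}
    for k, p in pmf.items():
        key = tuple(k[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def cond_mutual_info(pmf, a, b, c=()):
    """I(A; B | C) in bits; C may be empty, giving plain I(A; B)."""
    pabc = marginal(pmf, a + b + c)
    pac, pbc, pc = marginal(pmf, a + c), marginal(pmf, b + c), marginal(pmf, c)
    total = 0.0
    for k, p in pabc.items():
        if p > 0:
            ka, kb, kc = k[:len(a)], k[len(a):len(a) + len(b)], k[len(a) + len(b):]
            total += p * log2(p * pc[kc] / (pac[ka + kc] * pbc[kb + kc]))
    return total
```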

III. A SPECIAL CASE WITH $X_2 = \emptyset$

In this section, we study the special case when $X_2 = \emptyset$. In this case, for presentation convenience, we denote $X_1$ by $X$ and $M_1$ by $M$. The model is shown in Fig. 2.

Fig. 2. The case with $X_2 = \emptyset$: The fusion center would like to compute a value as a function of the sequences $X^n$ and $Y^n$. Alice is connected to the fusion center via a public noiseless channel, to which Eve has full access.

A. Lossless Function Computation

In this part, we study the lossless function computation case. Before proceeding to the main results, we introduce the following definition (similar to a definition introduced in Section V.B of [11]) that will simplify the presentation of the results in the sequel.

Definition 4. A random variable $U$ is said to be admissible with respect to random variables $X$, $Y$ and function $f$ (we may write "$U$ is admissible" in short) if it satisfies:

1) $U - X - Y$;
2) $U$ and $Y$ determine $f$, i.e., $H(f \mid U, Y) = 0$.

Furthermore, a sequence $U^n$ is said to be an admissible sequence if each component of $U^n$ is admissible.

Here, condition 1) denotes that the random variables $U$, $X$ and $Y$ form a Markov chain in this order. Condition 2) is equivalent to the existence of a deterministic function $g$ such that $g(U, Y) = f(X, Y)$ for all $(X, Y)$ with $P_{XY}(X, Y) > 0$, according to [26, Chapter 2].
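A quick sanity check of Definition 4 on a toy example (our own, purely for illustration): let $X$ be uniform on $\{0,1,2,3\}$, $Y$ uniform on $\{0,1\}$ and independent of $X$, and $f(X, Y) = (X + Y) \bmod 2$. Then $U = X \bmod 2$ is a deterministic function of $X$ (so $U - X - Y$ holds) and $f = (U + Y) \bmod 2$, so $H(f \mid U, Y) = 0$. The sketch below verifies condition 2) exhaustively.

```python
# Illustrative check of condition 2) in Definition 4: (U, Y) must determine f
# on the support of P_XY. The toy example is an assumption, not from the paper.
def determines_f(xs, ys, p_xy, u_of_x, f):
    table = {}
    for x in xs:
        for y in ys:
            if p_xy(x, y) == 0:
                continue
            key = (u_of_x(x), y)
            if table.setdefault(key, f(x, y)) != f(x, y):
                return False  # two x's with the same (u, y) but different f
    return True

print(determines_f(range(4), range(2), lambda x, y: 1 / 8,
                   lambda x: x % 2, lambda x, y: (x + y) % 2))  # True
```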

To facilitate understanding, we first assume that Eve has no side information, i.e., $Z = \emptyset$, and we have the following result.

Theorem 1. The achievable tuple set $\mathcal{S}$ in the case when Eve has no side information is given by

$$\mathcal{S} = \Big\{ (R, \Delta_1, \Delta_2) : R \ge I(X; U) - I(Y; U), \qquad (7)$$
$$\Delta_1 \ge I(X; U \mid Y), \qquad (8)$$
$$\Delta_2 \le H(X \mid U) + I(Y; U), \qquad (9)$$
$$\text{for some admissible } U \text{ w.r.t. } X, Y \text{ and } f \Big\}. \qquad (10)$$

Proof: Please see Section V-A.

Intuitively, to reduce the additional information leakage to the fusion center and to increase the equivocation at Eve, Alice should reduce the amount of information about $X^n$ contained in the public message $M$. In our scheme, Alice first chooses an appropriate sequence $U^n$ and encodes it into a codeword containing the least information about $X^n$, subject to the conditions that the fusion center can decode $U^n$ correctly and that $U^n$ along with $Y^n$ determines $f^n$. We will show that stochastic encoding does not help in increasing the equivocation at Eve or in reducing the privacy leakage to the fusion center; thus, deterministic encoding is sufficient. The detailed proof is provided in Section V.

Obviously, the set of all possible random variables $U$ is not empty: $X$ itself belongs to this set. In addition, from Theorem 1, we can see that there is no tradeoff among $(R, \Delta_1, \Delta_2)$ in this case. In particular, when $R$ achieves its optimal value, denoted by $R^*$, the values of $\Delta_1$ and $\Delta_2$ can be $R^*$ and $H(X) - R^*$ respectively, which are the corresponding optimal values. In other words, there exists a $U$ that simultaneously achieves the lower bounds on $R$ and $\Delta_1$ and the upper bound on $\Delta_2$. Setting $U^* = \arg\min_{U \text{ admissible}} I(X; U) - I(Y; U)$, the set $\mathcal{S}$ in Theorem 1 can be rewritten as

$$\mathcal{S} = \{ (R, \Delta_1, \Delta_2) : R \ge I(X; U^*) - I(Y; U^*), \ \Delta_1 \ge I(X; U^* \mid Y), \ \Delta_2 \le H(X \mid U^*) + I(Y; U^*) \}.$$

However, as shown in the sequel, the situation is different in the case when $Z \neq \emptyset$.
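Continuing the toy example above (again an illustration of ours, not from the paper): with $X$ uniform on $\{0,1,2,3\}$, $Y$ uniform binary and independent of $X$, and $U = X \bmod 2$, we get $I(X; U) = H(U) = 1$ and $I(Y; U) = 0$, so (7)-(9) give $R \ge 1$, $\Delta_1 \ge 1$ and $\Delta_2 \le H(X \mid U) + 0 = 1$; note $H(X) - R^* = 2 - 1 = 1$, matching the discussion above. The sketch below checks this numerically, reusing `cond_mutual_info` from the helper in Section II.

```python
# Numerical check of the corner point of Theorem 1 for the toy example
# (illustrative only; requires cond_mutual_info from the earlier sketch).
pmf = {}
for x in range(4):
    for y in range(2):
        pmf[(x, y, x % 2)] = 1 / 8  # coordinates: (X, Y, U), U = X mod 2

I_XU = cond_mutual_info(pmf, (0,), (2,))        # I(X; U) = 1.0
I_YU = cond_mutual_info(pmf, (1,), (2,))        # I(Y; U) = 0.0
leak = cond_mutual_info(pmf, (0,), (2,), (1,))  # I(X; U | Y) = 1.0
print(I_XU - I_YU, leak)                        # R* = 1.0, Delta_1 bound = 1.0
```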

In addition, given the PMF $P_{XY}$ and the function $f(X, Y)$, the range of $U$ can be written in an alternative manner by introducing the conditional characteristic graph, as shown in [11]. [11] focuses on characterizing the least message rate and does not take $\Delta_1$ and $\Delta_2$ into consideration. As a special case, when we only care about $R$, the result in Theorem 1 is consistent with the result obtained in [11].

For the case when Eve has side information, i.e., $Z \neq \emptyset$, Eve can use both $Z^n$ and the public discussion to infer the sequence $X^n$; thus, the equivocation of $X^n$ at Eve is reduced. We have the following result in this case.

Theorem 2. The achievable tuple set $\mathcal{S}$ for the case when Eve has side information is given by

$$\mathcal{S} = \Big\{ (R, \Delta_1, \Delta_2) : R \ge I(X; U) - I(Y; U), \qquad (11)$$
$$\Delta_1 \ge I(X; U \mid Y), \qquad (12)$$
$$\Delta_2 \le H(X \mid U, Z) + [I(Y; U \mid V) - I(Z; U \mid V)]^+, \qquad (13)$$
$$\text{for some admissible } U \text{ and a r.v. } V \text{ with } V - U - X - (Y, Z) \Big\}. \qquad (14)$$

Proof: Please see Section V-B.

The existence of the side information $Z^n$ provides more information to Eve about $X^n$. Thus, it is necessary to introduce an additional random variable $V$ that serves as stochastic encoding to confuse Eve. Compared with the result in Theorem 1, there now exists a tradeoff among the tuple $(R, \Delta_1, \Delta_2)$: in general, there does not exist an optimal solution $(U, V)$ that minimizes $R$ and $\Delta_1$ and maximizes $\Delta_2$ simultaneously. The result in Theorem 2 can be further simplified if the source random variables satisfy the Markov chain relationship $X - Y - Z$.

Corollary 1. If $X - Y - Z$ holds, the achievable tuple set $\mathcal{S}$ is given by

$$\mathcal{S} = \Big\{ (R, \Delta_1, \Delta_2) : R \ge I(X; U) - I(Y; U), \qquad (15)$$
$$\Delta_1 \ge I(X; U \mid Y), \qquad (16)$$
$$\Delta_2 \le H(X \mid U, Z) + I(Y; U) - I(Z; U), \qquad (17)$$
$$\text{for some admissible } U \Big\}.$$

Proof. For notational convenience, under the condition that $X - Y - Z$ holds, we rename the region stated in Theorem 2 as $\hat{\mathcal{S}}$ and the region in the corollary as $\mathcal{S}$. On the one hand, $\mathcal{S} \subseteq \hat{\mathcal{S}}$ is trivial, since we can set $V = \emptyset$ to obtain $\mathcal{S}$ from $\hat{\mathcal{S}}$.

On the other hand, we show that $\hat{\mathcal{S}} \subseteq \mathcal{S}$. It suffices to show that

$$H(X \mid U, Z) + [I(Y; U \mid V) - I(Z; U \mid V)]^+ \le H(X \mid U, Z) + I(Y; U) - I(Z; U),$$

which is equivalent to $I(Y; U \mid V) - I(Z; U \mid V) \le I(Y; U) - I(Z; U)$, i.e., $I(Y; V) \ge I(Z; V)$. Indeed, since $V - U - (Y, Z)$, we have $I(Y; U \mid V) = I(Y; U) - I(Y; V)$ and $I(Z; U \mid V) = I(Z; U) - I(Z; V)$, and $I(Y; V) \ge I(Z; V)$ is true due to the data processing inequality applied to the Markov chain $V - U - X - Y - Z$. (When the bracketed term is negative, the desired inequality also holds, since $I(Y; U) \ge I(Z; U)$ by data processing along $U - X - Y - Z$.) Hence, we have $\hat{\mathcal{S}} = \mathcal{S}$, and this completes the proof.

B. Lossy Function Computation

In this section, we focus on the lossy function computation case, i.e., $D > 0$. In this case, the fusion center is not required to recover the value of the function $f$ exactly; it only needs to compute $f$ to within a prefixed distortion level under a given distortion metric. This relaxed requirement allows us to reduce the message rate and the privacy leakage. Given a distortion measure $d$ on the alphabets of $f$ and its reconstruction, we have the following result.

Theorem 3. Given the distortion measure $d$, the achievable tuple set $\mathcal{S}$ for lossy function computation is given by

$$\mathcal{S} = \Big\{ (R, D, \Delta_1, \Delta_2) : R \ge I(X; U) - I(Y; U), \qquad (18)$$
$$D \ge \mathbb{E}[d(f(X, Y), g(U, Y))], \qquad (19)$$
$$\Delta_1 \ge I(X; U \mid Y), \qquad (20)$$
$$\Delta_2 \le H(X \mid U, Z) + [I(Y; U \mid V) - I(Z; U \mid V)]^+, \qquad (21)$$
$$\text{for some function } g \text{ and r.v.'s } U, V \text{ with } V - U - X - (Y, Z) \Big\}. \qquad (22)$$

Proof: Please see Section V-C.

We note that, although a function $g$ appears in the description of the region, the form of $g$ is implicitly determined by the choice of $U$. In particular, for any PMF $P_{XYZUV}$ and function $f$, we can always find an optimal function $g$ as follows:

$$g^*(U, Y) = \arg\min_{g} \mathbb{E}[d(f(X, Y), g(U, Y))].$$
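Numerically, $g^*$ is easy to tabulate. The sketch below (our illustration; it assumes the joint PMF $P_{FUY}$ is available as a dict mapping $(f, u, y)$ to probability) computes the rule that, as worked out next for Hamming distortion, is simply the MAP estimate of $f$ given $(U, Y)$.

```python
# Illustrative sketch of g* under Hamming distortion: for each (u, y), pick the
# f maximizing P_{FUY}(f, u, y). The conditional argmax over P_{F|UY}(f | u, y)
# is identical, since dividing by P_{UY}(u, y) does not change the maximizer.
def optimal_g(p_fuy):
    best = {}  # (u, y) -> (best f, its joint probability)
    for (f, u, y), p in p_fuy.items():
        key = (u, y)
        if key not in best or p > best[key][1]:
            best[key] = (f, p)
    return {key: f for key, (f, _) in best.items()}

# Tiny hypothetical example, for illustration only.
p = {(0, 0, 0): 0.4, (1, 0, 0): 0.1, (1, 1, 0): 0.45, (0, 1, 0): 0.05}
print(optimal_g(p))  # {(0, 0): 0, (1, 0): 1}
```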

Consider the case with Hamming distortion as an example. Here, we regard the function $f$ as a random variable (denoted by $F$) and its value as its realization (denoted by $f$). Then

$$\mathbb{E}[d(F, g(U, Y))] = \sum_{f, u, y} P_{FUY}(f, u, y) \, d(f, g(u, y)) \ge 1 - \sum_{u, y} P_{FUY}(\hat{f}, u, y),$$

where $\hat{f} := \arg\max_{f} P_{F \mid UY}(f \mid u, y)$. Thus, for every $(u, y) \in \mathcal{U} \times \mathcal{Y}$, we can obtain the optimal function $g^*$ as

$$g^*(u, y) := \arg\max_{f} P_{F \mid UY}(f \mid u, y). \qquad (23)$$

When $P_{XYZUV}$ and the function $f$ are given, the PMF $P_{FUY}$ is determined and it is straightforward to find the solution to (23).

Note that, unlike the lossless case, the random variable $U$ here is not required to be admissible w.r.t. $(X, Y)$ and $f$ anymore. As shown in [11], in the lossless case there are many scenarios where the fusion center needs to decode $X^n$ exactly in order to compute $f$. However, when a certain amount of distortion is allowed, there always exists a random variable $U$ other than $X$ such that the decoder only needs to decode the sequence $U^n$. This sequence serves as a distorted mapping of $X^n$, which helps in increasing the equivocation of $X^n$ at Eve and in reducing the privacy leakage to the fusion center.

Comparing the results in Theorems 2 and 3, we observe that the region given in Theorem 3 with $D = 0$ is the same as that in Theorem 2, even though the requirement in the lossless function computation case is stricter than that in the lossy case, i.e., (6) is stricter than setting $D = 0$ in (2). In addition, similar to Corollary 1, we have the following corollary when $X - Y - Z$ holds in the lossy function computation case.

Corollary 2. If $X - Y - Z$ holds, the achievable tuple set $\mathcal{S}$ in the lossy function computation case is given by

$$\mathcal{S} = \Big\{ (R, D, \Delta_1, \Delta_2) : R \ge I(X; U) - I(Y; U), \qquad (24)$$
$$D \ge \mathbb{E}[d(f(X, Y), g(U, Y))], \qquad (25)$$
$$\Delta_1 \ge I(X; U \mid Y), \qquad (26)$$
$$\Delta_2 \le H(X \mid U, Z) + I(Y; U) - I(Z; U), \qquad (27)$$
$$\text{for some function } g \text{ and r.v. } U \text{ with } U - X - Y - Z \Big\}. \qquad (28)$$

The proof follows similar steps as the derivation of Corollary 1 and is thus omitted here.

IV. THE CASE WHEN $X_2 \neq \emptyset$

In this section, we study the case when $X_2 \neq \emptyset$. Despite being much more complicated than the case when $X_2 = \emptyset$, the techniques developed in the previous section can be generalized to this setting. We first consider the lossless function computation case, for which we have both inner and outer bounds on the region of achievable tuples, as follows.

Theorem 4. (Converse) For lossless function computation at the fusion center, if the tuple $(R_1, R_2, \Delta_1, \Delta_2)$ is achievable, then there exist auxiliary random variables $(U_1, V_1)$ and $(U_2, V_2)$, for which $V_1 - U_1 - X_1 - (X_2, Y, Z)$ and $V_2 - U_2 - X_2 - (X_1, Y, Z)$ form Markov chains in the indicated orders, such that

$$R_1 \ge I(V_1; X_1 \mid Y, V_2) + I(U_1; X_1 \mid Y, U_2, V_1) - I(V_1; V_2 \mid Y, X_1) - I(U_1; U_2 \mid X_1, Y, V_1), \qquad (29)$$
$$R_2 \ge I(V_2; X_2 \mid Y, V_1) + I(U_2; X_2 \mid Y, U_1, V_2) - I(V_1; V_2 \mid Y, X_2) - I(U_1; U_2 \mid X_2, Y, V_2), \qquad (30)$$
$$R_1 + R_2 \ge I(V_1; X_1 \mid Y) + I(V_2; X_2 \mid Y, V_1) + I(U_1; X_1 \mid Y, V_1, V_2) + I(U_2; X_2 \mid Y, U_1, V_2), \qquad (31)$$
$$\Delta_1 \ge I(X_1, X_2; U_1, U_2 \mid Y), \qquad (32)$$
$$\Delta_2 \le H(X_1, X_2 \mid U_1, U_2, Z) + [I(U_1, U_2; Y \mid V_1, V_2) - I(U_1, U_2; Z \mid V_1, V_2)]^+, \qquad (33)$$
$$H(f \mid U_1, U_2, Y) = 0. \qquad (34)$$

(Achievability) Furthermore, for random variables $(U_1, V_1)$ and $(U_2, V_2)$ satisfying $P_{U_1V_1U_2V_2X_1X_2YZ} = P_{X_1X_2YZ} P_{U_1 \mid X_1} P_{V_1 \mid U_1} P_{U_2 \mid X_2} P_{V_2 \mid U_2}$ and $H(f \mid U_1, U_2, Y) = 0$, any tuple $(R_1, R_2, \Delta_1, \Delta_2)$ satisfying (29)-(33) is achievable.

Proof: Please see Section V-D.

In general, the converse and achievability bounds do not match, because the range of $(U_1, V_1)$ and $(U_2, V_2)$ defined by the Markov chains $V_1 - U_1 - X_1 - (X_2, Y, Z)$ and $V_2 - U_2 - X_2 - (X_1, Y, Z)$ in the converse is larger than that defined by $P_{U_1V_1U_2V_2X_1X_2YZ} = P_{X_1X_2YZ} P_{U_1 \mid X_1} P_{V_1 \mid U_1} P_{U_2 \mid X_2} P_{V_2 \mid U_2}$ in the achievability part. Note that the subtracted terms on the right-hand sides of (29) and (30) are zero if $P_{U_1V_1U_2V_2X_1X_2YZ} = P_{X_1X_2YZ} P_{U_1 \mid X_1} P_{V_1 \mid U_1} P_{U_2 \mid X_2} P_{V_2 \mid U_2}$, since in this case we have

$$(V_1, U_1) - X_1 - (Y, U_2, V_2), \quad (V_2, U_2) - X_2 - (Y, U_1, V_1).$$

By setting $V_1 = V_2 = \emptyset$, we observe that the achievability result for the message rate region defined by (29)-(31) and (34) recovers the inner bound obtained in [15, Prop. 1]. In addition, it is consistent with a special case of the result obtained in [17, Theorem 2] when the rooted directed tree involves only three nodes: one root and two children.

Given $P_{U_1V_1U_2V_2X_1X_2YZ}$, the main idea of our achievable scheme is that there exist auxiliary sequences $U_1^n, V_1^n$ and $U_2^n, V_2^n$ such that $f(X_1^n, X_2^n, Y^n) = \hat{f}(U_1^n, U_2^n, Y^n)$ if $(R_1, R_2, \Delta_1, \Delta_2)$ is achievable. Thus, the function $f$ will be correctly computed at the fusion center as long as it can correctly decode $(U_1^n, U_2^n)$, and (29)-(31) define the region of $(R_1, R_2)$ such that $(U_1^n, U_2^n)$ can be correctly decoded by some scheme. The sequences $V_1^n, V_2^n$ are used to increase the equivocation of $(X_1^n, X_2^n)$ at Eve.

For scenarios where we need to correctly decode $X_1^n$ and $X_2^n$, i.e., when $f$ is an invertible function with respect to $X_1$ and $X_2$ (so $U_1 = X_1$, $U_2 = X_2$ [15]), and when we only care about the region of $(R_1, R_2)$, we have the following corollary.

Corollary 3. Given $P_{X_1X_2Y}$, the sequences $(X_1^n, X_2^n)$ can be correctly decoded if and only if

$$R_1 \ge H(X_1 \mid Y) - I(X_1; X_2 \mid Y),$$
$$R_2 \ge H(X_2 \mid Y) - I(X_1; X_2 \mid Y),$$
$$R_1 + R_2 \ge H(X_1 \mid Y) + H(X_2 \mid Y) - I(X_1; X_2 \mid Y).$$

Corollary 3 recovers the result in [15, Rate Region - Invertible Function]. In addition, it recovers the distributed source coding problem when $Y = \emptyset$ as well, and the result is consistent with the Slepian-Wolf coding theorem [26, Chap. 15].
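A quick numerical instance of Corollary 3 with $Y = \emptyset$ (our own toy example, for illustration): let $X_1 \sim \mathrm{Bern}(1/2)$ and $X_2 = X_1 \oplus N$ with $N \sim \mathrm{Bern}(p)$. Then $H(X_i \mid Y) = 1$ and $I(X_1; X_2 \mid Y) = 1 - h(p)$, so the corner rates are $R_i \ge h(p)$ and $R_1 + R_2 \ge 1 + h(p)$, the familiar Slepian-Wolf region.

```python
from math import log2

# Illustrative evaluation of Corollary 3 with Y empty for a binary symmetric pair.
def h(p):  # binary entropy in bits
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

p = 0.1
I_12 = 1 - h(p)            # I(X1; X2) for X2 = X1 xor Bern(p)
R1_min = 1 - I_12          # H(X1) - I(X1; X2) = h(p)
Rsum_min = 1 + 1 - I_12    # H(X1) + H(X2) - I(X1; X2) = 1 + h(p)
print(R1_min, Rsum_min)    # ~0.469 and ~1.469 bits
```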

For the lossy function computation case with a given distortion metric $d$, we have the following result regarding the tradeoffs among message rates, information leakage, equivocation and distortion.

Theorem 5. (Achievability) Given a distortion mapping $d$, the tuple $(R_1, R_2, D, \Delta_1, \Delta_2)$ is achievable if

$$R_1 \ge I(V_1; X_1 \mid Y, V_2) + I(U_1; X_1 \mid Y, U_2, V_1) - I(V_1; V_2 \mid Y, X_1) - I(U_1; U_2 \mid X_1, Y, V_1), \qquad (35)$$
$$R_2 \ge I(V_2; X_2 \mid Y, V_1) + I(U_2; X_2 \mid Y, U_1, V_2) - I(V_1; V_2 \mid Y, X_2) - I(U_1; U_2 \mid X_2, Y, V_2), \qquad (36)$$
$$R_1 + R_2 \ge I(V_1; X_1 \mid Y) + I(V_2; X_2 \mid Y, V_1) + I(U_1; X_1 \mid Y, V_1, V_2) + I(U_2; X_2 \mid Y, U_1, V_2), \qquad (37)$$
$$\Delta_1 \ge I(X_1, X_2; U_1, U_2 \mid Y), \qquad (38)$$
$$\Delta_2 \le H(X_1, X_2 \mid U_1, U_2, Z) + [I(U_1, U_2; Y \mid V_1, V_2) - I(U_1, U_2; Z \mid V_1, V_2)]^+, \qquad (39)$$
$$D \ge \mathbb{E}[d(f(X_1, X_2, Y), g(U_1, U_2, Y))], \qquad (40)$$

for some function $g$ and auxiliary random variables $U_1, V_1$ and $U_2, V_2$ with $P_{U_1V_1U_2V_2X_1X_2YZ} = P_{X_1X_2YZ} P_{U_1 \mid X_1} P_{V_1 \mid U_1} P_{U_2 \mid X_2} P_{V_2 \mid U_2}$.

(Converse) If the tuple $(R_1, R_2, D, \Delta_1, \Delta_2)$ is achievable, then there exist some function $g$ and auxiliary random variables $U_1, V_1$ and $U_2, V_2$, for which $V_1 - U_1 - X_1 - (X_2, Y, Z)$ and $V_2 - U_2 - X_2 - (X_1, Y, Z)$ form Markov chains in the indicated orders, such that (35)-(40) hold.

Proof: Please see Section V-E.

Similar to the relationship between Theorems 2 and 3, Theorem 5 recovers Theorem 4 when $D = 0$.

V. PROOFS

In this paper, we use the notion of typicality as defined in [42, Chapter 2], i.e., a sequence $X^n$ is said to be $\epsilon$-typical if

$$|\pi(x \mid X^n) - P_X(x)| \le \epsilon P_X(x), \quad \forall x \in \mathcal{X},$$

where $\pi(x \mid X^n) := |\{i : X_i = x\}| / n$ is the empirical PMF of $X^n$.

We first present the following lemma, which is very useful for the achievability proofs in the sequel.

Lemma 1. Given a typical sequence $x^n$ and an admissible random variable $U$ with joint PMF $P_{UX}$, the sequence $U^n$ is an admissible sequence if it is jointly typical with $x^n$ according to $P_{UX}$.

Proof. Given $X = x$, we only need to consider realizations $y \in \mathcal{Y}$ with $P_{XY}(x, y) > 0$. According to Definition 4, we have

$$\Pr\{f(x, Y) = g(U, Y)\} = 1, \qquad (41)$$

which is equivalent to

$$\sum_{u \in \mathcal{U}} P_{U \mid X}(u \mid x) \Pr\{f(x, Y) = g(u, Y)\} = 1.$$

This means that $\Pr\{f(x, Y) = g(u, Y)\} = 1$ for all $u \in \mathcal{U}$ with $P_{U \mid X}(u \mid x) > 0$. Denote the support of the conditional PMF $P_{U \mid X}(\cdot \mid x)$ by $S_{P_{UX}}(x) := \{u \in \mathcal{U} : P_{U \mid X}(u \mid x) > 0\}$. The joint typicality of $x^n$ and $U^n$ guarantees that the probability of $U_i \notin S_{P_{UX}}(x_i)$ is zero: if $U_i \notin S_{P_{UX}}(x_i)$, then $P_{UX}(U_i, x_i) = P_X(x_i) P_{U \mid X}(U_i \mid x_i) = 0$, and

$$|\pi((U_i, x_i) \mid (U^n, x^n)) - P_{UX}(U_i, x_i)| \le \epsilon P_{UX}(U_i, x_i) \implies \pi((U_i, x_i) \mid (U^n, x^n)) = 0.$$

Thus, we conclude that $U^n$ is admissible.
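The typicality test above is easy to implement directly from the empirical counts; here is an illustrative sketch (ours).

```python
from collections import Counter

# Illustrative implementation of the epsilon-typicality test of Section V:
# |pi(x | x^n) - P_X(x)| <= eps * P_X(x) for every x in the alphabet.
def is_typical(xn, p_x, eps):
    n = len(xn)
    counts = Counter(xn)
    return all(abs(counts.get(x, 0) / n - p) <= eps * p for x, p in p_x.items())

print(is_typical([0, 1, 0, 1, 0, 1, 0, 1], {0: 0.5, 1: 0.5}, 0.1))  # True
print(is_typical([0, 0, 0, 0, 0, 0, 1, 1], {0: 0.5, 1: 0.5}, 0.1))  # False
```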

Now, we provide detailed proofs of the theorems presented in this paper.

A. Proof of Theorem 1

Achievability: In this part, we show that for a given $P_{UXY} = P_{XY} P_{U \mid X}$, in which $U$ is an admissible random variable w.r.t. $X$, $Y$ and $f$, there exists a function computation scheme such that the tuple $(R, \Delta_1, \Delta_2)$ with

$$R = I(X; U) - I(Y; U) + \epsilon, \quad \Delta_1 = I(X; U \mid Y) + \epsilon, \quad \Delta_2 = H(X \mid U) + I(Y; U) - \epsilon$$

is achievable.

1) Codebook ($\mathcal{C}$) construction: Given $P_{XYZ} P_{U \mid X}$, randomly and independently generate $2^{nR_0}$ sequences $U^n$ according to $\prod_{i=1}^{n} P_U(u_i)$, and assign each $U^n$ to one of $2^{nR}$ bins, indexed by $M$, using a uniform distribution. Here, we use $b(M)$ to denote bin $M$, and set $R_0 = I(X; U) + \epsilon$ and $R = I(X; U) - I(Y; U) + 2\epsilon$.

2) Encoding: Upon observing a sequence $X^n$, Alice looks into $\mathcal{C}$, trying to find a $U^n$ that is jointly $P_{UX}$-typical with $X^n$. If there is more than one such $U^n$, she randomly picks one and sends the index $M$ of the bin containing $U^n$ to the fusion center. Otherwise, she declares an error.

3) Decoding: After receiving $M$, the fusion center looks into $b(M)$ to find a unique $\hat{U}^n$ that is jointly $P_{UY}$-typical with $Y^n$. If there is more than one such sequence or no such sequence, it randomly selects a $\hat{U}^n$ as the decoded sequence.

4) Function computation: The fusion center computes the estimate $\hat{f} := \{g(\hat{U}_i, Y_i)\}_{i=1}^{n}$.

5) Error analysis: According to Lemma 1, the fusion center can correctly compute $f$ as long as $U^n$ is jointly typical with $X^n$ and $\hat{U}^n = U^n$. Thus, the error probability is upper bounded by

$$\Pr\{\text{no jointly typical } U^n \text{ is found}\} + \Pr\{\hat{U}^n \neq U^n\}.$$

Since there are $2^{n(I(U;X) + \epsilon)}$ sequences $U^n$ generated in the codebook and there are $2^{n(I(U;Y) - \epsilon)}$ sequences in each bin, we can easily obtain that

$$\Pr\{\text{no jointly typical } U^n \text{ is found}\} \le \epsilon/2, \quad \Pr\{\hat{U}^n \neq U^n\} \le \epsilon/2,$$

following the standard random coding techniques as in [26]. Thus, we have $\Pr\{f \neq \hat{f}\} \le \epsilon$.

6) Privacy leakage: First, we have

$$I(X^n; M \mid \mathcal{C}) = H(M \mid \mathcal{C}) - H(M \mid X^n, \mathcal{C}) \le H(M) = n[I(X; U) - I(Y; U) + 2\epsilon].$$

Thus

$$\frac{1}{n} H(X^n \mid M, \mathcal{C}) = \frac{1}{n} H(X^n \mid \mathcal{C}) - \frac{1}{n} I(X^n; M \mid \mathcal{C}) \ge H(X \mid U) + I(Y; U) - 2\epsilon.$$

Furthermore, it follows that

$$I(X^n; M \mid Y^n, \mathcal{C}) = H(M \mid Y^n, \mathcal{C}) - H(M \mid X^n, Y^n, \mathcal{C}) \le H(M \mid \mathcal{C}) \le n[I(X; U) - I(Y; U) + 2\epsilon] = n[I(X; U \mid Y) + 2\epsilon].$$

Converse: It suffices to prove that for any given achievable tuple $(R, \Delta_1, \Delta_2)$, there exists some admissible $U$ w.r.t. $X$, $Y$ and $f$ such that (7), (8) and (9) hold. First of all, we have

$$nR \ge H(M) - n\epsilon \ge H(M \mid Y^n) - n\epsilon \ge H(M \mid Y^n) - H(M \mid X^n) - n\epsilon = I(M; X^n) - I(M; Y^n) - n\epsilon$$
$$= \sum_{i=1}^{n} [I(M; X_i \mid X^{i-1}, Y_{i+1}^n) - I(M; Y_i \mid X^{i-1}, Y_{i+1}^n)] - n\epsilon$$
$$= \sum_{i=1}^{n} [I(M, X^{i-1}, Y_{i+1}^n; X_i) - I(M, X^{i-1}, Y_{i+1}^n; Y_i)] - n\epsilon$$

$$= \sum_{i=1}^{n} [I(U_i; X_i) - I(U_i; Y_i)] - n\epsilon = n \cdot \frac{1}{n} \sum_{i=1}^{n} [I(U_i; X_i \mid J = i) - I(U_i; Y_i \mid J = i)] - n\epsilon = n[I(U; X) - I(U; Y)] - n\epsilon, \qquad (42)$$

in which $J$ is a random variable uniformly distributed over $[1:n]$, $U_i := (M, X^{i-1}, Y_{i+1}^n)$ and $U := (U_J, J)$. From (42), we also obtain

$$\Delta_2 \le \frac{1}{n} H(X^n \mid M) + \epsilon = \frac{1}{n} (H(X^n) - I(X^n; M)) + \epsilon \le H(X) - [I(U; X) - I(U; Y)] + 2\epsilon = H(X \mid U) + I(U; Y) + 2\epsilon.$$

In addition, we can easily verify that $(M, X^{J-1}, Y_{J+1}^n) - X_J - Y_J$ holds; thus it follows that $U - X - Y$. Furthermore, according to Fano's inequality, we have

$$n\epsilon \ge H(f \mid \hat{f}) \ge H(f^n \mid \hat{f}, M, Y^n) = H(f^n \mid M, Y^n) = \sum_{i=1}^{n} H(f_i \mid f^{i-1}, M, Y^n)$$
$$\ge \sum_{i=1}^{n} H(f_i \mid f^{i-1}, M, Y^n, X^{i-1}) \stackrel{(a)}{=} \sum_{i=1}^{n} H(f_i \mid M, Y^n, X^{i-1})$$
$$\stackrel{(b)}{=} \sum_{i=1}^{n} H(f_i \mid Y_i, M, Y_{i+1}^n, X^{i-1}) = \sum_{i=1}^{n} H(f_i \mid Y_i, U_i) = nH(f \mid Y, U), \qquad (43)$$

where step (a) is true since $f^{i-1}$ is a function of $(X^{i-1}, Y^{i-1})$, and step (b) follows from the Markov chain $f_i - (Y_i, Y_{i+1}^n, X^{i-1}, M) - Y^{i-1}$, which is implied by

$$(X^n, Y_i^n) - X^{i-1} - Y^{i-1} \implies (M, X_i, Y_i^n) - X^{i-1} - Y^{i-1} \stackrel{(a)}{\implies} (X_i, Y_i) - (M, Y_{i+1}^n, X^{i-1}) - Y^{i-1}, \qquad (44)$$

in which (a) is true due to the weak union property of Markov chains [43]. Eq. (43) indicates that this particular choice of $U$ is admissible w.r.t. $X$, $Y$ and $f$. Finally, we have

$$n\Delta_1 \ge I(X^n; M \mid Y^n) - n\epsilon = H(X^n \mid Y^n) - H(X^n \mid M, Y^n) - n\epsilon = nH(X \mid Y) - \sum_{i=1}^{n} H(X_i \mid M, Y^n, X^{i-1}) - n\epsilon$$
$$\stackrel{(a)}{=} nH(X \mid Y) - \sum_{i=1}^{n} H(X_i \mid Y_i, M, Y_{i+1}^n, X^{i-1}) - n\epsilon = nH(X \mid Y) - \sum_{i=1}^{n} H(X_i \mid Y_i, U_i) - n\epsilon$$
$$= nH(X \mid Y) - nH(X \mid Y, U) - n\epsilon = nI(X; U \mid Y) - n\epsilon, \qquad (45)$$

where step (a) follows from the Markov chain defined in (44). As $\epsilon$ is an arbitrarily small number, we conclude that the converse is complete.
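To make the random-binning construction used in these achievability proofs concrete, here is a toy end-to-end sketch (ours, purely illustrative: the rates, sequence generators and brute-force typicality tests are placeholders passed in by the caller, not an optimized implementation; it mirrors steps 1)-3) of the Theorem 1 scheme and the outer layer of the scheme for Theorem 2 below).

```python
import random

# Toy sketch of the random-binning scheme: generate 2^{nR0} codewords U^n
# i.i.d., bin them uniformly into 2^{nR} bins, encode by typicality matching,
# decode by searching the received bin for a unique typical codeword.
def build_codebook(gen_un, num_seqs, num_bins, rng=random):
    return [(gen_un(), rng.randrange(num_bins)) for _ in range(num_seqs)]

def encode(xn, codebook, typical_with_x):
    # Alice: find a U^n jointly typical with X^n; send only its bin index.
    for un, bin_idx in codebook:
        if typical_with_x(un, xn):
            return bin_idx
    return None  # error: no jointly typical codeword found

def decode(bin_idx, yn, codebook, typical_with_y):
    # Fusion center: search the bin for the unique U^n jointly typical with Y^n.
    hits = [un for un, b in codebook if b == bin_idx and typical_with_y(un, yn)]
    return hits[0] if len(hits) == 1 else None  # None models a decoding error
```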

B. Proof of Theorem 2

Achievability: Given the PMF $P_{XYZ} P_{U \mid X} P_{V \mid U}$ with $U$ being admissible, the case when $I(Y; U \mid V) - I(Z; U \mid V) \le 0$ is trivial, since we can use the same scheme as in the achievability proof of Theorem 1, under which $\Delta_2 = H(X \mid U, Z) - \epsilon$ is achievable. Thus, without loss of generality, we assume that $I(Y; U \mid V) - I(Z; U \mid V) > 0$. We will show that the tuple $(R, \Delta_1, \Delta_2)$ with

$$R = I(X; U) - I(Y; U) + 2\epsilon, \quad \Delta_1 = I(X; U \mid Y) + \epsilon, \quad \Delta_2 = H(X \mid U, Z) + [I(Y; U \mid V) - I(Z; U \mid V)] - 2\epsilon$$

is achievable.

1) Codebook ($\mathcal{C}$) construction: Randomly and independently generate $2^{nR_0}$ sequences $V^n$ according to $\prod_{i=1}^{n} P_V(v_i)$, and assign each $V^n$ to one of $2^{nR_1}$ bins, indexed by $M'$, using a uniform distribution; we use $b(M')$ to denote bin $M'$. For each generated sequence $V^n$, randomly and independently generate $2^{nR_2}$ sequences $U^n$ according to $\prod_{i=1}^{n} P_{U \mid V}(u_i \mid v_i)$, and assign each $U^n$ to one of $2^{nR_3}$ bins, indexed by $M''$, using a uniform distribution; we use $b_{V^n}(M'')$ to denote the corresponding bin of sequences $U^n$. In addition, we set

$$R_0 = I(X; V) + \epsilon, \quad R_1 = I(X; V) - I(Y; V) + 2\epsilon, \quad R_2 = I(X; U \mid V) + \epsilon, \quad R_3 = I(X; U \mid V) - I(Y; U \mid V) + 2\epsilon.$$

2) Encoding: Upon observing a sequence $X^n$, Alice looks into the generated codebook, trying to find a $V^n$ that is jointly $P_{VX}$-typical with $X^n$. After selecting $V^n$, Alice looks into the sequences $U^n$ generated from $V^n$, trying to find a $U^n$ that is jointly $P_{UVX}$-typical with $(V^n, X^n)$. During this process, if there is more than one such $V^n$ or $U^n$, she randomly picks one; if there is no such sequence, she declares an error. If Alice finds such $V^n$ and $U^n$, she sends the bin indices $M'$ and $M''$ of $V^n$ and $U^n$ to the fusion center.

3) Decoding: After receiving $(M', M'')$, the fusion center first looks into $b(M')$, trying to find a unique $\hat{V}^n$ that is jointly $P_{VY}$-typical with $Y^n$. Then, it looks into $b_{\hat{V}^n}(M'')$, trying to find a unique $\hat{U}^n$ that is jointly $P_{VUY}$-typical with $(\hat{V}^n, Y^n)$. If there is more than one or no such sequence $\hat{V}^n$ ($\hat{U}^n$), it randomly selects a $\hat{U}^n$ as the decoded sequence.

4) Function computation: The fusion center computes the estimate $\hat{f} := \{g(\hat{U}_i, Y_i)\}_{i=1}^{n}$.

5) Error analysis: Similar to the analysis for the previous scheme, the error probability is upper bounded by

$$\Pr\{\text{no jointly typical } U^n \text{ is found}\} + \Pr\{\hat{U}^n \neq U^n\}.$$

Following the previous analysis, we first obtain that there exists a $V^n$ that is jointly typical with $X^n$ and that it can be correctly decoded. Then, since there are $2^{n(I(X;U \mid V) + \epsilon)}$ sequences $U^n$, the covering lemma guarantees that there exists a $U^n$ generated from $V^n$ that is jointly typical with $X^n$. After that, the packing lemma shows that with high probability there is no other sequence in $b_{V^n}(M'')$ jointly typical with $(V^n, Y^n)$ (thus $U^n$ is correctly decoded), since there are $2^{n(I(Y;U \mid V) - \epsilon)}$ sequences in $b_{V^n}(M'')$.

6) Message rate: The transmitted messages are $(M', M'')$; thus, the rate is

$$[I(X; V) - I(Y; V) + 2\epsilon] + [I(X; U \mid V) - I(Y; U \mid V) + 2\epsilon] = I(X; U) - I(Y; U) + 4\epsilon.$$

7) Privacy leakage: Similar to the proof of Theorem 1, we can obtain

$$I(X^n; M', M'' \mid Y^n, \mathcal{C}) \le nI(X; U \mid Y) + n\epsilon.$$

Now we bound $I(X^n; M', M'', Z^n \mid \mathcal{C})$ as follows:

$$I(X^n; M', M'', Z^n \mid \mathcal{C}) \le I(X^n; V^n, M'', Z^n \mid \mathcal{C}) = H(X^n \mid \mathcal{C}) - H(X^n \mid V^n, M'', Z^n, \mathcal{C})$$
$$= nH(X) - H(X^n, U^n \mid V^n, M'', Z^n, \mathcal{C}) + H(U^n \mid X^n, V^n, M'', Z^n, \mathcal{C})$$
$$\stackrel{(a)}{\le} nH(X) - H(X^n, U^n \mid V^n, M'', Z^n, \mathcal{C}) + n\epsilon$$
$$= nH(X) - H(U^n \mid V^n, Z^n, M'', \mathcal{C}) - H(X^n \mid Z^n, U^n, V^n, M'', \mathcal{C}) + n\epsilon$$
$$= nH(X) - H(U^n \mid V^n, Z^n, M'', \mathcal{C}) - H(X^n \mid Z^n, U^n, V^n, \mathcal{C}) + n\epsilon$$
$$\stackrel{(c)}{\le} nI(X; Z, U) - H(U^n \mid V^n, Z^n, M'', \mathcal{C}) + 2n\epsilon$$
$$= nI(X; Z, U) - H(U^n \mid V^n, Z^n, \mathcal{C}) + I(U^n; M'' \mid V^n, Z^n, \mathcal{C}) + 2n\epsilon,$$

where step (a) is true due to the fact that, given $V^n$ and $M''$, there are $2^{n(I(Y;U \mid V) - \epsilon)}$ sequences $U^n$ in $b_{V^n}(M'')$, and the probability that there exists another $\bar{U}^n$ that is jointly typical with $(X^n, V^n)$ is upper bounded by $2^{-n(I(X;U \mid V) - I(Y;U \mid V))} < \epsilon$; thus, it is easy to see that $H(U^n \mid X^n, V^n, M'', Z^n, \mathcal{C}) \le n\epsilon$. Step (c) follows from Lemma 2 in Appendix A. According to Lemma 3 in Appendix A, we have

$$H(U^n \mid V^n, Z^n, \mathcal{C}) \ge n[I(X; U \mid V) - I(Z; U \mid V) - \epsilon].$$

On the other hand, we have

$$I(U^n; M'' \mid V^n, Z^n, \mathcal{C}) = H(M'' \mid V^n, Z^n, \mathcal{C}) - H(M'' \mid U^n, V^n, Z^n, \mathcal{C}) \le H(M'' \mid \mathcal{C}) = n[I(X; U \mid V) - I(Y; U \mid V) + 2\epsilon].$$

Thus, we have

$$\frac{1}{n} I(X^n; M', M'', Z^n \mid \mathcal{C}) \le I(X; Z, U) - [I(Y; U \mid V) - I(Z; U \mid V)] + 5\epsilon,$$

which indicates that

$$\frac{1}{n} H(X^n \mid M', M'', Z^n, \mathcal{C}) \ge H(X \mid U, Z) + [I(Y; U \mid V) - I(Z; U \mid V)] - 5\epsilon.$$

Hence, the achievability proof is complete.

Converse: Similar to the converse proof of Theorem 1, we only need to show that any achievable tuple $(R, \Delta_1, \Delta_2)$ is contained in $\mathcal{S}$, i.e., there exist some admissible $U$ w.r.t. $X$, $Y$ and $f$, as well as a random variable $V$, such that (11), (12), (13) and (14) hold. As the first step, we have, according to (42), that

$$nR \ge \sum_{i=1}^{n} [I(M, X^{i-1}, Y_{i+1}^n; X_i) - I(M, X^{i-1}, Y_{i+1}^n; Y_i)] - n\epsilon.$$

On the other hand, the following Markov chains hold:

$$X_i - (M, X^{i-1}, Y_{i+1}^n) - Z^{i-1}, \quad Y_i - (M, X^{i-1}, Y_{i+1}^n) - Z^{i-1},$$
$$(Y_i^n, X^n) - X^{i-1} - Z^{i-1}, \quad (M, X_i, Y_i^n) - X^{i-1} - Z^{i-1},$$
$$(X_i, Y_i) - (M, X^{i-1}, Y_{i+1}^n) - Z^{i-1}. \qquad (46)$$

Thus, it follows that

$$nR \ge \sum_{i=1}^{n} [I(M, X^{i-1}, Y_{i+1}^n, Z^{i-1}; X_i) - I(M, X^{i-1}, Y_{i+1}^n, Z^{i-1}; Y_i)] - n\epsilon$$
$$= \sum_{i=1}^{n} [I(U_i; X_i) - I(U_i; Y_i)] - n\epsilon = n[I(U; X) - I(U; Y)] - n\epsilon, \qquad (47)$$

in which $U_i$ and $U$ are defined by $U_i := (M, X^{i-1}, Y_{i+1}^n, Z^{i-1})$ and $U := (U_J, J)$, where $J$ is an independent random variable uniformly distributed over $[1:n]$. We can also verify that $U - X - Y$ holds. Furthermore, following similar steps as in (43), we conclude that

$$n\epsilon \ge \sum_{i=1}^{n} H(f_i \mid Y_i, M, Y_{i+1}^n, X^{i-1}) = \sum_{i=1}^{n} H(f_i \mid Y_i, M, Y_{i+1}^n, X^{i-1}, Z^{i-1}) = \sum_{i=1}^{n} H(f_i \mid Y_i, U_i) = nH(f \mid Y, U), \qquad (48)$$

where the first equality follows from (46). Thus, we can claim that this constructed random variable $U$ is admissible w.r.t. $X$, $Y$ and $f$.

Furthermore, as (46) implies that the Markov chain $X_i - (Y_i, M, X^{i-1}, Y_{i+1}^n) - Z^{i-1}$ holds, we have, following (45), that

$$n\Delta_1 \ge nH(X \mid Y) - \sum_{i=1}^{n} H(X_i \mid Y_i, M, Y_{i+1}^n, X^{i-1}) - n\epsilon = nH(X \mid Y) - \sum_{i=1}^{n} H(X_i \mid Y_i, M, Y_{i+1}^n, X^{i-1}, Z^{i-1}) - n\epsilon$$
$$= nH(X \mid Y) - nH(X \mid Y, U) - n\epsilon = nI(X; U \mid Y) - n\epsilon. \qquad (49)$$

As the final step, we now show (13). It follows that

$$I(X^n; M, Z^n) = I(M; X^n) + I(X^n; Z^n \mid M)$$
$$= [I(M; X^n) - I(M; Y^n)] + [I(M; Y^n) - I(M; Z^n)] + I(M; Z^n) + I(X^n; Z^n \mid M)$$
$$= [I(M; X^n) - I(M; Y^n)] + [I(M; Y^n) - I(M; Z^n)] + I(M, X^n; Z^n)$$
$$= [I(M; X^n) - I(M; Y^n)] + [I(M; Y^n) - I(M; Z^n)] + I(X^n; Z^n)$$
$$= [I(M; X^n) - I(M; Y^n)] + [I(M; Y^n) - I(M; Z^n)] + nI(X; Z). \qquad (50)$$

For the first difference on the right-hand side of (50), we have

$$I(M; X^n) - I(M; Y^n) = \sum_{i=1}^{n} [I(M; X_i \mid X^{i-1}, Y_{i+1}^n) - I(M; Y_i \mid X^{i-1}, Y_{i+1}^n)]$$
$$= \sum_{i=1}^{n} [I(M, X^{i-1}, Y_{i+1}^n; X_i) - I(M, X^{i-1}, Y_{i+1}^n; Y_i)]$$
$$= \sum_{i=1}^{n} [I(M, X^{i-1}, Y_{i+1}^n, Z^{i-1}; X_i) - I(M, X^{i-1}, Y_{i+1}^n, Z^{i-1}; Y_i)] = n[I(U; X) - I(U; Y)], \qquad (51)$$

and for the second difference,

$$I(M; Y^n) - I(M; Z^n) = \sum_{i=1}^{n} [I(M; Y_i \mid Z^{i-1}, Y_{i+1}^n) - I(M; Z_i \mid Z^{i-1}, Y_{i+1}^n)]$$
$$= \sum_{i=1}^{n} [I(M, Z^{i-1}, Y_{i+1}^n; Y_i) - I(M, Z^{i-1}, Y_{i+1}^n; Z_i)]$$
$$= \sum_{i=1}^{n} [I(V_i; Y_i) - I(V_i; Z_i)] = n[I(V; Y) - I(V; Z)], \qquad (52)$$

in which $V_i := (M, Y_{i+1}^n, Z^{i-1})$ and $V := (V_J, J)$, where $J$ is an independent random variable uniformly distributed over $[1:n]$. For this construction of $V$, we can conclude that $V - U - X - (Y, Z)$ is true. Thus, it follows that

$$\frac{1}{n} I(X^n; M, Z^n) = I(U; X) - I(U; Y) + I(V; Y) - I(V; Z) + I(X; Z)$$
$$= I(U; X) - I(U; Y \mid V) - I(V; Z) + I(X; Z)$$
$$= I(U; X) - I(U; Y \mid V) + I(X; Z \mid V)$$
$$= I(U; X) - I(U; Y \mid V) + I(U, X; Z \mid V)$$
$$= I(U; X) - I(U; Y \mid V) + I(U; Z \mid V) + I(X; Z \mid U, V)$$
$$= I(U; X) - I(U; Y \mid V) + I(U; Z \mid V) + I(X; Z \mid U)$$
$$= I(X; U, Z) - I(U; Y \mid V) + I(U; Z \mid V)$$
$$\ge I(X; U, Z) - [I(U; Y \mid V) - I(U; Z \mid V)]^+,$$

which implies that

$$\Delta_2 \le \frac{1}{n} H(X^n \mid M, Z^n) + \epsilon = \frac{1}{n}(H(X^n) - I(X^n; M, Z^n)) + \epsilon$$
$$\le H(X) - I(X; U, Z) + [I(U; Y \mid V) - I(U; Z \mid V)]^+ + \epsilon = H(X \mid U, Z) + [I(U; Y \mid V) - I(U; Z \mid V)]^+ + \epsilon. \qquad (53)$$

Hence, the converse is complete.

C. Proof of Theorem 3

Converse: In this part, we show that any achievable tuple $(R, D, \Delta_1, \Delta_2)$ is contained in the region defined by (18)-(22). First, according to (3), we have

$$nR \ge H(M) - n\epsilon \ge I(M; X^n) - I(M; Y^n) - n\epsilon$$
$$= \sum_{i=1}^{n} [I(M; X_i \mid X^{i-1}, Y_{i+1}^n) - I(M; Y_i \mid X^{i-1}, Y_{i+1}^n)] - n\epsilon$$
$$= \sum_{i=1}^{n} [I(M, X^{i-1}, Y_{i+1}^n; X_i) - I(M, X^{i-1}, Y_{i+1}^n; Y_i)] - n\epsilon$$
$$\stackrel{(a)}{=} \sum_{i=1}^{n} [I(M, X^{i-1}, Y_{i+1}^n, Y^{i-1}, Z^{i-1}; X_i) - I(M, X^{i-1}, Y_{i+1}^n, Y^{i-1}, Z^{i-1}; Y_i)] - n\epsilon$$
$$= \sum_{i=1}^{n} [I(U_i; X_i) - I(U_i; Y_i)] - n\epsilon = n[I(U; X) - I(U; Y)] - n\epsilon, \qquad (54)$$

where step (a) follows from the Markov chain

$$(X_i, Y_i) - (M, X^{i-1}, Y_{i+1}^n) - (Y^{i-1}, Z^{i-1}), \qquad (55)$$

which is implied by the Markov chain $(X^n, Y_i^n) - X^{i-1} - (Y^{i-1}, Z^{i-1})$. Furthermore, $U_i$ is defined as

$$U_i := (M, X^{i-1}, Y_{i+1}^n, Y^{i-1}, Z^{i-1}),$$

and we have $U_i - X_i - (Y_i, Z_i)$, which follows from

$$(X^n, Y^{i-1}, Y_{i+1}^n, Z^{i-1}) - X_i - (Y_i, Z_i) \implies (M, X^{i-1}, Y_{i+1}^n, Y^{i-1}, Z^{i-1}) - X_i - (Y_i, Z_i). \qquad (56)$$

Second, it follows from (2) that

$$D \ge \frac{1}{n} \mathbb{E}\big[d(f(X^n, Y^n), \hat{f}(M, Y^n))\big] - \epsilon = \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}\big[d(f(X_i, Y_i), \hat{f}_i(M, Y^n))\big] - \epsilon$$
$$\stackrel{(a)}{\ge} \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}\big[d(f(X_i, Y_i), g(M, Y^n, X^{i-1}, Z^{i-1}))\big] - \epsilon = \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}\big[d(f(X_i, Y_i), g(U_i, Y_i))\big] - \epsilon$$
$$= \mathbb{E}\big[d(f(X, Y), g(U, Y))\big] - \epsilon,$$

where step (a) holds since $\hat{f}_i(M, Y^n)$ is in general a function of only $M$ and $Y^n$; as more information is provided for each $i \in [1:n]$, there must exist some function, say $g$, that does not increase the distortion. In addition, we have

$$n\Delta_1 \ge I(X^n; M \mid Y^n) - n\epsilon = H(X^n \mid Y^n) - H(X^n \mid M, Y^n) - n\epsilon = nH(X \mid Y) - \sum_{i=1}^{n} H(X_i \mid M, Y^n, X^{i-1}) - n\epsilon$$
$$\stackrel{(a)}{=} nH(X \mid Y) - \sum_{i=1}^{n} H(X_i \mid M, Y^n, X^{i-1}, Z^{i-1}) - n\epsilon = nH(X \mid Y) - \sum_{i=1}^{n} H(X_i \mid Y_i, U_i) - n\epsilon$$
$$= nH(X \mid Y) - nH(X \mid Y, U) - n\epsilon = nI(X; U \mid Y) - n\epsilon, \qquad (57)$$

in which step (a) is true due to the Markov chain $X_i - (M, Y^n, X^{i-1}) - Z^{i-1}$. This Markov chain is implied by (55) due to the decomposition property of Markov chains [43]. As the final step, the derivation is similar to the procedure from (50) to (53).

First, we have

$$I(X^n; M, Z^n) = [I(M; X^n) - I(M; Y^n)] + [I(M; Y^n) - I(M; Z^n)] + nI(X; Z). \qquad (58)$$

Furthermore, it follows from (54) that

$$I(M; X^n) - I(M; Y^n) = n[I(U; X) - I(U; Y)], \qquad (59)$$

while

$$I(M; Y^n) - I(M; Z^n) = \sum_{i=1}^{n} [I(M; Y_i \mid Z^{i-1}, Y_{i+1}^n) - I(M; Z_i \mid Z^{i-1}, Y_{i+1}^n)] = \sum_{i=1}^{n} [I(V_i; Y_i) - I(V_i; Z_i)] = n[I(V; Y) - I(V; Z)], \qquad (60)$$

with $V_i := (M, Y_{i+1}^n, Z^{i-1})$ and $V := (V_J, J)$, where $J$ is an independent random variable uniformly distributed over $[1:n]$. Based on the definitions of $U$ and $V$ stated above and on (56), it is not difficult to obtain the Markov chain $V - U - X - (Y, Z)$. Combining (58)-(60), we have

$$I(X^n; M, Z^n) = n[I(U; X) - I(U; Y) + I(V; Y) - I(V; Z) + I(X; Z)]$$
$$= n[I(X; U, Z) - I(U; Y \mid V) + I(U; Z \mid V)]$$
$$\ge n\{I(X; U, Z) - [I(U; Y \mid V) - I(U; Z \mid V)]^+\}.$$

Finally, we obtain

$$\Delta_2 \le \frac{1}{n} H(X^n \mid M, Z^n) + \epsilon = \frac{1}{n}(H(X^n) - I(X^n; M, Z^n)) + \epsilon$$
$$\le H(X) - I(X; U, Z) + [I(U; Y \mid V) - I(U; Z \mid V)]^+ + \epsilon = H(X \mid U, Z) + [I(U; Y \mid V) - I(U; Z \mid V)]^+ + \epsilon. \qquad (61)$$

Hence, the converse proof is complete.

Achievability: To prove the achievability part of Theorem 3, we use the same scheme as in the proof of Theorem 2. The only difference is the range of the PMF $P_{XYZ} P_{U \mid X} P_{V \mid U}$. In this scheme, $P_{XYZ} P_{U \mid X} P_{V \mid U}$ is given subject to the existence of a function $g$ of $(U, Y)$ achieving $\mathbb{E}[d(f(X, Y), g(U, Y))] \le D + \epsilon$, and this function $g$ is fixed for function computation. Once $P_{XYZ} P_{U \mid X} P_{V \mid U}$ and $g$ are fixed, we can follow the same procedures as in the proof of Theorem 2 to obtain the desired result.

D. Proof of Theorem 4

Converse: In the following, we define

$$U_{1i} := (M_1, X_1^{i-1}, Z^{i-1}, Y_{i+1}^n), \quad V_{1i} := (M_1, Z^{i-1}, Y_{i+1}^n),$$
$$U_{2i} := (M_2, X_2^{i-1}, Z^{i-1}, Y_{i+1}^n), \quad V_{2i} := (M_2, Z^{i-1}, Y_{i+1}^n).$$

Furthermore, define $U_1 := (U_{1J}, J)$ with $J$ being a random variable independent of all other random variables and uniformly distributed over $[1:n]$. Define $V_1$, $U_2$ and $V_2$ in the same manner. We can verify that the Markov chain $V_1 - U_1 - X_1 - (X_2, Y, Z)$ holds, as we have

$$(X_1^n, Z^{i-1}, Y_{i+1}^n) - X_{1i} - (X_{2i}, Y_i, Z_i) \implies (M_1, X_1^{i-1}, Z^{i-1}, Y_{i+1}^n) - X_{1i} - (X_{2i}, Y_i, Z_i).$$

Similarly, we can verify that $V_2 - U_2 - X_2 - (X_1, Y, Z)$ holds. In the following, we show (29)-(34) one by one. First, we have

$$nR_1 \ge H(M_1) - n\epsilon \ge I(M_1; X_1^n) - I(M_1; Y^n) - n\epsilon = \sum_{i=1}^{n} [I(M_1; X_{1i} \mid X_1^{i-1}, Y_{i+1}^n) - I(M_1; Y_i \mid X_1^{i-1}, Y_{i+1}^n)] - n\epsilon$$

$$= \sum_{i=1}^{n} [I(M_1, X_1^{i-1}, Y_{i+1}^n; X_{1i}) - I(M_1, X_1^{i-1}, Y_{i+1}^n; Y_i)] - n\epsilon$$
$$= \sum_{i=1}^{n} [I(M_1, X_1^{i-1}, Z^{i-1}, Y_{i+1}^n; X_{1i}) - I(M_1, X_1^{i-1}, Z^{i-1}, Y_{i+1}^n; Y_i)] - n\epsilon$$
$$= \sum_{i=1}^{n} [I(U_{1i}; X_{1i}) - I(U_{1i}; Y_i)] - n\epsilon = n[I(U_1; X_1) - I(U_1; Y)] - n\epsilon = nI(U_1; X_1 \mid Y) - n\epsilon$$
$$= n[I(V_1; X_1 \mid Y) + I(U_1; X_1 \mid Y, V_1)] - n\epsilon$$
$$= n[I(V_1; X_1, V_2 \mid Y) - I(V_1; V_2 \mid Y, X_1) + I(U_1; X_1, U_2 \mid Y, V_1) - I(U_1; U_2 \mid X_1, Y, V_1)] - n\epsilon$$
$$\ge n[I(V_1; X_1 \mid Y, V_2) + I(U_1; X_1 \mid Y, U_2, V_1) - I(V_1; V_2 \mid Y, X_1) - I(U_1; U_2 \mid X_1, Y, V_1)] - n\epsilon.$$

Thus, we have

$$R_1 \ge I(V_1; X_1 \mid Y, V_2) + I(U_1; X_1 \mid Y, U_2, V_1) - I(V_1; V_2 \mid Y, X_1) - I(U_1; U_2 \mid X_1, Y, V_1) - \epsilon.$$

Similarly, we obtain the bound (30) on $R_2$. In addition, it follows that

$$R_1 + R_2 \ge \frac{1}{n} H(M_1) - \epsilon + \frac{1}{n} H(M_2) - \epsilon \ge \frac{1}{n} H(M_1, M_2) - 2\epsilon \ge \frac{1}{n} [I(M_1, M_2; X_1^n, X_2^n) - I(M_1, M_2; Y^n)] - 2\epsilon$$
$$= \frac{1}{n} \sum_{i=1}^{n} [I(M_1, M_2; X_{1i}, X_{2i} \mid X_1^{i-1}, X_2^{i-1}, Y_{i+1}^n) - I(M_1, M_2; Y_i \mid X_1^{i-1}, X_2^{i-1}, Y_{i+1}^n)] - 2\epsilon$$

$$= \frac{1}{n} \sum_{i=1}^{n} [I(M_1, M_2, X_1^{i-1}, X_2^{i-1}, Y_{i+1}^n; X_{1i}, X_{2i}) - I(M_1, M_2, X_1^{i-1}, X_2^{i-1}, Y_{i+1}^n; Y_i)] - 2\epsilon$$
$$= \frac{1}{n} \sum_{i=1}^{n} [I(M_1, M_2, X_1^{i-1}, X_2^{i-1}, Z^{i-1}, Y_{i+1}^n; X_{1i}, X_{2i}) - I(M_1, M_2, X_1^{i-1}, X_2^{i-1}, Z^{i-1}, Y_{i+1}^n; Y_i)] - 2\epsilon$$
$$= I(U_1, U_2; X_1, X_2) - I(U_1, U_2; Y) - 2\epsilon$$
$$= I(V_1, V_2; X_1, X_2 \mid Y) + I(U_1, U_2; X_1, X_2 \mid Y, V_1, V_2) - 2\epsilon$$
$$= I(V_1; X_1, X_2 \mid Y) + I(V_2; X_1, X_2 \mid Y, V_1) + I(U_1; X_1, X_2 \mid Y, V_1, V_2) + I(U_2; X_1, X_2 \mid Y, U_1, V_2) - 2\epsilon$$
$$\ge I(V_1; X_1 \mid Y) + I(V_2; X_2 \mid Y, V_1) + I(U_1; X_1 \mid Y, V_1, V_2) + I(U_2; X_2 \mid Y, U_1, V_2) - 2\epsilon.$$

Furthermore, we have

$$n\Delta_1 \ge I(X_1^n, X_2^n; M_1, M_2 \mid Y^n) - n\epsilon = H(X_1^n, X_2^n \mid Y^n) - H(X_1^n, X_2^n \mid M_1, M_2, Y^n) - n\epsilon$$
$$= nH(X_1, X_2 \mid Y) - \sum_{i=1}^{n} H(X_{1i}, X_{2i} \mid M_1, M_2, Y^n, X_1^{i-1}, X_2^{i-1}) - n\epsilon$$
$$\stackrel{(a)}{=} nH(X_1, X_2 \mid Y) - \sum_{i=1}^{n} H(X_{1i}, X_{2i} \mid M_1, M_2, Y_i, Y_{i+1}^n, Z^{i-1}, X_1^{i-1}, X_2^{i-1}) - n\epsilon$$
$$= nH(X_1, X_2 \mid Y) - \sum_{i=1}^{n} H(X_{1i}, X_{2i} \mid Y_i, U_{1i}, U_{2i}) - n\epsilon$$
$$= nH(X_1, X_2 \mid Y) - nH(X_1, X_2 \mid Y, U_1, U_2) - n\epsilon = nI(X_1, X_2; U_1, U_2 \mid Y) - n\epsilon, \qquad (62)$$

in which step (a) is due to the following Markov chain:

$$(X_{1i}, X_{2i}, Y_i) - (M_1, M_2, Y_{i+1}^n, X_1^{i-1}, X_2^{i-1}) - (Y^{i-1}, Z^{i-1}).$$

In addition, following similar steps as from (50) to (53), replacing $X$ with $(X_1, X_2)$, $U$ with $(U_1, U_2)$, $V$ with $(V_1, V_2)$ and $M$ with $(M_1, M_2)$, we can obtain

$$\Delta_2 \le H(X_1, X_2 \mid U_1, U_2, Z) + [I(U_1, U_2; Y \mid V_1, V_2) - I(U_1, U_2; Z \mid V_1, V_2)]^+ + \epsilon. \qquad (63)$$

As the last step, it follows that

$$n\epsilon \ge H(f^n \mid M_1, M_2, Y^n) = \sum_{i=1}^{n} H(f_i \mid f^{i-1}, M_1, M_2, Y^n)$$
$$\ge \sum_{i=1}^{n} H(f_i \mid f^{i-1}, M_1, M_2, Y^n, X_1^{i-1}, X_2^{i-1}, Z^{i-1}) = \sum_{i=1}^{n} H(f_i \mid M_1, M_2, Y^n, X_1^{i-1}, X_2^{i-1}, Z^{i-1})$$
$$= \sum_{i=1}^{n} H(f_i \mid M_1, M_2, Y_i^n, X_1^{i-1}, X_2^{i-1}, Z^{i-1}) = \sum_{i=1}^{n} H(f_i \mid Y_i, U_{1i}, U_{2i}) = nH(f \mid Y, U_1, U_2). \qquad (64)$$

Finally, the fact that $\epsilon$ is an arbitrarily small number completes the converse proof.

Achievability: In this part, we show that given any $P_{U_1V_1U_2V_2X_1X_2YZ} = P_{X_1X_2YZ} P_{U_1 \mid X_1} P_{V_1 \mid U_1} P_{U_2 \mid X_2} P_{V_2 \mid U_2}$ with $H(f \mid U_1, U_2, Y) = 0$, any tuple $(R_1, R_2, \Delta_1, \Delta_2)$ satisfying the conditions (29)-(34) is achievable. Since, given $P_{U_1V_1U_2V_2X_1X_2YZ}$, the values of the right-hand sides of (32) and (33) are fixed, it suffices to consider the corner point with

$$R_1 = I(V_1; X_1 \mid Y) + I(U_1; X_1 \mid Y, V_1, V_2), \quad R_2 = I(V_2; X_2 \mid Y, V_1) + I(U_2; X_2 \mid Y, U_1, V_2).$$


ECE Information theory Final (Fall 2008) ECE 776 - Information theory Final (Fall 2008) Q.1. (1 point) Consider the following bursty transmission scheme for a Gaussian channel with noise power N and average power constraint P (i.e., 1/n X n i=1

More information

Distributed Lossless Compression. Distributed lossless compression system

Distributed Lossless Compression. Distributed lossless compression system Lecture #3 Distributed Lossless Compression (Reading: NIT 10.1 10.5, 4.4) Distributed lossless source coding Lossless source coding via random binning Time Sharing Achievability proof of the Slepian Wolf

More information

On the Capacity Region for Secure Index Coding

On the Capacity Region for Secure Index Coding On the Capacity Region for Secure Index Coding Yuxin Liu, Badri N. Vellambi, Young-Han Kim, and Parastoo Sadeghi Research School of Engineering, Australian National University, {yuxin.liu, parastoo.sadeghi}@anu.edu.au

More information

The Capacity Region for Multi-source Multi-sink Network Coding

The Capacity Region for Multi-source Multi-sink Network Coding The Capacity Region for Multi-source Multi-sink Network Coding Xijin Yan Dept. of Electrical Eng. - Systems University of Southern California Los Angeles, CA, U.S.A. xyan@usc.edu Raymond W. Yeung Dept.

More information

EE5585 Data Compression May 2, Lecture 27

EE5585 Data Compression May 2, Lecture 27 EE5585 Data Compression May 2, 2013 Lecture 27 Instructor: Arya Mazumdar Scribe: Fangying Zhang Distributed Data Compression/Source Coding In the previous class we used a H-W table as a simple example,

More information

Shannon s Noisy-Channel Coding Theorem

Shannon s Noisy-Channel Coding Theorem Shannon s Noisy-Channel Coding Theorem Lucas Slot Sebastian Zur February 2015 Abstract In information theory, Shannon s Noisy-Channel Coding Theorem states that it is possible to communicate over a noisy

More information

Universal Incremental Slepian-Wolf Coding

Universal Incremental Slepian-Wolf Coding Proceedings of the 43rd annual Allerton Conference, Monticello, IL, September 2004 Universal Incremental Slepian-Wolf Coding Stark C. Draper University of California, Berkeley Berkeley, CA, 94720 USA sdraper@eecs.berkeley.edu

More information

Lecture 22: Final Review

Lecture 22: Final Review Lecture 22: Final Review Nuts and bolts Fundamental questions and limits Tools Practical algorithms Future topics Dr Yao Xie, ECE587, Information Theory, Duke University Basics Dr Yao Xie, ECE587, Information

More information

6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011

6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011 6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011 On the Structure of Real-Time Encoding and Decoding Functions in a Multiterminal Communication System Ashutosh Nayyar, Student

More information

Interactive Decoding of a Broadcast Message

Interactive Decoding of a Broadcast Message In Proc. Allerton Conf. Commun., Contr., Computing, (Illinois), Oct. 2003 Interactive Decoding of a Broadcast Message Stark C. Draper Brendan J. Frey Frank R. Kschischang University of Toronto Toronto,

More information

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University Chapter 4 Data Transmission and Channel Capacity Po-Ning Chen, Professor Department of Communications Engineering National Chiao Tung University Hsin Chu, Taiwan 30050, R.O.C. Principle of Data Transmission

More information

The Communication Complexity of Correlation. Prahladh Harsha Rahul Jain David McAllester Jaikumar Radhakrishnan

The Communication Complexity of Correlation. Prahladh Harsha Rahul Jain David McAllester Jaikumar Radhakrishnan The Communication Complexity of Correlation Prahladh Harsha Rahul Jain David McAllester Jaikumar Radhakrishnan Transmitting Correlated Variables (X, Y) pair of correlated random variables Transmitting

More information

EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018

EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018 Please submit the solutions on Gradescope. EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018 1. Optimal codeword lengths. Although the codeword lengths of an optimal variable length code

More information

EE 4TM4: Digital Communications II. Channel Capacity

EE 4TM4: Digital Communications II. Channel Capacity EE 4TM4: Digital Communications II 1 Channel Capacity I. CHANNEL CODING THEOREM Definition 1: A rater is said to be achievable if there exists a sequence of(2 nr,n) codes such thatlim n P (n) e (C) = 0.

More information

arxiv: v2 [cs.it] 28 May 2017

arxiv: v2 [cs.it] 28 May 2017 Feedback and Partial Message Side-Information on the Semideterministic Broadcast Channel Annina Bracher and Michèle Wigger arxiv:1508.01880v2 [cs.it] 28 May 2017 May 30, 2017 Abstract The capacity of the

More information

Guess & Check Codes for Deletions, Insertions, and Synchronization

Guess & Check Codes for Deletions, Insertions, and Synchronization Guess & Check Codes for Deletions, Insertions, and Synchronization Serge Kas Hanna, Salim El Rouayheb ECE Department, Rutgers University sergekhanna@rutgersedu, salimelrouayheb@rutgersedu arxiv:759569v3

More information

SOURCE coding problems with side information at the decoder(s)

SOURCE coding problems with side information at the decoder(s) 1458 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 59, NO. 3, MARCH 2013 Heegard Berger Cascade Source Coding Problems With Common Reconstruction Constraints Behzad Ahmadi, Student Member, IEEE, Ravi Ton,

More information

A Graph-based Framework for Transmission of Correlated Sources over Multiple Access Channels

A Graph-based Framework for Transmission of Correlated Sources over Multiple Access Channels A Graph-based Framework for Transmission of Correlated Sources over Multiple Access Channels S. Sandeep Pradhan a, Suhan Choi a and Kannan Ramchandran b, a {pradhanv,suhanc}@eecs.umich.edu, EECS Dept.,

More information

The Method of Types and Its Application to Information Hiding

The Method of Types and Its Application to Information Hiding The Method of Types and Its Application to Information Hiding Pierre Moulin University of Illinois at Urbana-Champaign www.ifp.uiuc.edu/ moulin/talks/eusipco05-slides.pdf EUSIPCO Antalya, September 7,

More information

Shannon s noisy-channel theorem

Shannon s noisy-channel theorem Shannon s noisy-channel theorem Information theory Amon Elders Korteweg de Vries Institute for Mathematics University of Amsterdam. Tuesday, 26th of Januari Amon Elders (Korteweg de Vries Institute for

More information

Multiaccess Channels with State Known to One Encoder: A Case of Degraded Message Sets

Multiaccess Channels with State Known to One Encoder: A Case of Degraded Message Sets Multiaccess Channels with State Known to One Encoder: A Case of Degraded Message Sets Shivaprasad Kotagiri and J. Nicholas Laneman Department of Electrical Engineering University of Notre Dame Notre Dame,

More information

Amobile satellite communication system, like Motorola s

Amobile satellite communication system, like Motorola s I TRANSACTIONS ON INFORMATION THORY, VOL. 45, NO. 4, MAY 1999 1111 Distributed Source Coding for Satellite Communications Raymond W. Yeung, Senior Member, I, Zhen Zhang, Senior Member, I Abstract Inspired

More information

Lecture 5 Channel Coding over Continuous Channels

Lecture 5 Channel Coding over Continuous Channels Lecture 5 Channel Coding over Continuous Channels I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw November 14, 2014 1 / 34 I-Hsiang Wang NIT Lecture 5 From

More information

Reverse Edge Cut-Set Bounds for Secure Network Coding

Reverse Edge Cut-Set Bounds for Secure Network Coding Reverse Edge Cut-Set Bounds for Secure Network Coding Wentao Huang and Tracey Ho California Institute of Technology Michael Langberg University at Buffalo, SUNY Joerg Kliewer New Jersey Institute of Technology

More information

Variable Length Codes for Degraded Broadcast Channels

Variable Length Codes for Degraded Broadcast Channels Variable Length Codes for Degraded Broadcast Channels Stéphane Musy School of Computer and Communication Sciences, EPFL CH-1015 Lausanne, Switzerland Email: stephane.musy@ep.ch Abstract This paper investigates

More information

4F5: Advanced Communications and Coding Handout 2: The Typical Set, Compression, Mutual Information

4F5: Advanced Communications and Coding Handout 2: The Typical Set, Compression, Mutual Information 4F5: Advanced Communications and Coding Handout 2: The Typical Set, Compression, Mutual Information Ramji Venkataramanan Signal Processing and Communications Lab Department of Engineering ramji.v@eng.cam.ac.uk

More information

The Sensor Reachback Problem

The Sensor Reachback Problem Submitted to the IEEE Trans. on Information Theory, November 2003. 1 The Sensor Reachback Problem João Barros Sergio D. Servetto Abstract We consider the problem of reachback communication in sensor networks.

More information

Lecture 10: Broadcast Channel and Superposition Coding

Lecture 10: Broadcast Channel and Superposition Coding Lecture 10: Broadcast Channel and Superposition Coding Scribed by: Zhe Yao 1 Broadcast channel M 0M 1M P{y 1 y x} M M 01 1 M M 0 The capacity of the broadcast channel depends only on the marginal conditional

More information

Information Leakage of Correlated Source Coded Sequences over a Channel with an Eavesdropper

Information Leakage of Correlated Source Coded Sequences over a Channel with an Eavesdropper Information Leakage of Correlated Source Coded Sequences over a Channel with an Eavesdropper Reevana Balmahoon and Ling Cheng School of Electrical and Information Engineering University of the Witwatersrand

More information

The Gallager Converse

The Gallager Converse The Gallager Converse Abbas El Gamal Director, Information Systems Laboratory Department of Electrical Engineering Stanford University Gallager s 75th Birthday 1 Information Theoretic Limits Establishing

More information

Lecture 1: The Multiple Access Channel. Copyright G. Caire 12

Lecture 1: The Multiple Access Channel. Copyright G. Caire 12 Lecture 1: The Multiple Access Channel Copyright G. Caire 12 Outline Two-user MAC. The Gaussian case. The K-user case. Polymatroid structure and resource allocation problems. Copyright G. Caire 13 Two-user

More information

Wyner-Ziv Coding over Broadcast Channels: Digital Schemes

Wyner-Ziv Coding over Broadcast Channels: Digital Schemes Wyner-Ziv Coding over Broadcast Channels: Digital Schemes Jayanth Nayak, Ertem Tuncel, Deniz Gündüz 1 Abstract This paper addresses lossy transmission of a common source over a broadcast channel when there

More information

Lecture 8: Channel and source-channel coding theorems; BEC & linear codes. 1 Intuitive justification for upper bound on channel capacity

Lecture 8: Channel and source-channel coding theorems; BEC & linear codes. 1 Intuitive justification for upper bound on channel capacity 5-859: Information Theory and Applications in TCS CMU: Spring 23 Lecture 8: Channel and source-channel coding theorems; BEC & linear codes February 7, 23 Lecturer: Venkatesan Guruswami Scribe: Dan Stahlke

More information

List of Figures. Acknowledgements. Abstract 1. 1 Introduction 2. 2 Preliminaries Superposition Coding Block Markov Encoding...

List of Figures. Acknowledgements. Abstract 1. 1 Introduction 2. 2 Preliminaries Superposition Coding Block Markov Encoding... Contents Contents i List of Figures iv Acknowledgements vi Abstract 1 1 Introduction 2 2 Preliminaries 7 2.1 Superposition Coding........................... 7 2.2 Block Markov Encoding.........................

More information

On the Capacity and Degrees of Freedom Regions of MIMO Interference Channels with Limited Receiver Cooperation

On the Capacity and Degrees of Freedom Regions of MIMO Interference Channels with Limited Receiver Cooperation On the Capacity and Degrees of Freedom Regions of MIMO Interference Channels with Limited Receiver Cooperation Mehdi Ashraphijuo, Vaneet Aggarwal and Xiaodong Wang 1 arxiv:1308.3310v1 [cs.it] 15 Aug 2013

More information

arxiv: v1 [cs.it] 5 Feb 2016

arxiv: v1 [cs.it] 5 Feb 2016 An Achievable Rate-Distortion Region for Multiple Descriptions Source Coding Based on Coset Codes Farhad Shirani and S. Sandeep Pradhan Dept. of Electrical Engineering and Computer Science Univ. of Michigan,

More information

IN this paper, we show that the scalar Gaussian multiple-access

IN this paper, we show that the scalar Gaussian multiple-access 768 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 50, NO. 5, MAY 2004 On the Duality of Gaussian Multiple-Access and Broadcast Channels Nihar Jindal, Student Member, IEEE, Sriram Vishwanath, and Andrea

More information

Lecture 11: Quantum Information III - Source Coding

Lecture 11: Quantum Information III - Source Coding CSCI5370 Quantum Computing November 25, 203 Lecture : Quantum Information III - Source Coding Lecturer: Shengyu Zhang Scribe: Hing Yin Tsang. Holevo s bound Suppose Alice has an information source X that

More information

CS Communication Complexity: Applications and New Directions

CS Communication Complexity: Applications and New Directions CS 2429 - Communication Complexity: Applications and New Directions Lecturer: Toniann Pitassi 1 Introduction In this course we will define the basic two-party model of communication, as introduced in the

More information

Distributed Lossy Interactive Function Computation

Distributed Lossy Interactive Function Computation Distributed Lossy Interactive Function Computation Solmaz Torabi & John MacLaren Walsh Dept. of Electrical and Computer Engineering Drexel University Philadelphia, PA 19104 Email: solmaz.t@drexel.edu &

More information

Lossy Distributed Source Coding

Lossy Distributed Source Coding Lossy Distributed Source Coding John MacLaren Walsh, Ph.D. Multiterminal Information Theory, Spring Quarter, 202 Lossy Distributed Source Coding Problem X X 2 S {,...,2 R } S 2 {,...,2 R2 } Ẑ Ẑ 2 E d(z,n,

More information

Homework Set #2 Data Compression, Huffman code and AEP

Homework Set #2 Data Compression, Huffman code and AEP Homework Set #2 Data Compression, Huffman code and AEP 1. Huffman coding. Consider the random variable ( x1 x X = 2 x 3 x 4 x 5 x 6 x 7 0.50 0.26 0.11 0.04 0.04 0.03 0.02 (a Find a binary Huffman code

More information

Performance-based Security for Encoding of Information Signals. FA ( ) Paul Cuff (Princeton University)

Performance-based Security for Encoding of Information Signals. FA ( ) Paul Cuff (Princeton University) Performance-based Security for Encoding of Information Signals FA9550-15-1-0180 (2015-2018) Paul Cuff (Princeton University) Contributors Two students finished PhD Tiance Wang (Goldman Sachs) Eva Song

More information

PERFECTLY secure key agreement has been studied recently

PERFECTLY secure key agreement has been studied recently IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 45, NO. 2, MARCH 1999 499 Unconditionally Secure Key Agreement the Intrinsic Conditional Information Ueli M. Maurer, Senior Member, IEEE, Stefan Wolf Abstract

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 62, NO. 5, MAY

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 62, NO. 5, MAY IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 6, NO. 5, MAY 06 85 Duality of a Source Coding Problem and the Semi-Deterministic Broadcast Channel With Rate-Limited Cooperation Ziv Goldfeld, Student Member,

More information

PERFECT SECRECY AND ADVERSARIAL INDISTINGUISHABILITY

PERFECT SECRECY AND ADVERSARIAL INDISTINGUISHABILITY PERFECT SECRECY AND ADVERSARIAL INDISTINGUISHABILITY BURTON ROSENBERG UNIVERSITY OF MIAMI Contents 1. Perfect Secrecy 1 1.1. A Perfectly Secret Cipher 2 1.2. Odds Ratio and Bias 3 1.3. Conditions for Perfect

More information

Source-Channel Coding Theorems for the Multiple-Access Relay Channel

Source-Channel Coding Theorems for the Multiple-Access Relay Channel Source-Channel Coding Theorems for the Multiple-Access Relay Channel Yonathan Murin, Ron Dabora, and Deniz Gündüz Abstract We study reliable transmission of arbitrarily correlated sources over multiple-access

More information

Capacity bounds for multiple access-cognitive interference channel

Capacity bounds for multiple access-cognitive interference channel Mirmohseni et al. EURASIP Journal on Wireless Communications and Networking, :5 http://jwcn.eurasipjournals.com/content///5 RESEARCH Open Access Capacity bounds for multiple access-cognitive interference

More information

Lecture 20: Lower Bounds for Inner Product & Indexing

Lecture 20: Lower Bounds for Inner Product & Indexing 15-859: Information Theory and Applications in TCS CMU: Spring 201 Lecture 20: Lower Bounds for Inner Product & Indexing April 9, 201 Lecturer: Venkatesan Guruswami Scribe: Albert Gu 1 Recap Last class

More information

Coding for Computing. ASPITRG, Drexel University. Jie Ren 2012/11/14

Coding for Computing. ASPITRG, Drexel University. Jie Ren 2012/11/14 Coding for Computing ASPITRG, Drexel University Jie Ren 2012/11/14 Outline Background and definitions Main Results Examples and Analysis Proofs Background and definitions Problem Setup Graph Entropy Conditional

More information

SOURCE CODING WITH SIDE INFORMATION AT THE DECODER (WYNER-ZIV CODING) FEB 13, 2003

SOURCE CODING WITH SIDE INFORMATION AT THE DECODER (WYNER-ZIV CODING) FEB 13, 2003 SOURCE CODING WITH SIDE INFORMATION AT THE DECODER (WYNER-ZIV CODING) FEB 13, 2003 SLEPIAN-WOLF RESULT { X i} RATE R x ENCODER 1 DECODER X i V i {, } { V i} ENCODER 0 RATE R v Problem: Determine R, the

More information

Shannon s Noisy-Channel Coding Theorem

Shannon s Noisy-Channel Coding Theorem Shannon s Noisy-Channel Coding Theorem Lucas Slot Sebastian Zur February 13, 2015 Lucas Slot, Sebastian Zur Shannon s Noisy-Channel Coding Theorem February 13, 2015 1 / 29 Outline 1 Definitions and Terminology

More information

Simultaneous Nonunique Decoding Is Rate-Optimal

Simultaneous Nonunique Decoding Is Rate-Optimal Fiftieth Annual Allerton Conference Allerton House, UIUC, Illinois, USA October 1-5, 2012 Simultaneous Nonunique Decoding Is Rate-Optimal Bernd Bandemer University of California, San Diego La Jolla, CA

More information

On Achievability for Downlink Cloud Radio Access Networks with Base Station Cooperation

On Achievability for Downlink Cloud Radio Access Networks with Base Station Cooperation On Achievability for Downlink Cloud Radio Access Networks with Base Station Cooperation Chien-Yi Wang, Michèle Wigger, and Abdellatif Zaidi 1 Abstract arxiv:1610.09407v1 [cs.it] 8 Oct 016 This work investigates

More information

Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information

Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information 204 IEEE International Symposium on Information Theory Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information Omur Ozel, Kaya Tutuncuoglu 2, Sennur Ulukus, and Aylin Yener

More information

Secret Message Capacity of Erasure Broadcast Channels with Feedback

Secret Message Capacity of Erasure Broadcast Channels with Feedback Secret Message Capacity of Erasure Broadcast Channels with Feedback László Czap Vinod M. Prabhakaran Christina Fragouli École Polytechnique Fédérale de Lausanne, Switzerland Email: laszlo.czap vinod.prabhakaran

More information

Lattices for Distributed Source Coding: Jointly Gaussian Sources and Reconstruction of a Linear Function

Lattices for Distributed Source Coding: Jointly Gaussian Sources and Reconstruction of a Linear Function Lattices for Distributed Source Coding: Jointly Gaussian Sources and Reconstruction of a Linear Function Dinesh Krithivasan and S. Sandeep Pradhan Department of Electrical Engineering and Computer Science,

More information

On Scalable Source Coding for Multiple Decoders with Side Information

On Scalable Source Coding for Multiple Decoders with Side Information On Scalable Source Coding for Multiple Decoders with Side Information Chao Tian School of Computer and Communication Sciences Laboratory for Information and Communication Systems (LICOS), EPFL, Lausanne,

More information

Group Secret Key Agreement over State-Dependent Wireless Broadcast Channels

Group Secret Key Agreement over State-Dependent Wireless Broadcast Channels Group Secret Key Agreement over State-Dependent Wireless Broadcast Channels Mahdi Jafari Siavoshani Sharif University of Technology, Iran Shaunak Mishra, Suhas Diggavi, Christina Fragouli Institute of

More information

2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media,

2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising

More information

A Formula for the Capacity of the General Gel fand-pinsker Channel

A Formula for the Capacity of the General Gel fand-pinsker Channel A Formula for the Capacity of the General Gel fand-pinsker Channel Vincent Y. F. Tan Institute for Infocomm Research (I 2 R, A*STAR, Email: tanyfv@i2r.a-star.edu.sg ECE Dept., National University of Singapore

More information

Solutions to Homework Set #2 Broadcast channel, degraded message set, Csiszar Sum Equality

Solutions to Homework Set #2 Broadcast channel, degraded message set, Csiszar Sum Equality 1st Semester 2010/11 Solutions to Homework Set #2 Broadcast channel, degraded message set, Csiszar Sum Equality 1. Convexity of capacity region of broadcast channel. Let C R 2 be the capacity region of

More information

Ahmed S. Mansour, Rafael F. Schaefer and Holger Boche. June 9, 2015

Ahmed S. Mansour, Rafael F. Schaefer and Holger Boche. June 9, 2015 The Individual Secrecy Capacity of Degraded Multi-Receiver Wiretap Broadcast Channels Ahmed S. Mansour, Rafael F. Schaefer and Holger Boche Lehrstuhl für Technische Universität München, Germany Department

More information

Reliable Computation over Multiple-Access Channels

Reliable Computation over Multiple-Access Channels Reliable Computation over Multiple-Access Channels Bobak Nazer and Michael Gastpar Dept. of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA, 94720-1770 {bobak,

More information

Distributed Detection With Vector Quantizer Wenwen Zhao, Student Member, IEEE, and Lifeng Lai, Member, IEEE

Distributed Detection With Vector Quantizer Wenwen Zhao, Student Member, IEEE, and Lifeng Lai, Member, IEEE IEEE TRANSACTIONS ON SIGNAL AND INFORMATION PROCESSING OVER NETWORKS, VOL 2, NO 2, JUNE 206 05 Distributed Detection With Vector Quantizer Wenwen Zhao, Student Member, IEEE, and Lifeng Lai, Member, IEEE

More information

Cases Where Finding the Minimum Entropy Coloring of a Characteristic Graph is a Polynomial Time Problem

Cases Where Finding the Minimum Entropy Coloring of a Characteristic Graph is a Polynomial Time Problem Cases Where Finding the Minimum Entropy Coloring of a Characteristic Graph is a Polynomial Time Problem Soheil Feizi, Muriel Médard RLE at MIT Emails: {sfeizi,medard}@mit.edu Abstract In this paper, we

More information

3F1: Signals and Systems INFORMATION THEORY Examples Paper Solutions

3F1: Signals and Systems INFORMATION THEORY Examples Paper Solutions Engineering Tripos Part IIA THIRD YEAR 3F: Signals and Systems INFORMATION THEORY Examples Paper Solutions. Let the joint probability mass function of two binary random variables X and Y be given in the

More information

Information Theoretic Security and Privacy of Information Systems. Edited by Holger Boche, Ashish Khisti, H. Vincent Poor, and Rafael F.

Information Theoretic Security and Privacy of Information Systems. Edited by Holger Boche, Ashish Khisti, H. Vincent Poor, and Rafael F. Information Theoretic Security and Privacy of Information Systems Edited by Holger Boche, Ashish Khisti, H. Vincent Poor, and Rafael F. Schaefer 1 Secure source coding Paul Cuff and Curt Schieler Abstract

More information

Distributed Source Coding Using LDPC Codes

Distributed Source Coding Using LDPC Codes Distributed Source Coding Using LDPC Codes Telecommunications Laboratory Alex Balatsoukas-Stimming Technical University of Crete May 29, 2010 Telecommunications Laboratory (TUC) Distributed Source Coding

More information

Representation of Correlated Sources into Graphs for Transmission over Broadcast Channels

Representation of Correlated Sources into Graphs for Transmission over Broadcast Channels Representation of Correlated s into Graphs for Transmission over Broadcast s Suhan Choi Department of Electrical Eng. and Computer Science University of Michigan, Ann Arbor, MI 80, USA Email: suhanc@eecs.umich.edu

More information

Optimality of Walrand-Varaiya Type Policies and. Approximation Results for Zero-Delay Coding of. Markov Sources. Richard G. Wood

Optimality of Walrand-Varaiya Type Policies and. Approximation Results for Zero-Delay Coding of. Markov Sources. Richard G. Wood Optimality of Walrand-Varaiya Type Policies and Approximation Results for Zero-Delay Coding of Markov Sources by Richard G. Wood A thesis submitted to the Department of Mathematics & Statistics in conformity

More information

An introduction to basic information theory. Hampus Wessman

An introduction to basic information theory. Hampus Wessman An introduction to basic information theory Hampus Wessman Abstract We give a short and simple introduction to basic information theory, by stripping away all the non-essentials. Theoretical bounds on

More information