Information Leakage of Correlated Source Coded Sequences over a Channel with an Eavesdropper


Information Leakage of Correlated Source Coded Sequences over a Channel with an Eavesdropper

Reevana Balmahoon and Ling Cheng
School of Electrical and Information Engineering, University of the Witwatersrand, Private Bag 3, Wits 2050, Johannesburg, South Africa
reevana.balmahoon@students.wits.ac.za, ling.cheng@wits.ac.za

arXiv (v2, [cs.IT], 2 Oct 2014)

Abstract - A new generalised approach for multiple correlated sources over a wiretap network is investigated. A basic model consisting of two correlated sources, where each produces a component of the common information, is investigated first. Several cases of wiretapped syndromes on the transmission links are considered, and based on these cases a new quantity, the information leakage at the source(s), is determined. An interesting feature of the models described in this paper is this quantification of the information leakage. Shannon's cipher system with eavesdroppers is then incorporated into the two correlated sources model in order to minimise key lengths. Both aspects, quantifying information leakage and reducing key lengths using Shannon's cipher system, are also considered for a multiple correlated source network. A new scheme that reduces the key lengths by masking with combinations of common information is presented and applied to the generalised model for multiple sources.

I. INTRODUCTION

Keeping information secure has become a major concern with the advancement of technology. In this work, the information-theoretic aspect of security is analysed, as entropies are used to measure security. The system also incorporates some traditional ideas from cryptography, namely Shannon's cipher system and adversarial attackers in the form of eavesdroppers. In cryptographic systems there is usually a message in plaintext that needs to be sent to a receiver. In order to secure it, the plaintext is encrypted so as to prevent eavesdroppers from reading its contents, and the resulting ciphertext is transmitted to the receiver. Shannon's cipher system (mentioned by Yamamoto [1]) incorporates this idea. The definition of Shannon's cipher system has been discussed by Hanawal and Sundaresan [2]. In Yamamoto's [1] development of this model, a correlated source approach is introduced. This gives an interesting view of the problem, and is depicted in Figure 1.

Correlated source coding incorporates the lossless compression of two or more correlated data streams. Correlated sources can decrease the bandwidth required to transmit and receive messages because a syndrome (a compressed form of the original message) is sent across the communication links instead of the original message. A compressed message carries more information per bit, and therefore has a higher entropy, because the transmitted information is more unpredictable. The unpredictability of the compressed message is also beneficial in terms of securing the information.

Figure 1. Yamamoto's development of the Shannon cipher system: a source emitting (X, Y), a key generator producing W_k, an encoder producing the codeword W, a wiretapper observing W, and a decoder producing X̂^K and Ŷ^K.

The source sends information for the correlated sources X and Y along the main transmission channel. A key W_k is produced and used by the encoder when producing the ciphertext. The wiretapper has access to the transmitted codeword W. The decoded codewords are represented by X̂^K and Ŷ^K. In Yamamoto's scheme the security level was also considered and found to be (1/K)H(X^K, Y^K | W) (i.e. the joint entropy of X^K and Y^K given W, where K is the length of X^K and Y^K) when X and Y have equal importance, which is in accordance with traditional Shannon systems where security is measured by the equivocation. When one source is more important than the other, the security level is measured by the pair of individual uncertainties ((1/K)H(X^K | W), (1/K)H(Y^K | W)).

In practical communication systems links are prone to eavesdropping, and as such this work incorporates wiretapped channels, i.e. channels on which an eavesdropper is present. Specific kinds of wiretapped channels have been developed. The mathematical model for this wiretap channel is given by Rouayheb et al. [3], and can be explained as follows: the channel between a transmitter and receiver is error-free and can transmit n symbols Y = (y_1, ..., y_n), of which µ bits can be observed by the eavesdropper, and the maximum secure rate can be shown to equal n − µ bits. The security aspect of wiretap networks has been examined in various ways by Cheng et al. [4] and Cai and Yeung [5], emphasising that securing these types of channels is of concern. Villard and Piantanida [6] also look at correlated sources and wiretap networks: a source sends information to the receiver, and an eavesdropper has access to information correlated to the source, which is used as side information. There is a second encoder that sends a compressed version of its own correlated observation of the source privately to the receiver. Here, the authors show that the use of correlation decreases
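As a small, self-contained illustration of the cipher-system idea described above (and not part of the authors' construction), the following Python sketch encrypts a binary plaintext block with an independent, uniformly random key of the same length. The legitimate receiver, who holds the key, recovers the plaintext exactly, while for the wiretapper the equivocation H(X^K | W) equals H(X^K), which is the perfect-secrecy situation mentioned again in Section IV. The block length and bit representation are arbitrary choices for the example.

import secrets

K = 16  # block length in bits (an arbitrary choice for this illustration)

def xor_bits(a, b):
    # Bitwise XOR of two equal-length bit lists (the one-time-pad operation).
    return [x ^ y for x, y in zip(a, b)]

# Plaintext block X^K and an independent uniform key W_k of the same length.
plaintext = [secrets.randbelow(2) for _ in range(K)]
key = [secrets.randbelow(2) for _ in range(K)]

# Encoder: ciphertext W = X^K XOR W_k.  Decoder: X^K = W XOR W_k.
ciphertext = xor_bits(plaintext, key)
decoded = xor_bits(ciphertext, key)

assert decoded == plaintext  # the legitimate receiver recovers the plaintext
# With a uniform full-length key, every ciphertext is equally likely for every
# plaintext, so H(X^K | W) = H(X^K): the wiretapper learns nothing from W alone.
print("ciphertext:", ciphertext)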

the required communication rate and increases secrecy. Villard et al. [7] explore this side information concept further, where security using side information at the receiver and eavesdropper is investigated. Side information is generally used to assist the decoder in determining the transmitted message. An earlier work involving side information is that by Yang et al. [8]. The concept can be considered generalised in that the side information could itself represent a source. It is an interesting problem when one source is more important than the other, and Hayashi and Yamamoto [9] consider it in another scheme, where only X is kept secure against wiretappers and Y must be transmitted to a legitimate receiver. They develop a security criterion based on the number of correct guesses a wiretapper needs to retrieve a message. In an extension of the Shannon cipher system, Yamamoto [10] investigated the secret sharing communication system.

In this work, we generalise a model for correlated sources across a channel with an eavesdropper, and the security aspect is explored by quantifying the information leakage and by reducing the key lengths when Shannon's cipher system is incorporated. This paper initially describes a two correlated source model across wiretapped links, which is detailed in Section II. In Section III, the information leakage is investigated and proven for this two correlated source model; the information leakage is quantified as the equivocation subtracted from the total obtained uncertainty. In Section IV the two correlated sources model is considered under Shannon's cipher system. The notation contained in the tables will be clarified in the following sections. The proofs for the Shannon cipher system aspect are detailed in Section V. Section VI details the extension of the two correlated source model, where multiple correlated sources in a network scenario are investigated. There are two subsections here: one quantifying information leakage for the Slepian-Wolf scenario, and the other incorporating Shannon's cipher system, where key lengths are minimised and a masking method to save on keys is presented. Section VII explains how the models detailed in this paper generalise Yamamoto's [1] model, and further offers comparison to other models. The future work for this research is detailed in Section VIII and the paper is concluded in Section IX.

II. MODEL

The independent, identically distributed (i.i.d.) sources X and Y are mutually correlated random variables, depicted in Figure 2. The alphabet sets for sources X and Y are represented by X and Y respectively. Assume that (X^K, Y^K) are encoded into two syndromes, T_X and T_Y, which we can write as T_X = (V_X, V_CX) and T_Y = (V_Y, V_CY); that is, T_X and T_Y are characterised by (V_X, V_CX) and (V_Y, V_CY) respectively. The Venn diagram in Figure 3 illustrates this idea: V_X and V_Y represent the private information of sources X and Y respectively, and V_CX and V_CY represent the common information between X^K and Y^K generated by X^K and by Y^K respectively. The correlated sources X and Y transmit messages (in the form of syndromes) to the receiver along wiretapped links.

Figure 2. Correlated source coding for two sources: encoders map X^K and Y^K to T_X and T_Y, and a joint decoder outputs X̂^K, Ŷ^K.

Figure 3. The relation between private and common information: T_X = (V_X, V_CX) and T_Y = (V_Y, V_CY).

The decoder determines X^K and Y^K only after receiving all of T_X and T_Y.
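To make the sizes of the private and common portions concrete, the sketch below (a toy Python example with an arbitrary joint distribution that is not taken from the paper) computes H(X), H(Y), H(X|Y), H(Y|X) and I(X;Y) for two correlated binary sources. Under the model above, the private portions V_X and V_Y account for roughly H(X|Y) and H(Y|X) bits per symbol, while the two common portions V_CX and V_CY together account for roughly I(X;Y) bits per symbol, as formalised in Theorem 1 below.

import math

# Toy joint pmf p(x, y) for binary X and Y (arbitrary example values).
p = {(0, 0): 0.40, (0, 1): 0.10,
     (1, 0): 0.05, (1, 1): 0.45}

def H(dist):
    # Entropy in bits of a pmf given as an iterable of probabilities.
    return -sum(q * math.log2(q) for q in dist if q > 0)

px = {x: p[(x, 0)] + p[(x, 1)] for x in (0, 1)}
py = {y: p[(0, y)] + p[(1, y)] for y in (0, 1)}

H_XY = H(p.values())
H_X, H_Y = H(px.values()), H(py.values())
H_X_given_Y = H_XY - H_Y   # per-symbol size of the private portion V_X
H_Y_given_X = H_XY - H_X   # per-symbol size of the private portion V_Y
I_XY = H_X + H_Y - H_XY    # per-symbol size of V_CX and V_CY combined

print(f"H(X|Y) = {H_X_given_Y:.3f}, H(Y|X) = {H_Y_given_X:.3f}, I(X;Y) = {I_XY:.3f}")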
The common information between the sources is transmitted through the portions V_CX and V_CY. In order to decode a transmitted message, a source's private information and both common information portions are necessary. This aids security, as it is not possible to determine, for example, X by wiretapping only the contents transmitted along X's channel. This differs from Yamamoto's [1] model, as here the common information consists of two portions. The aim is to keep the system as secure as possible, and the following sections show how this is achieved by the new model.

We assume that the encoding function F is a one-to-one process with high probability, which means that based on T_X and T_Y we can retrieve X^K and Y^K with minimal error. Furthermore, it reaches the Slepian-Wolf bound, H(T_X, T_Y) = H(X^K, Y^K). Here, we note that the lengths of T_X and T_Y are not fixed, as they depend on the encoding process and the nature of the Slepian-Wolf codes. The process is therefore not ideally one-to-one and reversible, which is another difference between our model and Yamamoto's [1] model. The code described in this section satisfies the following inequalities for δ > 0 and sufficiently large K:

Pr{X^K ≠ G(V_X, V_CX, V_CY)} ≤ δ   (1)

Pr{Y^K ≠ G(V_Y, V_CX, V_CY)} ≤ δ   (2)

H(V_X, V_CX, V_CY) ≤ H(X^K) + δ   (3)

H(V_Y, V_CX, V_CY) ≤ H(Y^K) + δ   (4)

3 3 H(V X, V Y, V CX, V CY ) H(X K, Y K ) + δ (5) H(Y X) ɛ 0 K H(W Y ) K log M Y H(Y X) + ɛ 0 (2) H(X K V X, V Y ) H(V CX ) + H(V CY ) δ (6) H(X K V CX, V CY ) H(V X ) + H(V CY ) δ (7) H(X K V CX, V CY, V Y ) H(V X ) + H(V CY ) δ (8) H(V CX ) + H(V X ) δ H(X K V CY, V Y ) H(X) H(V CY ) + δ (9) I(X; Y ) ɛ 0 K (H(W CX) + H(W CY )) K (log M CX + log M CY ) I(X; Y ) + ɛ 0 (3) K H(XK W Y ) H(X) ɛ 0 (4) K H(Y K W X ) H(Y ) ɛ 0 (5) We can see that () - (3) mean where G is a function to define the decoding process at the receiver. It can intuitively be seen from (3) and (4) that X and Y are recovered from the corresponding private information and the common information produced by X K and Y K. Equations (3), (4) and (5) show that the private information and common information produced by each source should contain no redundancy. It is also seen from (7) and (8) that V Y is independent of X K asymptotically. Here, V X, V Y, V CX and V CY are disjoint, which ensures that there is no redundant information sent to the decoder. To recover X the following components are necessary: V X, V CX and V CY. This comes from the property that X K cannot be derived from V X and V CX only and part of the common information between X K and Y K is produced by Y K. Yamamoto [] proved that a common information between X K and Y K is represented by the mutual information I(X; Y ). Yamamoto [] also defined two kinds of common information. The first common information is defined as the rate of the attainable minimum core V C (i.e. V CX, V CY in this model) by removing each private information, which is independent of the other information, from (X K, Y K ) as much as possible. The second common information is defined as the rate of the attainable maximum core V C such that if we lose V C then the uncertainty of X and Y becomes H(V C ). Here, we consider the common information that V CX and V CY represent. We begin demonstrating the relationship between the common information portions by constructing the prototype code (W X, W Y, W CX, W CY ) as per Lemma. Lemma : For any ɛ 0 0 and sufficiently large K, there exits a code W X = F X (X K ), W Y = F Y (Y K ), W CX = F CX (X K ), W CY = F CY (Y K ), XK, Ŷ K = G(W X, W Y, W CX, W CY ), where W X I MX, W Y I MY, W CX I MCX, W CY I MCY for I Mα, which is defined as {0,,..., M α }, that satisfies, P r{ X K, Ŷ K X K, Y K } ɛ (0) H(X Y ) ɛ 0 K H(W X) K log M shown by Yamamoto []. X H(X Y ) + ɛ 0 () H(X, Y ) 3ɛ 0 K (H(W X) + H(W Y ) + H(W CX ) + H(W CY )) H(X, Y ) + 3ɛ 0 (6) Hence from (0), (6) and the ordinary source coding theorem, (W X, W Y, W CX, W CY ) have no redundancy for sufficiently small ɛ 0 0. It can also be seen that W X and W Y are independent of Y K and X K respectively. Proof of Lemma : As seen by Slepian and Wolf, mentioned by Yamamoto [] there exist M X codes for the P Y X (y x) DMC (discrete memoryless channel) and M Y codes for the P X Y (x y) DMC. The codeword sets exist as Ci X and Cj Y, where CX i is a subset of the typical sequence of X K and Cj Y is a subset of the typical sequence of Y K. The encoding functions are similar, but we have created one decoding function as there is one decoder at the receiver: f Xi : I MCX C X i (7) f Y j : I MCY C Y j (8) g : X K, Y K I MCX I MCY (9) The relations for M X, M Y and the common information remain the same as per Yamamoto s and will therefore not be proven here. In this scheme, we use the average (V CX, V X, V CY, V Y ) transmitted for many codewords from X and Y. 
Thus, at any time either V_CX or V_CY is transmitted. Over time, the split between which common information portion is transmitted is determined, and the protocol is prearranged accordingly. Therefore all of the common information is transmitted as one of these two portions, and as such Yamamoto's encoding and decoding method may be used. As per Yamamoto's method, the code does exist and W_X and W_Y are independent of Y^K and X^K respectively, as shown by Yamamoto [1].
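As a rough illustration of how a syndrome (the compressed form placed on each link) can be formed in practice, the following sketch computes a binary syndrome T_X = H x (mod 2) for one source block x using a small random parity-check matrix H. This is only a schematic example of syndrome (binning) encoding with arbitrary toy parameters; it is not the specific code construction of Lemma 1, and a practical Slepian-Wolf decoder would combine the received syndromes and the code structure to recover the block.

import numpy as np

rng = np.random.default_rng(0)

K, M = 12, 8                           # block length and syndrome length (toy sizes)
H = rng.integers(0, 2, size=(M, K))    # parity-check matrix defining the bins
x = rng.integers(0, 2, size=K)         # one source block X^K

# Syndrome encoding: every block in the same coset (bin) of the code shares a
# syndrome, so only M < K bits are placed on the link instead of the K source bits.
T_X = H @ x % 2

print("source block :", x)
print("syndrome T_X :", T_X)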

4 4 The common information is important in this model as the sum of V CX and V CY represent a common information between the sources. The following theorem holds for this common information: Theorem : K [H(V CX) + H(V CY )] = I(X; Y ) (20) where V CX is the common portion between X and Y produced by X K and V CY is the common portion between X and Y produced by Y K. It is noted that the (20) holds asymptotically, and does not hold with equality when K is finite. Here, we show the approximation when K is infinitely large. The private portions for X K and Y K are represented as V X and V Y respectively. As explained in Yamamoto s [] Theorem, two types of common information exist (the first is represented by I(X; Y ) and the second by min(h(x K ), H(Y K )). We will develop part of this idea to show that the sum of the common information portions produced by X K and Y K in this new model is represented by the mutual information between the sources. Proof of Theorem : The first part is to prove that H(V CX ) + H(V CY ) I(X; Y ), and is done as follows. We weaken the conditions () and (2) to Pr {X K, Y K G XY (V X, V Y, V CX, V CY }) δ (2) For any (V X,V Y, V CX, V CY ) C(3ɛ 0 ) (which can be seen from (6)), we have from (2) and the ordinary source coding theorem that H(X K, Y K ) δ K H(V X, V Y, V CX, V CY ) K [H(V X) + H(V Y ) + H(V CX ) + H(V CY )] (22) where δ 0 as δ 0. From Lemma, K H(V Y X K ) K H(V Y ) δ (23) K H(V X Y K ) K H(V X) δ (24) From (22) - (24), K [H(V CX) + H(V CY )] H(X, Y ) K H(V X) K H(V Y ) δ H(X, Y ) K H(V X Y ) On the other hand, we can see that K H(V Y X) δ 2δ(25) K H(XK, V Y ) H(X, Y ) + δ (26) This implies that K H(V Y X K ) H(Y X) + δ (27) and K H(V X Y K ) H(X Y ) + δ (28) From (25), (27) and (28) we get K [H(V CX) + H(V CY )] H(X, Y ) H(X Y ) H(Y X) δ 4δ = I(X; Y ) δ 4δ (29) It is possible to see from (3) that H(V CX ) + H(V CY ) I(X; Y ). From this result, (9) and (29), and as δ 0 and δ 0 it can be seen that K [H(V CX + H(V CY )] = I(X; Y ) (30) This model can cater for a scenario where a particular source, say X needs to be more secure than Y (possibly because of eavesdropping on the X channel). In such a case, the K H(V CX) term in (29) needs to be as high as possible. When this uncertainty is increased then the security of X is increased. Another security measure that this model incorporates is that X cannot be determined from wiretapping only X s link. III. INFORMATION LEAKAGE In order to determine the security of the system, a measure for the amount of information leaked has been developed. This is a new notation and quantification, which emphasizes the novelty of this work. The obtained information and total uncertainty are used to determine the leaked information. Information leakage is indicated by L P Q. Here P indicates the source/s for which information leakage is being quantified, P = {S,..., S n } where n is the number of sources (in this case, n = 2). Further, Q indicates the syndrome portion that has been wiretapped, Q = {V,..., V m } where m is the number of codewords (in this case, m = 4). The information leakage bounds are as follows: L XK V X,V Y H(X K ) H(V CX ) H(V CY ) + δ (3) L XK V CX,V CY H(X K ) H(V X ) H(V CY ) + δ (32) L XK V CX,V CY,V Y H(X K ) H(V X ) H(V CY ) + δ (33) H(V CY ) δ L XK V Y,V CY H(X K ) H(V CX ) H(V X ) + δ (34) Here, V Y is private information of source Y K and is independent of X K and therefore does not leak any information about X K, shown in (32) and (33). 
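The leakage quantities above all take the form L = H(X^K) minus the uncertainty of X^K given the wiretapped portions, i.e. the mutual information between the source and whatever the wiretapper observes. The following Python sketch evaluates this quantity for a toy source in which the wiretapper sees a deterministic function of X, standing in for a wiretapped common-information portion; the distribution and the observation function are arbitrary illustrative choices, not values from the paper.

import math
from collections import defaultdict

# Toy source X over {0, 1, 2, 3} with an arbitrary pmf; the wiretapper observes
# V = f(X), here the high bit of X, standing in for a wiretapped portion.
pX = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1}
f = lambda x: x >> 1

def H(dist):
    return -sum(q * math.log2(q) for q in dist if q > 0)

pV = defaultdict(float)
pXV = defaultdict(float)
for x, q in pX.items():
    pV[f(x)] += q
    pXV[(x, f(x))] += q

# Leakage = H(X) - H(X|V) = I(X;V): what the wiretapped portion reveals about X.
leakage = H(pX.values()) + H(pV.values()) - H(pXV.values())
print(f"H(X) = {H(pX.values()):.3f} bits, leakage I(X;V) = {leakage:.3f} bits")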
Equation (34) gives an indication of the minimum and maximum amount of leaked information for the interesting case where a syndrome has been wiretapped and its information leakage on the alternate source is quantified. The outstanding common information component

5 5 is the maximum information that can be leaked. For this case, the common information V CX and V CY can thus consist of added protection to reduce the amount of information leaked. These bounds developed in (3) - (34) are proven in the next section. The proofs for the above mentioned information leakage inequalities are now detailed. First, the inequalities in (6) - (9) will be proven, so as to prove that the information leakage equations hold. Lemma 2: The code (V X, V CX, V CY, V Y ) defined at the beginning of Section I, describing the model and () - (5) satisfy (6) - (9). Then the information leakage bounds are given by (3) - (34). Proof for (6): K H(XK V X, V Y ) = K [H(XK, V X, V Y ) H(V X, V Y )] = K [H(XK, V Y ) H(V X, V Y )] (35) = K [H(XK V Y ) + I(X K ; V Y ) + H(V Y X K )] K [H(V X V Y ) + I(V X ; V Y ) + H(V Y V X )] = K [H(XK V Y ) + H(V Y X K ) H(V X V Y ) H(V Y V X )] = K [H(XK ) + H(V Y ) H(V X ) H(V Y )] (36) = K [H(XK ) H(V X )] K [H(V X) + H(V CX ) + H(V CY ) H(V X )] δ = K [H(V CX) + H(V CY )] δ (37) where (35) holds because V X is a function of X and (36) holds because X is independent of V Y asymptotically and V X is independent of V Y asymptotically. For the proofs of (7) and (8), the following simplification for H(X V CY ) is used: Proof for (7): K H(XK V CX, V CY ) = K [H(XK, V CX, V CY ) H(V CX, V CY )] = K [H(XK, V CY ) H(V CX, V CY )] (40) = K [H(XK ) H(V CY ) + I(X; V CY ) + H(V CY X K )] + δ K [H(V CX V CY ) + I(V CX ; V CY ) + H(V CY V CX )] = K [H(XK ) H(V CY ) + H(V CY ) H(V CX ) H(V CY )] + δ (4) = K [H(XK ) H(V CY ) H(V CX )] + δ K [H(V X) + H(V CX ) + H(V CY ) H(V CY ) H(V CX )] δ = K H(V X) + δ δ (42) where (40) holds because V CX is a function of X K and (4) holds because X is independent of V CY asymptotically and V CX is independent of V CY asymptotically. The proof for H(X V CX, V CY, V Y ) is similar to that for H(X V CX, V CY ), because V Y is independent of X. Proof for (8): K H(XK V CX, V CY, V Y ) = K H(XK V CX, V CY ) (43) = K [H(XK, V CX, V CY ) H(V CX, V CY )] = K [H(XK, V CY ) H(V CX, V CY )] (44) = K [H(XK ) H(V CY ) + I(X; V CY ) + H(V CY X K )] + δ K [H(V CX V CY ) + I(V CX ; V CY ) + H(V CY V CX )] = K [H(XK ) H(V CY ) + H(V CY ) H(V CX ) H(V CY )] + δ (45) = H(X K V CY ) = H(X K, Y K ) H(V CY ) K [H(XK ) H(V CY ) H(V CX )] + δ = H(X K ) + H(V CY ) I(X; V CY ) H(V CY ) K [H(V X) + H(V CX ) + H(V CY ) H(V CY ) = H(X K ) + H(V CY ) H(V CY ) H(V CY ) H(V CX )] δ + δ + δ (38) = = H(X K K H(V X) δ + δ (46) ) H(V CY ) = δ (39) where (43) holds because V Y and X K are independent, (44) holds because V CX is a function of X K and (45) holds where I(X; V CY ) approximately equal to H(V CY ) in (38) because X K is independent of V CY asymptotically and V CX can be seen intuitively from the Venn diagram in Figure is independent of V CY asymptotically. 3. Since it is an approximation, δ, which is smaller than For the proof of (9), we look at the following probabilities: δ in the proofs below has been added to cater for the tolerance. Pr{V X, V CX G(T X )} δ (47)

6 6 Pr{V Y, V CY G(T Y )} δ (48) K H(XK T Y ) K H(XK, V CY, V Y )] + δ (49) = K [H(XK, V CY, V Y ) H(V CY, V Y )] + δ = K [H(XK, V Y ) H(V CY, V Y )] + δ (50) = K [H(XK V Y ) + I(X K ; V Y ) + H(V Y X K )] K [H(V CY V Y ) + I(V CY ; V Y ) + H(V Y V CY )] + δ = K [H(XK ) + H(V Y ) H(V CY ) H(V Y )] + δ (5) = K [H(XK ) H(V CY )] + δ (52) where (49) holds from (48), (50) holds because V CY and V Y are asymptotically independent. Furthermore, (5) holds because V CY and V Y are asymptotically independent and X K and V Y are asymptotically independent. Following a similar proof to those done above in this section, another bound for H(X K V CY, V Y ) can be found as follows: K H(XK V CY, V Y ) = K [H(XK, V CY, V Y ) H(V CY, V Y )] = K [H(XK, V Y ) H(V CY, V Y )] (53) = K [H(XK V Y ) + I(X K ; V Y ) + H(V Y X)] K [H(V CY V Y ) + I(V CY ; V Y ) + H(V Y V CY )] = K [H(XK ) + H(V Y ) H(V CY ) H(V Y )] (54) = K [H(XK ) H(V CY )] K [H(V X) + H(V CX ) + H(V CY ) H(V CY )] δ which proves (32). L XK V CX,V CY,V Y = H(X K ) H(X K V CX, V CY, V Y ) H(X K ) H(V X ) + δ (58) which proves (33). The two bounds for H(V CY, V Y ) are given by (52) and (55). From (52): L XK V Y,V CY H(X K ) [H(X) H(V CY ) + δ] H(V CY ) δ (59) and from (55): L XK V Y,V CY H(X K ) (H(V X ) + H(V CX ) δ) H(X K ) H(V X ) H(V CX ) + δ (60) Combining these results from (59) and (60) gives (34). IV. SHANNON S CIPHER SYSTEM Here, we discuss Shannon s cipher system for two independent correlated sources (depicted in Figure 4). The two source outputs are i.i.d random variables X and Y, taking on values in the finite sets X and Y. Both the transmitter and receiver have access to the key, a random variable, independent of X K and Y K and taking values in I Mk = {0,, 2,..., M k }. The sources X K and Y K compute the ciphertexts X and Y, which are the result of specific encryption functions on the plaintext from X and Y respectively. The encryption functions are invertible, thus knowing X and the key, X K can be retrieved. The mutual information between the plaintext and ciphertext should be small so that the wiretapper cannot gain much information about the plaintext. For perfect secrecy, this mutual information should be zero, then the length of the key should be at least the length of the plaintext. X K Y K = K [H(V X) + H(V CX )] δ (55) where (53) and (54) hold for the same reason as (50) and (5) respectively. Since we consider the information leakage as the total information obtained subtracted from the total uncertainty, the following hold for the four cases considered in this section: L XK V X,V Y = H(X K ) H(X K V X, V Y ) H(X K ) H(V CX ) H(V CY ) + δ (56) which proves (3). Figure 4. k k Encoder Encoder X Y k Decoder ˆX K, Ŷ K Shannon cipher system for two correlated sources L XK V CX,V CY = H(X K ) H(X K V CX, V CY ) H(X K ) H(V X ) + δ (57) respec- The encoder functions for X and Y, (E X and E Y tively) are given as:

7 7 E X : X K I MkX I M X = {0,,..., M X } I M CX = {0,,..., M CX }(6) K H(XK, Y K W ) h XY + ɛ (77) E Y : Y K I MkY I M Y = {0,,..., M Y } The decoder is defined as: I M CY = {0,,..., M CY }(62) D XY : (I M X, I M Y, I M CX, I M CY ) I MkX, I MkY The encoder and decoder mappings are below: or X K Y K (63) W = F EX (X K, W kx ) (64) W 2 = F EY (Y K, W ky ) (65) X K = F DX (W, W 2, W kx ) (66) Ŷ K = F DY (W, W 2, W ky ) (67) ( X K, Ŷ K ) = F DXY (W, W 2, W kx, W ky ) (68) The following conditions should be satisfied for cases - 4: K log M X R X + ɛ (69) K log M Y R Y + ɛ (70) K log M kx R kx + ɛ (7) K log M ky R ky + ɛ (72) Pr{ X K X K } ɛ (73) Pr{Ŷ K Y K } ɛ (74) K H(XK W ) h X + ɛ (75) K H(Y K W 2 ) h Y + ɛ (76) K H(XK, Y K W 2 ) h XY + ɛ (78) where R X is the the rate of source X s channel and R Y is the the rate of source Y s channel. Here, R kx is the rate of the key channel at X K and R ky is the rate of the key channel at Y K. The security levels, which are measured by the total and individual uncertainties are h XY and (h X, h Y ) respectively. The cases - 5 are: Case : When T X and T Y are leaked and both X K and Y K need to be kept secret. Case 2: When T X and T Y are leaked and X K needs to be kept secret. Case 3: When T X is leaked and both X K and Y K need to be kept secret. Case 4: When T X is leaked and Y K needs to be kept secret. Case 5: When T X is leaked and X K needs to be kept secret. where T X is the syndrome produced by X, containing V CX and V X and T Y is the syndrome produced by Y, containing V CY and V X. The admissible rate region for each case is defined as follows: Definition a: (R X, R Y, R kx, R ky, h XY ) is admissible for case if there exists a code (F EX, F DXY ) and (F EY, F DXY ) such that (69) - (74) and (78) hold for any ɛ 0 and sufficiently large K. Definition b: (R X, R Y, R kx, R ky, h X ) is admissible for case 2 if there exists a code (F EX, F DXY ) such that (69) - (75) hold for any ɛ 0 and sufficiently large K. Definition c: (R X, R Y, R kx, R ky, h X, h Y ) is admissible for case 3 if there exists a code (F EX, F DXY ) and (F EY, F DXY ) such that (69) - (74) and (76), (78) hold for any ɛ 0 and sufficiently large K. Definition d: (R X, R Y, R kx, R ky, h Y ) is admissible for case 4 if there exists a code (F EX, F DXY ) such that (69) - (74) and (76) hold for any ɛ 0 and sufficiently large K. Definition e: (R X, R Y, R kx, R ky, h X ) is admissible for case 5 if there exists a code (F EX, F DXY ) such that (69) - (74) and (75) hold for any ɛ 0 and sufficiently large K. Definition 2: The admissible rate regions of R j and of R k are defined as: R (h XY ) = {(R X, R Y, R kx, R ky ) : (R X, R Y, R kx, R ky, h XY ) is admissible for case } (79) R 2 (h X ) = {(R X, R Y, R kx, R ky ) : (R X, R Y, R kx, R ky, h X ) is admissible for case 2} (80)

8 8 R 3 (h X, h Y ) = {(R X, R Y, R kx, R ky ) : (R X, R Y, R kx, R ky, h X, h Y ) is admissible for case 3} (8) R 4 (h Y ) = {(R X, R Y, R kx, R ky ) : (R X, R Y, R kx, R ky, h Y ) is admissible for case 4} (82) R 5 (h X ) = {(R X, R Y, R kx, R ky ) : (R X, R Y, R kx, R ky, h X ) is admissible for case 5} (83) Theorems for these regions have been developed: Theorem 2: For 0 h XY H(X, Y ), R (h XY ) = {(R X, R Y, R kx, R ky ) : R X H(X Y ), R Y H(Y X), R X + R Y H(X, Y ) R kx h XY and R ky h XY } (84) Theorem 3: For 0 h X H(X), R 2 (h X ) = {(R X, R Y, R kx, R ky ) : R X H(X Y ), R Y H(Y X), R X + R Y H(X, Y ) R kx h X and R ky h Y } (85) Theorem 4: For 0 h X H(X) and 0 h Y H(Y ), R 3 (h X, h Y ) = {(R X, R Y, R kx, R ky ) : R X H(X Y ), R Y H(Y X), R X + R Y H(X, Y ) R kx h X and R ky h Y } (86) Theorem 5: For 0 h X H(X), R 5 (h X, h Y ) = {(R X, R Y, R kx, R ky ) : R X H(X Y ), R Y H(Y X), R X + R Y H(X, Y ) R kx h X and R ky 0} (87) When h X = 0 then case 5 can be reduced to that depicted in (86). Hence, Corollary follows: Corollary : R 4 (h Y ) = R 3 (0, h Y ) The security levels, which are measured by the total and individual uncertainties h XY and (h X, h Y ) respectively give an indication of the level of uncertainty in knowing certain information. When the uncertainty increases then less information is known to an eavesdropper and there is a higher level of security. V. PROOF OF THEOREMS 2-5 This section initially proves the direct parts of Theorems 2-5 and thereafter the converse parts. A. Direct parts All the channel rates in the theorems above are in accordance with Slepian-Wolf s theorem, hence there is no need to prove them. We construct a code based on the prototype code (W X, W Y, W CX, W CY ) in Lemma. In order to include a key in the prototype code, W X is divided into two parts as per the method used by Yamamoto []: W X = W X mod M X I MX = {0,, 2,..., M X } (88) W X2 = W X W X M X I MX2 = {0,, 2,..., M X2 } (89) where M X is a given integer and M X2 is the ceiling of M X /M X. The M X /M X is considered an integer for simplicity, because the difference between the ceiling value and the actual value can be ignored when K is sufficiently large. In the same way, W Y is divided: W Y = W Y mod M Y I MY = {0,, 2,..., M Y } (90) W Y 2 = W Y W Y M Y I MY 2 = {0,, 2,..., M Y 2 } (9) The common information components W CX and W CY are already portions and are not divided further. It can be shown that when some of the codewords are wiretapped the uncertainties of X K and Y K are bounded as follows: K H(XK W X2, W Y ) I(X; Y ) + K log M X ɛ 0 (92) K H(Y K W X, W Y 2 ) I(X; Y ) + K log M Y ɛ 0 (93) K H(XK W X, W Y 2 ) I(X; Y ) ɛ 0 (94) K H(XK W X, W Y, W CY ) K log M CX ɛ 0 (95) K H(Y K W X, W Y, W CY ) K log M CX ɛ 0 (96) K H(XK W Y, W CY ) H(X Y ) + K log M CX ɛ 0 (97) K H(Y K W Y, W CY ) K log M CX ɛ 0 (98) where ɛ 0 0 as ɛ 0 0. The proofs for (92) - (98) are the same as per Yamamoto s [] proof in Lemma A. The

9 9 difference is that W CX, W CY, M CX and M CY are described as W C, W C2, M C and M C2 respectively by Yamamoto. Here, we consider that W CX and W CY are represented by Yamamoto s W C and W C2 respectively. In addition there are some more inequalities considered here: K H(Y K W X, W CX, W CY, W Y 2 ) K log M Y ɛ 0 (99) The codewords W X and W Y and their keys W kx and W ky are now defined: W X = (W X W kcy, W X2 W kcy, W CX W kcy )(06) W Y = (W Y W kcy, W Y 2 W kcy, W CY ) (07) K H(Y K W X, W CX, W CY ) K log M Y + K log M Y 2 ɛ 0(00) K H(XK W X2, W CY ) K log M X K H(Y K W X2, W CY ) K log M Y + K log M CX ɛ 0 (0) + K log M Y 2 + K log M CX ɛ 0 (02) The inequalities (99) and (00) can be proved in the same way as per Yamamoto s [] Lemma A2, and (0) and (02) can be proved in the same way as per Yamamoto s [] Lemma A. For each proof we consider cases where a key already exists for either V CX or V CY and the encrypted common information portion is then used to mask the other portions (either V CX or V CY and the private information portions). There are two cases considered for each; firstly, when the common information portion entropy is greater than the entropy of the portion that needs to be masked, and secondly when the common information portion entropy is less than the entropy of the portion to be masked. For the latter case, a smaller key will need to be added so as to cover the portion entirely. This has the effect of reducing the required key length, which is explained in greater detail in Section VII. Proof of Theorem 2: Suppose that (R X, R Y, R KX, R KY ) R for h XY H(X, Y ). Without loss of generality, we assume that h X h Y. Then, from (84) R X H(X K Y K ) R Y H(Y K X K ) R X + R Y H(X K, Y K ) (03) R kx h XY, R ky h XY (04) Assuming a key exists for V CY. For the first case, consider the following: H(V CY ) H(V X ), H(V CY ) H(V Y ) and H(V CY ) H(V CX ). M CY = 2 Kh XY (05) W ky = (W kcy ) (08) where W α I Mα = {0,,..., M α }. The wiretapper will not know W X, W X2 and W CX from W X and W Y, W Y 2 and W CY from W Y as these are protected by the key (W kcy. In this case, R X, R Y, R kx and R ky satisfy from () - (3) and (03) - (05), that K log M X + K log M Y = K (log M X + log M X2 + log M CX ) + K (log M Y + log M Y 2 + log M CY ) H(X Y ) + H(Y X) + I(X; Y ) + ɛ 0 = H(X, Y ) R X + R Y (09) K log M kx = K log M CY = h XY (0) where (0) comes from (05). R kx () K log M ky = K log M CY = h XY (2) where (2) comes from (05). The security levels thus result: R ky (3) K H(XK W X, W Y ) = K H(X W X W kcy, W X2 W kcy, W CX W kcy, W Y W kcy, W Y 2 W kcy, W CY ) = H(X K ) (4) h X ɛ 0 (5) where (4) holds because W X, W X2, W CX, W Y, W Y 2 are covered by key W CY and W CY is covered by an existing random number key. Equations (0) - (6) imply that W X, W X2, W Y and W Y 2 have almost no redundancy and they are mutually independent.
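The construction above first splits the index W_X into two parts using a modulus and an integer division, and then masks each transmitted part by XOR with key material derived from the common portion W_CY. The sketch below mirrors those two steps on integer indices; the sizes M_X1 and M_X2 are chosen as powers of two so that the XOR masking stays within range, and the key here is simply a fresh uniform random value standing in for the common-information key, so this is an illustrative sketch rather than the paper's exact construction.

import secrets

M_X1, M_X2 = 16, 8                  # toy sizes; W_X then ranges over 0 .. M_X1*M_X2 - 1
W_X = secrets.randbelow(M_X1 * M_X2)

# Index splitting, as in W_X1 = W_X mod M_X1 and W_X2 = floor(W_X / M_X1).
W_X1 = W_X % M_X1
W_X2 = W_X // M_X1

# Masking: XOR each part with key material (a fresh uniform key here, standing in
# for the key derived from the common portion W_CY).
k1 = secrets.randbelow(M_X1)
k2 = secrets.randbelow(M_X2)
masked = (W_X1 ^ k1, W_X2 ^ k2)     # the parts actually placed on the wiretapped link

# The legitimate receiver, knowing the key, removes the mask and reassembles W_X.
u1, u2 = masked[0] ^ k1, masked[1] ^ k2
assert u2 * M_X1 + u1 == W_X
print("transmitted (masked) parts:", masked)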

10 0 Similarly, K H(Y K W X, W Y ) h Y ɛ 0 (6) Therefore (R X, R Y, R kx, R ky, h XY, h XY ) is admissible from (09) - (6). Next the case where: H(V CY ) < H(V X ), H(V CY ) < H(V Y ) and H(V CY ) < H(V CX ) is considered. Here, there are shorter length keys used in addition to the key provided by W CY in order to make the key lengths required by the individual portions. For example the key W k comprises W kcy and a short key W, which together provide the length of W X. The codewords W X and W Y and their keys W kx and W ky are now defined: W X = (W X W k, W X2 W k2, W CX W k3 ) (7) W Y = (W Y W k4, W Y 2 W k5, W CY ) (8) W kx = (W k, W k2, W k3 ) (9) W ky = (W k4, W k5 ) (20) where W α I Mα = {0,,..., M α }. The wiretapper will not know W X, W X2 and W CX from W X and W Y, W Y 2 and W CY from W Y as these are protected by the key (W kcy. In this case, R X, R Y, R kx and R ky satisfy that K log M X + K log M Y = K (log M X + log M X2 + log M CX ) + K (log M Y + log M Y 2 + log M CY ) H(X Y ) + H(Y X) + I(X; Y ) + ɛ 0 = H(X, Y ) R X + R Y (2) K log M kx = K [log M k + log M k2 + log M k3 ] = log M kcy + log M + log M kcy + log M 2 + log M kcy + log M 3 = 3 log M kcy + log M + log M 2 + log M 3 3h XY ɛ 0 (22) where (22) results from (05). h XY (23) K log M kx = K [log M k3 + log M k4 + log M kcy ] = log M kcy + log M 3 + log M kcy + log M 4 + log M kcy = 3 log M kcy + log M 3 + log M 4 (24) 3h XY ɛ 0 (25) h XY (26) where (25) results from (05). The security levels thus result: K H(XK W X, W Y ) = K H(X W X W k, W X2 W k2, W CX W k3, W Y W k4, W Y 2 W k5, W CY ) = H(X K ) (27) h X ɛ 0 (28) where (4) holds because W X, W X2, W CX, W Y, W Y 2 are covered by key W CY and some shorter length key and W CY is covered by an existing random number key. Similarly, K H(Y K W X, W Y ) h Y ɛ 0 (29) Therefore (R X, R Y, R kx, R ky, h XY, h XY ) is admissible from (2) - (29). Theorem 3-5 are proven in the same way with varying codewords and keys. The proofs follow: Theorem 3 proof: The consideration for the security levels is that h Y h X because Y contains the key the is used for masking. Suppose that (R X, R Y, R KX, R KY ) R 2. From (85) R X H(X K Y K ) R Y H(Y K X K ) R X + R Y H(X K, Y K ) (30) R kx h X, R ky h Y (3) Assuming a key exists for V CY. For the first case, consider the following: H(V CY ) H(V X ), H(V CY ) H(V Y ) and H(V CY ) H(V CX ). M CY = 2 Kh Y (32) The codewords W X and W Y and their keys W kx and W ky are now defined: W X = (W X W kcy, W X2 W kcy, W CX W kcy )(33) W Y = (W Y W kcy, W Y 2 W kcy, W CY ) (34)

11 W ky = (W kcy ) (35) where W α I Mα = {0,,..., M α }. The wiretapper will not know W X, W X2 and W CX from W X and W Y, W Y 2 and W CY from W Y as these are protected by the key (W kcy. In this case, R X, R Y, R kx and R ky satisfy from () - (3) and (30) - (32), that K log M X + K log M Y = K (log M X + log M X2 + log M CX ) + K (log M Y + log M Y 2 + log M CY ) H(X Y ) + H(Y X) + I(X; Y ) + ɛ 0 = H(X, Y ) R X + R Y (36) K log M kx = K log M CY = h Y (37) h X ɛ 0 (38) R kx (39) where (37) comes from (32) and (38) comes form the consideration stated at the beginning of this proof. K log M ky = K log M CY = h XY (40) where (40) comes from (32). The security levels thus result: R ky (4) K H(XK W X, W Y ) = K H(X W X W kcy, W X2 W kcy, W CX W kcy, W Y W kcy, W Y 2 W kcy, W CY ) = H(X K ) (42) h X ɛ 0 (43) where (67) holds because W X, W X2, W CX, W Y, W Y 2 are covered by key W CY and W CY is covered by an existing random number key. Equations (0) - (6) imply that W X, W X2, W Y and W Y 2 have almost no redundancy and they are mutually independent. Similarly, K H(Y K W X, W Y ) h Y ɛ 0 (44) Therefore (R X, R Y, R kx, R ky, h XY, h XY ) is admissible from (09) - (6). Next the case where: H(V CY ) < H(V X ), H(V CY ) < H(V Y ) and H(V CY ) < H(V CX ) is considered. Here, there are shorter length keys used in addition to the key provided by W CY in order to make the key lengths required by the individual portions. For example the key W k comprises W kcy and a short key W, which together provide the length of W X. The codewords W X and W Y and their keys W kx and W ky are now defined: W X = (W X W k, W X2 W k2, W CX W k3 ) (45) W Y = (W Y W k4, W Y 2 W k5, W CY ) (46) W kx = (W k, W k2, W k3 ) (47) W ky = (W k4, W k5 ) (48) where W α I Mα = {0,,..., M α }. The wiretapper will not know W X, W X2 and W CX from W X and W Y, W Y 2 and W CY from W Y as these are protected by the key (W kcy. In this case, R X, R Y, R kx and R ky satisfy that K log M X + K log M Y = K (log M X + log M X2 + log M CX ) + K (log M Y + log M Y 2 + log M CY ) H(X Y ) + H(Y X) + I(X; Y ) + ɛ 0 = H(X, Y ) R X + R Y (49) K log M kx = K [log M k + log M k2 + log M k3 ] = log M kcy + log M + log M kcy + log M 2 + log M kcy + log M 3 = 3 log M kcy + log M + log M 2 + log M 3 3h Y ɛ 0 (50) 3h X ɛ 0 h X (5) where (50) results from (32) and the result is from the consideration at the beginning of this proof. K log M ky = K [log M k3 + log M k4 + log M kcy ] = log M kcy + log M 3 + log M kcy + log M 4 + log M kcy = 3 log M kcy + log M 3 + log M 4 3h Y ɛ 0 (52) h Y (53)

12 2 where (52) results from (32). The security levels thus result: K H(XK W X, W Y ) = K H(X W X W k, W X2 W k2, W CX W k3, W Y W k4, W Y 2 W k5, W CY ) = H(X K ) (54) h X ɛ 0 (55) where (54) holds because W X, W X2, W CX, W Y, W Y 2 are covered by key W CY and some shorter length key and W CY is covered by an existing random number key. Similarly, K H(Y K W X, W Y ) h Y ɛ 0 (56) Therefore (R X, R Y, R kx, R ky, h XY, h XY ) is admissible from (2) - (29). Proof of Theorem 4: Again, the consideration for the security levels is that h Y h X because Y contains the key the is used for masking. Suppose that (R X, R Y, R KX, R KY ) R 3. From (85) R X H(X K Y K ) R Y H(Y K X K ) R X + R Y H(X K, Y K ) (57) R kx h X, R ky h Y (58) Assuming a key exists for V CY. For the first case, consider the following: H(V CY ) H(V X ), H(V CY ) H(V Y ) and H(V CY ) H(V CX ). M CY = 2 Kh Y (59) In the same way as theorem 2 and 3, the codewords W X and W Y and their keys W kx and W ky are now defined: W X = (W X W kcy, W X2 W kcy, W CX W kcy )(60) W Y = (W Y W kcy, W Y 2 W kcy, W CY ) (6) W ky = (W kcy ) (62) where W α I Mα = {0,,..., M α }. The wiretapper will not know W X, W X2 and W CX from W X and W Y, W Y 2 and W CY from W Y as these are protected by the key (W kcy. K log M X + K log M Y = K (log M X + log M X2 + log M CX ) + K (log M Y + log M Y 2 + log M CY ) H(X Y ) + H(Y X) + I(X; Y ) + ɛ 0 = H(X, Y ) R X + R Y (63) K log M kx = K log M CY = h Y (64) h X ɛ 0 (65) R kx (66) where (64) comes from (59) and (65) comes form the consideration stated at the beginning of this proof. K log M ky = K log M CY = h XY (67) where (67) comes from (59). The security levels thus result: R ky (68) K H(XK W X, W Y ) = K H(X W X W kcy, (69) W X2 W kcy, W CX W kcy, W Y W kcy, W Y 2 W kcy, W CY ) = H(X K ) (70) h X ɛ 0 (7) where (70) holds because W X, W X2, W CX, W Y, W Y 2 are covered by key W CY and W CY is covered by an existing random number key. Equations (0) - (6) imply that W X, W X2, W Y and W Y 2 have almost no redundancy and they are mutually independent. Similarly, K H(Y K W X, W Y ) h Y ɛ 0 (72) Therefore (R X, R Y, R kx, R ky, h XY, h XY ) is admissible from (09) - (6). Next the case where: H(V CY ) < H(V X ), H(V CY ) < H(V Y ) and H(V CY ) < H(V CX ) is considered. Here, there are shorter length keys used in addition to the key provided by W CY in order to make the key lengths required by the individual portions. For example the key W k comprises W kcy and a short key W, which together provide the length of W X. The codewords W X and W Y and their keys W kx and W ky are now defined: W X = (W X W k, W X2 W k2, W CX W k3 ) (73)

13 3 W Y = (W Y W k4, W Y 2 W k5, W CY ) (74) W kx = (W k, W k2, W k3 ) (75) W ky = (W k4, W k5 ) (76) where W α I Mα = {0,,..., M α }. The wiretapper will not know W X, W X2 and W CX from W X and W Y, W Y 2 and W CY from W Y as these are protected by the key (W kcy. In this case, R X, R Y, R kx and R ky satisfy that K log M X + K log M Y = K (log M X + log M X2 + log M CX ) + K (log M Y + log M Y 2 + log M CY ) H(X Y ) + H(Y X) + I(X; Y ) + ɛ 0 = H(X, Y ) R X + R Y (77) K log M kx = K [log M k + log M k2 + log M k3 ] = log M kcy + log M + log M kcy where (83) holds because W X, W X2, W CX, W Y, W Y 2 are covered by key W CY and some shorter length key and W CY is covered by an existing random number key. Similarly, K H(Y K W X, W Y ) h Y ɛ 0 (85) Therefore (R X, R Y, R kx, R ky, h XY, h XY ) is admissible from (77) - (85). The region indicated for R is derived from this region for R, when h X = 0. Proof of Theorem 5: As before, V CY may be used as a key, however here we use V CX as the key in this proof to show some variation. Now the consideration for the security levels is that h X h Y because X contains the key that is used for masking. Suppose that (R X, R Y, R KX, R KY ) R 5. From (85) R X H(X K Y K ) R Y H(Y K X K ) R X + R Y H(X K, Y K ) (86) R kx h X, R ky h Y (87) Assuming a key exists for V CX. For the first case, consider the following: H(V CX ) H(V X ), H(V CX ) H(V Y ) and H(V CX ) H(V CX ). + log M 2 + log M kcy + log M 3 = 3 log M kcy + log M + log M 2 + log M 3 3h Y ɛ 0 (78) 3h X ɛ 0 h X (79) where (78) results from (59) and the result is from the consideration at the beginning of this proof. K log M ky = K [log M k3 + log M k4 + log M kcy ] = log M kcy + log M 3 + log M kcy + log M 4 + log M kcy = 3 log M kcy + log M 3 + log M 4 3h Y ɛ 0 (80) h Y (8) where (209) results from (59). The security levels thus result: K H(XK W X, W Y ) = K H(X W X W k, (82) W X2 W k2, W CX W k3, W Y W k4, W Y 2 W k5, W CY ) = H(X K ) (83) h X ɛ 0 (84) M CX = 2 Kh X (88) The codewords W X and W Y and their keys W kx and W ky are now defined: W X = (W X W kcx, W X2 W kcx, W CX ) (89) W Y = (W Y W kcx, W Y 2 W kcx, W CY W kcx )(90) W kx = (W kcx ) (9) where W α I Mα = {0,,..., M α }. The wiretapper will not know W X, W X2 and W CX from W X and W Y, W Y 2 and W CY from W Y as these are protected by the key W kcx and W kcx is protected by a random number key. K log M X + K log M Y = K (log M X + log M X2 + log M CX ) + K (log M Y + log M Y 2 + log M CY ) H(X Y ) + H(Y X) + I(X; Y ) + ɛ 0 = H(X, Y ) R X + R Y (92)

14 4 K log M kx = K log M CX h X ɛ 0 (93) where (93) comes from (88). R kx (94) W Y 2 and W CY from W Y as these are protected by the key (W kcy. In this case, R X, R Y, R kx and R ky satisfy that K log M X + K log M Y = K (log M X + log M X2 + log M CX ) K log M ky = K log M CX = h X (95) h Y (96) R ky (97) where (96) comes from (88) and (96) comes form the consideration stated at the beginning of this proof. The security levels thus result: K H(XK W X, W Y ) = K H(X W X W kcx, W X2 W kcx, W CX, W CY W kcx, W Y W kcx, W Y 2 W kcx ) = H(X K ) (98) h X ɛ 0 (99) where (70) holds because W X, W X2, W CX, W Y, W Y 2 are covered by key W CY and W CY is covered by an existing random number key. Equations (0) - (6) imply that W X, W X2, W Y and W Y 2 have almost no redundancy and they are mutually independent. Similarly, K H(Y K W X, W Y ) h Y ɛ 0 (200) Therefore (R X, R Y, R kx, R ky, h XY, h XY ) is admissible from (09) - (6). Next the case where: H(V CX ) < H(V X ), H(V CX ) < H(V Y ) and H(V CX ) < H(V CX ) is considered. Here, there are shorter length keys used in addition to the key provided by W CX in order to make the key lengths required by the individual portions. For example the key W k comprises W kcx and a short key W, which together are the length of W X. The codewords W X and W Y and their keys W kx and are now defined: W ky W X = (W X W k, W X2 W k2, W CX ) (20) W Y = (W CY W k3, W Y W k4, W Y 2 W k5 ) (202) W kx = (W k, W k2, W k3 ) (203) W ky = (W k4, W k5 ) (204) where W α I Mα = {0,,..., M α }. The wiretapper will not know W X, W X2 and W CX from W X and W Y, + K (log M Y + log M Y 2 + log M CY ) H(X Y ) + H(Y X) + I(X; Y ) + ɛ 0 = H(X, Y ) R X + R Y (205) K log M kx = K [log M k + log M k2 + log M k3 ] = log M kcx + log M + log M kcx + log M 2 + log M kcx + log M 3 = 3 log M kcx + log M + log M 2 + log M 3 3h X ɛ 0 (206) h X (207) where (206) results from (88). K log M ky = K [log M k3 + log M k4 + log M kcy ] = log M kcx + log M 3 + log M kcx + log M 4 + log M kcx = 3 log M kcx + log M 3 + log M 4 3h X ɛ 0 (208) 3h Y ɛ 0 (209) h Y (20) where (208) results from (88) and (209) results from the consideration at the beginning of this proof. The security levels thus result: K H(XK W X, W Y ) = K H(X W X W k, W X2 W k2, W CX, W CY W k3, W Y W k4, W Y 2 W k5, W CY ) = H(X K ) (2) h X ɛ 0 (22) where (2) holds because W X, W X2, W CY, W Y, W Y 2 are covered by key W CX and some shorter length key and W CX is covered by an existing random number key. Similarly, K H(Y K W X, W Y ) h Y ɛ 0 (23) Therefore (R X, R Y, R kx, R ky, h XY, h XY ) is admissible from (205) - (23).

15 5 B. Converse parts From Slepian-Wolf s theorem we know that the channel rate must satisfy R X H(X Y ), R Y H(Y X) and R X +R Y H(X, Y ) to achieve a low error probability when decoding. Hence, the key rates are considered in this subsection. Converse part of Theorem 2: R kx K logm kx ɛ K H(W kx) ɛ K H(W kx W ) ɛ = K [H(W kx) I(W kx ; W )] ɛ = K [H((W kx X,Y,W ) + I(W kx ; W ) + I(W kx ; X Y, W ) + I(X, Y, W kx W ) + I(Y, W kx X, W ) I(W kx ; W )] ɛ = K [H(X, Y W ) H(X, Y W, W kx )] ɛ h XY K H(X, Y W, W kx ) ɛ (24) = h XY H(V CY ) ɛ = h XY ɛ (25) where (24) results from equation (77). Here, we consider the extremes of H(V CY ) in order to determine the limit for R kx. When this quantity is minimum then we are able to achieve the maximum bound of h XY. R ky K logm ky ɛ K H(W ky ) ɛ K H(W ky W 2 ) ɛ = K [H(W ky ) I(W ky ; W 2 )] ɛ = K [H((W ky X,Y,W 2 ) + I(W ky ; W 2 ) + I(W ky ; X Y, W 2 ) + I(X, Y, W ky W 2 ) + I(Y, W ky X, W 2 ) I(W ky ; W 2 )] ɛ = K [H(X, Y W 2) H(X, Y W 2, W ky )] ɛ h XY K H(X, Y W 2, W ky ) ɛ (26) = h XY H(V CX ) ɛ = h XY ɛ (27) where (26) results from equation (78). Here, we consider the extremes of H(V CX ) in order to determine the limit for R ky. When this quantity is minimum then we are able to achieve the maximum bound of h XY. Converse part of Theorem 3: R kx K logm kx ɛ K H(W kx) ɛ K H(W kx W ) ɛ = K [H(W kx) I(W kx ; W )] ɛ = K [H((W kx X,W ) + I(W kx ; W ) + I(X, W kx W ) I(W kx ; W )] ɛ K I(X, W kx W ) ɛ = K [H(X W ) H(X W, W kx )] ɛ h X H(V CY ) ɛ (28) = h X ɛ (29) where (28) results from (75). Here, we consider the extremes of H(V CY ) in order to determine the limit for R kx. When this quantity is minimum then we are able to achieve the maximum bound of h X. R ky K logm ky ɛ K H(W ky ) ɛ K H(W ky W 2 ) ɛ = K [H(W ky ) I(W ky ; W 2 )] ɛ = K [H((W ky Y,W 2 ) + I(W ky ; W 2 ) + I(X, W ky W 2 ) I(W ky ; W 2 )] ɛ K I(Y, W ky W 2 ) ɛ = K [H(Y W 2) H(Y W 2, W ky )] ɛ h Y H(V CX ) ɛ (220) = h Y ɛ (22) where (28) results from (76). Here, we consider the extremes of H(V CX ) in order to determine the limit for R ky. When this quantity is minimum then we are able to achieve the maximum bound of h Y. Since theorems 4-5 also have key rates of h X and h Y for X and Y respectively we can use the same methods to prove the converse. VI. SCHEME FOR MULTIPLE SOURCES The two correlated source model presented in Section II is generalised even further, and now concentrates on multiple correlated sources transmitting syndromes across multiple wiretapped links. This new approach represents a network scenario where there are many sources and one receiver. We consider the information leakage for this model for Slepian- Wolf coding and thereafter consider the Shannon s cipher system representation.

A. Information leakage using Slepian-Wolf coding

Figure 5 gives a pictorial view of the new extended model for multiple correlated sources.

Figure 5. Extended generalised model: sources S_1, S_2, ..., S_n transmit syndromes T_S1, T_S2, ..., T_Sn to a single receiver.

Consider a situation where there are many sources, forming the set S = {S_1, S_2, ..., S_n}, where i indexes the ith source (i = 1, ..., n) and there are n sources in total. Each source may have some correlation with some other source, and all sources share a binary alphabet. There is one receiver that is responsible for performing decoding. The syndrome for a source S_i is represented by T_Si, which is drawn from the same alphabet as the sources. The entropy of a source is given by a combination of a specific conditional entropy and mutual information terms. In order to present the entropy we first define the following sets:

- The set S that contains all sources: S = {S_1, S_2, ..., S_n}.
- The set S_t that contains t unique elements from S, with S_t ⊆ S, S_i ∈ S_t, S_t ∪ S_t^c = S and |S_t| = t.

Here, H(S_i) is obtained as follows:

H(S_i) = H(S_i | S \ S_i) + Σ_{t=2}^{n} (−1)^t Σ_{all possible S_t} I(S_t | S_t^c)   (222)

Here, n is the number of sources, H(S_i | S \ S_i) denotes the conditional entropy of the source S_i given the set S with S_i removed, and I(S_t | S_t^c) denotes the mutual information between all sources in the subset S_t given the complement of S_t. In the same way as for two sources, the generalised probabilities and entropies can be developed. It is then possible to decode the source message for source S_i by receiving all components related to S_i. This gives rise to the following inequality for H(S_i) in terms of the sources:

H(S_i | S \ S_i) + Σ_{t=2}^{n} (−1)^t Σ_{all possible S_t} I(S_t | S_t^c) ≤ H(S_i) + δ   (223)

In this type of model, information from multiple links needs to be gathered in order to determine the transmitted information for one source. Here, the common information between sources is represented by the I(S_t | S_t^c) terms. The portions of common information sent by each source can be determined upfront, and the allocation is arbitrary in our case. For example, in a three source model where X, Y and Z are the correlated sources, the common information shared between X and the other sources is represented by I(X; Y | Z) and I(X; Z | Y). Each common information portion is divided such that the sources having access to it are able to produce a portion of it themselves. The common information I(X; Y | Z) is divided into V_CX1 and V_CY1, where the former is the common information between X and Y produced by X, and the latter is the common information between X and Y produced by Y. Similarly, I(X; Z | Y) consists of two common information portions, V_CX2 and V_CZ1, produced by X and Z respectively.

As with the previous model for two correlated sources, since wiretapping is possible there is a need to develop the information leakage for this model. The information leakage for this multiple source model is indicated in (224) and (225).

Remark 1: The leaked information for a source S_i given the transmitted codeword T_Si is given by:

L_{S_i | T_Si} = I(S_i; T_Si)   (224)

Since we use the notion that the information leakage is the conditional entropy of the source given the transmitted information subtracted from the source's uncertainty (i.e. H(S_i) − H(S_i | T_Si)), the proof of (224) is trivial. Here, we note that the common information is the minimum amount of information leaked.
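As a numerical illustration of Remark 1 (and of the cross-source leakage discussed next), the sketch below builds a toy joint distribution over three correlated binary sources, takes a hypothetical transmitted quantity T_S2 that is a function of S_2, and evaluates the leakages I(S_2; T_S2) and I(S_1; T_S2). The joint distribution and the choice of T_S2 are assumptions made purely for illustration.

import math
from itertools import product
from collections import defaultdict

def H(dist):
    return -sum(q * math.log2(q) for q in dist if q > 0)

def I(joint):
    # Mutual information I(A;B) from a joint pmf {(a, b): prob}.
    pa, pb = defaultdict(float), defaultdict(float)
    for (a, b), q in joint.items():
        pa[a] += q
        pb[b] += q
    return H(pa.values()) + H(pb.values()) - H(joint.values())

# Toy joint pmf over three correlated binary sources (arbitrary example):
# S2 and S3 are noisy copies of S1.
pS = {}
for s1, s2, s3 in product((0, 1), repeat=3):
    pS[(s1, s2, s3)] = 0.5 * (0.9 if s2 == s1 else 0.1) * (0.8 if s3 == s1 else 0.2)

# Hypothetical transmitted quantity for source S2: here T_S2 = S2 itself,
# a worst case in which the whole of S2 is observable on the link.
joint_S2_T2 = defaultdict(float)
joint_S1_T2 = defaultdict(float)
for (s1, s2, s3), q in pS.items():
    t2 = s2
    joint_S2_T2[(s2, t2)] += q
    joint_S1_T2[(s1, t2)] += q

print(f"leakage about S2 from T_S2: {I(joint_S2_T2):.3f} bits")  # Remark 1: I(S2; T_S2)
print(f"leakage about S1 from T_S2: {I(joint_S1_T2):.3f} bits")  # leakage onto another source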
Each source is responsible for transmitting its own private information, and there is a possibility that this private information may also be leaked. The maximum leakage in this case is thus the uncertainty of the source itself, H(S_i). We also consider the information leakage about a source S_i when another source S_j (j ≠ i) has transmitted information. This gives rise to Remark 2.

Remark 2: The leaked information for a source S_i given the transmitted codeword T_Sj, where i ≠ j, is:

L_{S_i | T_Sj} = H(S_i) − H(S_i | T_Sj) = H(S_i) − [H(S_i) − I(S_i; T_Sj)] = I(S_i; T_Sj)   (225)

The information leakage for a source is determined based on the information transmitted on any other channel, using the common information between them. The private information is not considered, as it is transmitted by each source itself and can therefore not be obtained from an alternate channel. Remark 2 therefore gives an indication of the maximum amount of information leaked about source S_i with knowledge of the syndrome T_Sj. These remarks show that the common information can be used to quantify the leaked information. The common information provides information about more than one source and is therefore susceptible to leaking information about more than one source should it be compromised. This subsection gives an indication of the information leakage for the new generalised model.


Channel Coding for Secure Transmissions Channel Coding for Secure Transmissions March 27, 2017 1 / 51 McEliece Cryptosystem Coding Approach: Noiseless Main Channel Coding Approach: Noisy Main Channel 2 / 51 Outline We present an overiew of linear

More information

Shannon s Noisy-Channel Coding Theorem

Shannon s Noisy-Channel Coding Theorem Shannon s Noisy-Channel Coding Theorem Lucas Slot Sebastian Zur February 2015 Abstract In information theory, Shannon s Noisy-Channel Coding Theorem states that it is possible to communicate over a noisy

More information

(Classical) Information Theory III: Noisy channel coding

(Classical) Information Theory III: Noisy channel coding (Classical) Information Theory III: Noisy channel coding Sibasish Ghosh The Institute of Mathematical Sciences CIT Campus, Taramani, Chennai 600 113, India. p. 1 Abstract What is the best possible way

More information

The simple ideal cipher system

The simple ideal cipher system The simple ideal cipher system Boris Ryabko February 19, 2001 1 Prof. and Head of Department of appl. math and cybernetics Siberian State University of Telecommunication and Computer Science Head of Laboratory

More information

Lecture 8: Shannon s Noise Models

Lecture 8: Shannon s Noise Models Error Correcting Codes: Combinatorics, Algorithms and Applications (Fall 2007) Lecture 8: Shannon s Noise Models September 14, 2007 Lecturer: Atri Rudra Scribe: Sandipan Kundu& Atri Rudra Till now we have

More information

Introduction to Cryptology. Lecture 2

Introduction to Cryptology. Lecture 2 Introduction to Cryptology Lecture 2 Announcements 2 nd vs. 1 st edition of textbook HW1 due Tuesday 2/9 Readings/quizzes (on Canvas) due Friday 2/12 Agenda Last time Historical ciphers and their cryptanalysis

More information

Network coding for multicast relation to compression and generalization of Slepian-Wolf

Network coding for multicast relation to compression and generalization of Slepian-Wolf Network coding for multicast relation to compression and generalization of Slepian-Wolf 1 Overview Review of Slepian-Wolf Distributed network compression Error exponents Source-channel separation issues

More information

Information Theoretic Security and Privacy of Information Systems. Edited by Holger Boche, Ashish Khisti, H. Vincent Poor, and Rafael F.

Information Theoretic Security and Privacy of Information Systems. Edited by Holger Boche, Ashish Khisti, H. Vincent Poor, and Rafael F. Information Theoretic Security and Privacy of Information Systems Edited by Holger Boche, Ashish Khisti, H. Vincent Poor, and Rafael F. Schaefer 1 Secure source coding Paul Cuff and Curt Schieler Abstract

More information

Cryptography. P. Danziger. Transmit...Bob...

Cryptography. P. Danziger. Transmit...Bob... 10.4 Cryptography P. Danziger 1 Cipher Schemes A cryptographic scheme is an example of a code. The special requirement is that the encoded message be difficult to retrieve without some special piece of

More information

Shift Cipher. For 0 i 25, the ith plaintext character is. E.g. k = 3

Shift Cipher. For 0 i 25, the ith plaintext character is. E.g. k = 3 Shift Cipher For 0 i 25, the ith plaintext character is shifted by some value 0 k 25 (mod 26). E.g. k = 3 a b c d e f g h i j k l m n o p q r s t u v w x y z D E F G H I J K L M N O P Q R S T U V W X Y

More information

Reverse Edge Cut-Set Bounds for Secure Network Coding

Reverse Edge Cut-Set Bounds for Secure Network Coding Reverse Edge Cut-Set Bounds for Secure Network Coding Wentao Huang and Tracey Ho California Institute of Technology Michael Langberg University at Buffalo, SUNY Joerg Kliewer New Jersey Institute of Technology

More information

An instantaneous code (prefix code, tree code) with the codeword lengths l 1,..., l N exists if and only if. 2 l i. i=1

An instantaneous code (prefix code, tree code) with the codeword lengths l 1,..., l N exists if and only if. 2 l i. i=1 Kraft s inequality An instantaneous code (prefix code, tree code) with the codeword lengths l 1,..., l N exists if and only if N 2 l i 1 Proof: Suppose that we have a tree code. Let l max = max{l 1,...,

More information

APPLICATIONS. Quantum Communications

APPLICATIONS. Quantum Communications SOFT PROCESSING TECHNIQUES FOR QUANTUM KEY DISTRIBUTION APPLICATIONS Marina Mondin January 27, 2012 Quantum Communications In the past decades, the key to improving computer performance has been the reduction

More information

Lecture 1: Perfect Secrecy and Statistical Authentication. 2 Introduction - Historical vs Modern Cryptography

Lecture 1: Perfect Secrecy and Statistical Authentication. 2 Introduction - Historical vs Modern Cryptography CS 7880 Graduate Cryptography September 10, 2015 Lecture 1: Perfect Secrecy and Statistical Authentication Lecturer: Daniel Wichs Scribe: Matthew Dippel 1 Topic Covered Definition of perfect secrecy One-time

More information

Secret Message Capacity of Erasure Broadcast Channels with Feedback

Secret Message Capacity of Erasure Broadcast Channels with Feedback Secret Message Capacity of Erasure Broadcast Channels with Feedback László Czap Vinod M. Prabhakaran Christina Fragouli École Polytechnique Fédérale de Lausanne, Switzerland Email: laszlo.czap vinod.prabhakaran

More information

Information Theory. Coding and Information Theory. Information Theory Textbooks. Entropy

Information Theory. Coding and Information Theory. Information Theory Textbooks. Entropy Coding and Information Theory Chris Williams, School of Informatics, University of Edinburgh Overview What is information theory? Entropy Coding Information Theory Shannon (1948): Information theory is

More information

18.2 Continuous Alphabet (discrete-time, memoryless) Channel

18.2 Continuous Alphabet (discrete-time, memoryless) Channel 0-704: Information Processing and Learning Spring 0 Lecture 8: Gaussian channel, Parallel channels and Rate-distortion theory Lecturer: Aarti Singh Scribe: Danai Koutra Disclaimer: These notes have not

More information

LECTURE 15. Last time: Feedback channel: setting up the problem. Lecture outline. Joint source and channel coding theorem

LECTURE 15. Last time: Feedback channel: setting up the problem. Lecture outline. Joint source and channel coding theorem LECTURE 15 Last time: Feedback channel: setting up the problem Perfect feedback Feedback capacity Data compression Lecture outline Joint source and channel coding theorem Converse Robustness Brain teaser

More information

Secret Key Agreement Using Conferencing in State- Dependent Multiple Access Channels with An Eavesdropper

Secret Key Agreement Using Conferencing in State- Dependent Multiple Access Channels with An Eavesdropper Secret Key Agreement Using Conferencing in State- Dependent Multiple Access Channels with An Eavesdropper Mohsen Bahrami, Ali Bereyhi, Mahtab Mirmohseni and Mohammad Reza Aref Information Systems and Security

More information

Lecture 11: Quantum Information III - Source Coding

Lecture 11: Quantum Information III - Source Coding CSCI5370 Quantum Computing November 25, 203 Lecture : Quantum Information III - Source Coding Lecturer: Shengyu Zhang Scribe: Hing Yin Tsang. Holevo s bound Suppose Alice has an information source X that

More information

Digital Communications III (ECE 154C) Introduction to Coding and Information Theory

Digital Communications III (ECE 154C) Introduction to Coding and Information Theory Digital Communications III (ECE 154C) Introduction to Coding and Information Theory Tara Javidi These lecture notes were originally developed by late Prof. J. K. Wolf. UC San Diego Spring 2014 1 / 8 I

More information

Lecture 15: Conditional and Joint Typicaility

Lecture 15: Conditional and Joint Typicaility EE376A Information Theory Lecture 1-02/26/2015 Lecture 15: Conditional and Joint Typicaility Lecturer: Kartik Venkat Scribe: Max Zimet, Brian Wai, Sepehr Nezami 1 Notation We always write a sequence of

More information

National University of Singapore Department of Electrical & Computer Engineering. Examination for

National University of Singapore Department of Electrical & Computer Engineering. Examination for National University of Singapore Department of Electrical & Computer Engineering Examination for EE5139R Information Theory for Communication Systems (Semester I, 2014/15) November/December 2014 Time Allowed:

More information

Capacity of a channel Shannon s second theorem. Information Theory 1/33

Capacity of a channel Shannon s second theorem. Information Theory 1/33 Capacity of a channel Shannon s second theorem Information Theory 1/33 Outline 1. Memoryless channels, examples ; 2. Capacity ; 3. Symmetric channels ; 4. Channel Coding ; 5. Shannon s second theorem,

More information

Reliable Computation over Multiple-Access Channels

Reliable Computation over Multiple-Access Channels Reliable Computation over Multiple-Access Channels Bobak Nazer and Michael Gastpar Dept. of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA, 94720-1770 {bobak,

More information

Upper Bounds on the Capacity of Binary Intermittent Communication

Upper Bounds on the Capacity of Binary Intermittent Communication Upper Bounds on the Capacity of Binary Intermittent Communication Mostafa Khoshnevisan and J. Nicholas Laneman Department of Electrical Engineering University of Notre Dame Notre Dame, Indiana 46556 Email:{mhoshne,

More information

ELEC546 Review of Information Theory

ELEC546 Review of Information Theory ELEC546 Review of Information Theory Vincent Lau 1/1/004 1 Review of Information Theory Entropy: Measure of uncertainty of a random variable X. The entropy of X, H(X), is given by: If X is a discrete random

More information

Energy State Amplification in an Energy Harvesting Communication System

Energy State Amplification in an Energy Harvesting Communication System Energy State Amplification in an Energy Harvesting Communication System Omur Ozel Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland College Park, MD 20742 omur@umd.edu

More information

Information-Theoretic Security: an overview

Information-Theoretic Security: an overview Information-Theoretic Security: an overview Rui A Costa 1 Relatório para a disciplina de Seminário, do Mestrado em Informática da Faculdade de Ciências da Universidade do Porto, sob a orientação do Prof

More information

Lecture 8: Channel and source-channel coding theorems; BEC & linear codes. 1 Intuitive justification for upper bound on channel capacity

Lecture 8: Channel and source-channel coding theorems; BEC & linear codes. 1 Intuitive justification for upper bound on channel capacity 5-859: Information Theory and Applications in TCS CMU: Spring 23 Lecture 8: Channel and source-channel coding theorems; BEC & linear codes February 7, 23 Lecturer: Venkatesan Guruswami Scribe: Dan Stahlke

More information

The Method of Types and Its Application to Information Hiding

The Method of Types and Its Application to Information Hiding The Method of Types and Its Application to Information Hiding Pierre Moulin University of Illinois at Urbana-Champaign www.ifp.uiuc.edu/ moulin/talks/eusipco05-slides.pdf EUSIPCO Antalya, September 7,

More information

ECE Information theory Final

ECE Information theory Final ECE 776 - Information theory Final Q1 (1 point) We would like to compress a Gaussian source with zero mean and variance 1 We consider two strategies In the first, we quantize with a step size so that the

More information

Distributed Functional Compression through Graph Coloring

Distributed Functional Compression through Graph Coloring Distributed Functional Compression through Graph Coloring Vishal Doshi, Devavrat Shah, Muriel Médard, and Sidharth Jaggi Laboratory for Information and Decision Systems Massachusetts Institute of Technology

More information

Distributed Source Coding Using LDPC Codes

Distributed Source Coding Using LDPC Codes Distributed Source Coding Using LDPC Codes Telecommunications Laboratory Alex Balatsoukas-Stimming Technical University of Crete May 29, 2010 Telecommunications Laboratory (TUC) Distributed Source Coding

More information

Lecture 2. Capacity of the Gaussian channel

Lecture 2. Capacity of the Gaussian channel Spring, 207 5237S, Wireless Communications II 2. Lecture 2 Capacity of the Gaussian channel Review on basic concepts in inf. theory ( Cover&Thomas: Elements of Inf. Theory, Tse&Viswanath: Appendix B) AWGN

More information

Cryptography. Lecture 2: Perfect Secrecy and its Limitations. Gil Segev

Cryptography. Lecture 2: Perfect Secrecy and its Limitations. Gil Segev Cryptography Lecture 2: Perfect Secrecy and its Limitations Gil Segev Last Week Symmetric-key encryption (KeyGen, Enc, Dec) Historical ciphers that are completely broken The basic principles of modern

More information

Chapter 9 Fundamental Limits in Information Theory

Chapter 9 Fundamental Limits in Information Theory Chapter 9 Fundamental Limits in Information Theory Information Theory is the fundamental theory behind information manipulation, including data compression and data transmission. 9.1 Introduction o For

More information

Topics. Probability Theory. Perfect Secrecy. Information Theory

Topics. Probability Theory. Perfect Secrecy. Information Theory Topics Probability Theory Perfect Secrecy Information Theory Some Terms (P,C,K,E,D) Computational Security Computational effort required to break cryptosystem Provable Security Relative to another, difficult

More information

On Function Computation with Privacy and Secrecy Constraints

On Function Computation with Privacy and Secrecy Constraints 1 On Function Computation with Privacy and Secrecy Constraints Wenwen Tu and Lifeng Lai Abstract In this paper, the problem of function computation with privacy and secrecy constraints is considered. The

More information

Lecture 11: Polar codes construction

Lecture 11: Polar codes construction 15-859: Information Theory and Applications in TCS CMU: Spring 2013 Lecturer: Venkatesan Guruswami Lecture 11: Polar codes construction February 26, 2013 Scribe: Dan Stahlke 1 Polar codes: recap of last

More information

Perfectly-Secret Encryption

Perfectly-Secret Encryption Perfectly-Secret Encryption CSE 5351: Introduction to Cryptography Reading assignment: Read Chapter 2 You may sip proofs, but are encouraged to read some of them. 1 Outline Definition of encryption schemes

More information

Approximately achieving the feedback interference channel capacity with point-to-point codes

Approximately achieving the feedback interference channel capacity with point-to-point codes Approximately achieving the feedback interference channel capacity with point-to-point codes Joyson Sebastian*, Can Karakus*, Suhas Diggavi* Abstract Superposition codes with rate-splitting have been used

More information

Secret Key and Private Key Constructions for Simple Multiterminal Source Models

Secret Key and Private Key Constructions for Simple Multiterminal Source Models Secret Key and Private Key Constructions for Simple Multiterminal Source Models arxiv:cs/05050v [csit] 3 Nov 005 Chunxuan Ye Department of Electrical and Computer Engineering and Institute for Systems

More information

Outline. Computer Science 418. Number of Keys in the Sum. More on Perfect Secrecy, One-Time Pad, Entropy. Mike Jacobson. Week 3

Outline. Computer Science 418. Number of Keys in the Sum. More on Perfect Secrecy, One-Time Pad, Entropy. Mike Jacobson. Week 3 Outline Computer Science 48 More on Perfect Secrecy, One-Time Pad, Mike Jacobson Department of Computer Science University of Calgary Week 3 2 3 Mike Jacobson (University of Calgary) Computer Science 48

More information

Information-theoretic Secrecy A Cryptographic Perspective

Information-theoretic Secrecy A Cryptographic Perspective Information-theoretic Secrecy A Cryptographic Perspective Stefano Tessaro UC Santa Barbara WCS 2017 April 30, 2017 based on joint works with M. Bellare and A. Vardy Cryptography Computational assumptions

More information

Side-information Scalable Source Coding

Side-information Scalable Source Coding Side-information Scalable Source Coding Chao Tian, Member, IEEE, Suhas N. Diggavi, Member, IEEE Abstract The problem of side-information scalable (SI-scalable) source coding is considered in this work,

More information

Lecture 9 - Symmetric Encryption

Lecture 9 - Symmetric Encryption 0368.4162: Introduction to Cryptography Ran Canetti Lecture 9 - Symmetric Encryption 29 December 2008 Fall 2008 Scribes: R. Levi, M. Rosen 1 Introduction Encryption, or guaranteeing secrecy of information,

More information

X 1 : X Table 1: Y = X X 2

X 1 : X Table 1: Y = X X 2 ECE 534: Elements of Information Theory, Fall 200 Homework 3 Solutions (ALL DUE to Kenneth S. Palacio Baus) December, 200. Problem 5.20. Multiple access (a) Find the capacity region for the multiple-access

More information

On Common Information and the Encoding of Sources that are Not Successively Refinable

On Common Information and the Encoding of Sources that are Not Successively Refinable On Common Information and the Encoding of Sources that are Not Successively Refinable Kumar Viswanatha, Emrah Akyol, Tejaswi Nanjundaswamy and Kenneth Rose ECE Department, University of California - Santa

More information

The Communication Complexity of Correlation. Prahladh Harsha Rahul Jain David McAllester Jaikumar Radhakrishnan

The Communication Complexity of Correlation. Prahladh Harsha Rahul Jain David McAllester Jaikumar Radhakrishnan The Communication Complexity of Correlation Prahladh Harsha Rahul Jain David McAllester Jaikumar Radhakrishnan Transmitting Correlated Variables (X, Y) pair of correlated random variables Transmitting

More information

Lecture 5: Channel Capacity. Copyright G. Caire (Sample Lectures) 122

Lecture 5: Channel Capacity. Copyright G. Caire (Sample Lectures) 122 Lecture 5: Channel Capacity Copyright G. Caire (Sample Lectures) 122 M Definitions and Problem Setup 2 X n Y n Encoder p(y x) Decoder ˆM Message Channel Estimate Definition 11. Discrete Memoryless Channel

More information

Superposition Encoding and Partial Decoding Is Optimal for a Class of Z-interference Channels

Superposition Encoding and Partial Decoding Is Optimal for a Class of Z-interference Channels Superposition Encoding and Partial Decoding Is Optimal for a Class of Z-interference Channels Nan Liu and Andrea Goldsmith Department of Electrical Engineering Stanford University, Stanford CA 94305 Email:

More information

Homework Set #2 Data Compression, Huffman code and AEP

Homework Set #2 Data Compression, Huffman code and AEP Homework Set #2 Data Compression, Huffman code and AEP 1. Huffman coding. Consider the random variable ( x1 x X = 2 x 3 x 4 x 5 x 6 x 7 0.50 0.26 0.11 0.04 0.04 0.03 0.02 (a Find a binary Huffman code

More information

CS6304 / Analog and Digital Communication UNIT IV - SOURCE AND ERROR CONTROL CODING PART A 1. What is the use of error control coding? The main use of error control coding is to reduce the overall probability

More information

Lecture 22: Final Review

Lecture 22: Final Review Lecture 22: Final Review Nuts and bolts Fundamental questions and limits Tools Practical algorithms Future topics Dr Yao Xie, ECE587, Information Theory, Duke University Basics Dr Yao Xie, ECE587, Information

More information

Interactive Communication for Data Exchange

Interactive Communication for Data Exchange Interactive Communication for Data Exchange Himanshu Tyagi Indian Institute of Science, Bangalore Joint work with Pramod Viswanath and Shun Watanabe The Data Exchange Problem [ElGamal-Orlitsky 84], [Csiszár-Narayan

More information

4F5: Advanced Communications and Coding Handout 2: The Typical Set, Compression, Mutual Information

4F5: Advanced Communications and Coding Handout 2: The Typical Set, Compression, Mutual Information 4F5: Advanced Communications and Coding Handout 2: The Typical Set, Compression, Mutual Information Ramji Venkataramanan Signal Processing and Communications Lab Department of Engineering ramji.v@eng.cam.ac.uk

More information

Information Theory CHAPTER. 5.1 Introduction. 5.2 Entropy

Information Theory CHAPTER. 5.1 Introduction. 5.2 Entropy Haykin_ch05_pp3.fm Page 207 Monday, November 26, 202 2:44 PM CHAPTER 5 Information Theory 5. Introduction As mentioned in Chapter and reiterated along the way, the purpose of a communication system is

More information

Secure RAID Schemes from EVENODD and STAR Codes

Secure RAID Schemes from EVENODD and STAR Codes Secure RAID Schemes from EVENODD and STAR Codes Wentao Huang and Jehoshua Bruck California Institute of Technology, Pasadena, USA {whuang,bruck}@caltechedu Abstract We study secure RAID, ie, low-complexity

More information

Notes 3: Stochastic channels and noisy coding theorem bound. 1 Model of information communication and noisy channel

Notes 3: Stochastic channels and noisy coding theorem bound. 1 Model of information communication and noisy channel Introduction to Coding Theory CMU: Spring 2010 Notes 3: Stochastic channels and noisy coding theorem bound January 2010 Lecturer: Venkatesan Guruswami Scribe: Venkatesan Guruswami We now turn to the basic

More information

Outer Bounds on the Secrecy Capacity Region of the 2-user Z Interference Channel With Unidirectional Transmitter Cooperation

Outer Bounds on the Secrecy Capacity Region of the 2-user Z Interference Channel With Unidirectional Transmitter Cooperation Outer Bounds on the Secrecy Capacity Region of the 2-user Z Interference Channel With Unidirectional Transmitter Cooperation Parthajit Mohapatra, Chandra R. Murthy, and Jemin Lee itrust, Centre for Research

More information

Optimal XOR based (2,n)-Visual Cryptography Schemes

Optimal XOR based (2,n)-Visual Cryptography Schemes Optimal XOR based (2,n)-Visual Cryptography Schemes Feng Liu and ChuanKun Wu State Key Laboratory Of Information Security, Institute of Software Chinese Academy of Sciences, Beijing 0090, China Email:

More information

Codes used in Cryptography

Codes used in Cryptography Prasad Krishnan Signal Processing and Communications Research Center, International Institute of Information Technology, Hyderabad March 29, 2016 Outline Coding Theory and Cryptography Linear Codes Codes

More information

On the Impact of Quantized Channel Feedback in Guaranteeing Secrecy with Artificial Noise

On the Impact of Quantized Channel Feedback in Guaranteeing Secrecy with Artificial Noise On the Impact of Quantized Channel Feedback in Guaranteeing Secrecy with Artificial Noise Ya-Lan Liang, Yung-Shun Wang, Tsung-Hui Chang, Y.-W. Peter Hong, and Chong-Yung Chi Institute of Communications

More information

Noisy channel communication

Noisy channel communication Information Theory http://www.inf.ed.ac.uk/teaching/courses/it/ Week 6 Communication channels and Information Some notes on the noisy channel setup: Iain Murray, 2012 School of Informatics, University

More information

Group Secret Key Agreement over State-Dependent Wireless Broadcast Channels

Group Secret Key Agreement over State-Dependent Wireless Broadcast Channels Group Secret Key Agreement over State-Dependent Wireless Broadcast Channels Mahdi Jafari Siavoshani Sharif University of Technology, Iran Shaunak Mishra, Suhas Diggavi, Christina Fragouli Institute of

More information

Solutions to Homework Set #1 Sanov s Theorem, Rate distortion

Solutions to Homework Set #1 Sanov s Theorem, Rate distortion st Semester 00/ Solutions to Homework Set # Sanov s Theorem, Rate distortion. Sanov s theorem: Prove the simple version of Sanov s theorem for the binary random variables, i.e., let X,X,...,X n be a sequence

More information

EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018

EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018 Please submit the solutions on Gradescope. EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018 1. Optimal codeword lengths. Although the codeword lengths of an optimal variable length code

More information

Information Theory. Lecture 10. Network Information Theory (CT15); a focus on channel capacity results

Information Theory. Lecture 10. Network Information Theory (CT15); a focus on channel capacity results Information Theory Lecture 10 Network Information Theory (CT15); a focus on channel capacity results The (two-user) multiple access channel (15.3) The (two-user) broadcast channel (15.6) The relay channel

More information

Noisy-Channel Coding

Noisy-Channel Coding Copyright Cambridge University Press 2003. On-screen viewing permitted. Printing not permitted. http://www.cambridge.org/05264298 Part II Noisy-Channel Coding Copyright Cambridge University Press 2003.

More information

Keyless authentication in the presence of a simultaneously transmitting adversary

Keyless authentication in the presence of a simultaneously transmitting adversary Keyless authentication in the presence of a simultaneously transmitting adversary Eric Graves Army Research Lab Adelphi MD 20783 U.S.A. ericsgra@ufl.edu Paul Yu Army Research Lab Adelphi MD 20783 U.S.A.

More information

Roll No. :... Invigilator's Signature :.. CS/B.TECH(ECE)/SEM-7/EC-703/ CODING & INFORMATION THEORY. Time Allotted : 3 Hours Full Marks : 70

Roll No. :... Invigilator's Signature :.. CS/B.TECH(ECE)/SEM-7/EC-703/ CODING & INFORMATION THEORY. Time Allotted : 3 Hours Full Marks : 70 Name : Roll No. :.... Invigilator's Signature :.. CS/B.TECH(ECE)/SEM-7/EC-703/2011-12 2011 CODING & INFORMATION THEORY Time Allotted : 3 Hours Full Marks : 70 The figures in the margin indicate full marks

More information

Cryptography - Session 2

Cryptography - Session 2 Cryptography - Session 2 O. Geil, Aalborg University November 18, 2010 Random variables Discrete random variable X: 1. Probability distribution on finite set X. 2. For x X write Pr(x) = Pr(X = x). X and

More information

Lecture 3: Channel Capacity

Lecture 3: Channel Capacity Lecture 3: Channel Capacity 1 Definitions Channel capacity is a measure of maximum information per channel usage one can get through a channel. This one of the fundamental concepts in information theory.

More information

Codes for Partially Stuck-at Memory Cells

Codes for Partially Stuck-at Memory Cells 1 Codes for Partially Stuck-at Memory Cells Antonia Wachter-Zeh and Eitan Yaakobi Department of Computer Science Technion Israel Institute of Technology, Haifa, Israel Email: {antonia, yaakobi@cs.technion.ac.il

More information

An Introduction to Probabilistic Encryption

An Introduction to Probabilistic Encryption Osječki matematički list 6(2006), 37 44 37 An Introduction to Probabilistic Encryption Georg J. Fuchsbauer Abstract. An introduction to probabilistic encryption is given, presenting the first probabilistic

More information

3F1 Information Theory, Lecture 3

3F1 Information Theory, Lecture 3 3F1 Information Theory, Lecture 3 Jossy Sayir Department of Engineering Michaelmas 2011, 28 November 2011 Memoryless Sources Arithmetic Coding Sources with Memory 2 / 19 Summary of last lecture Prefix-free

More information

CS 282A/MATH 209A: Foundations of Cryptography Prof. Rafail Ostrosky. Lecture 4

CS 282A/MATH 209A: Foundations of Cryptography Prof. Rafail Ostrosky. Lecture 4 CS 282A/MATH 209A: Foundations of Cryptography Prof. Rafail Ostrosky Lecture 4 Lecture date: January 26, 2005 Scribe: Paul Ray, Mike Welch, Fernando Pereira 1 Private Key Encryption Consider a game between

More information

A Practical and Optimal Symmetric Slepian-Wolf Compression Strategy Using Syndrome Formers and Inverse Syndrome Formers

A Practical and Optimal Symmetric Slepian-Wolf Compression Strategy Using Syndrome Formers and Inverse Syndrome Formers A Practical and Optimal Symmetric Slepian-Wolf Compression Strategy Using Syndrome Formers and Inverse Syndrome Formers Peiyu Tan and Jing Li (Tiffany) Electrical and Computer Engineering Dept, Lehigh

More information