Secret Key Agreement: General Capacity and Second-Order Asymptotics


Masahito Hayashi, Himanshu Tyagi, Shun Watanabe

Abstract: We revisit the problem of secret key agreement using interactive public communication for two parties and propose a new secret key agreement protocol. The protocol attains the secret key capacity for general observations and attains the second-order asymptotic term in the maximum length of a secret key for independent and identically distributed observations. In contrast to previously suggested secret key agreement protocols, the proposed protocol uses interactive communication. In fact, the standard one-way communication protocol used prior to this work fails to attain the asymptotic results above. Our converse proofs rely on a recently established upper bound for secret key lengths. Both our lower and upper bounds are derived in a single-shot setup, and the asymptotic results are obtained as corollaries.

I. INTRODUCTION

Two parties observing random variables (RVs) X and Y seek to agree on a secret key. They can communicate interactively over an error-free, authenticated, albeit insecure, communication channel of unlimited capacity. The secret key must be concealed from an eavesdropper with access to the communication and additional side information Z. What is the maximum length S(X, Y|Z) of a secret key that the parties can agree upon? A study of this question was initiated by Maurer [9] and Ahlswede and Csiszár [] for the case where the observations of the parties and the eavesdropper consist of n independent and identically distributed

M. Hayashi is with the Graduate School of Mathematics, Nagoya University, Japan, and the Center for Quantum Technologies, National University of Singapore, Singapore (masahito@math.nagoya-u.ac.jp). H. Tyagi is with the Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore 56002, India (htyagi@ece.iisc.ernet.in). S. Watanabe is with the Department of Computer and Information Sciences, Tokyo University of Agriculture and Technology, Tokyo, Japan (shunwata@cc.tuat.ac.jp). An initial version of this paper was presented at the IEEE International Symposium on Information Theory, Hawaii, USA, 2014.

(IID) repetitions (X^n, Y^n, Z^n) of RVs (X, Y, Z). For the case when X - Y - Z form a Markov chain, it was shown in [9], [] that the secret key capacity equals I(X ∧ Y|Z), namely

S(X^n, Y^n|Z^n) = nI(X ∧ Y|Z) + o(n).

However, in several applications (see, for instance, [6]) the observed data is not IID, or, even if the observations are IID, the observation length n is limited and a more precise asymptotic analysis is needed. In this paper, we address the secret key agreement problem for these two important practical situations. First, when the observations consist of general sources (cf. [], [0]) (X_n, Y_n, Z_n) such that X_n - Y_n - Z_n is a Markov chain, we show that

S(X_n, Y_n|Z_n) = n I_inf(X ∧ Y|Z) + o(n),

where I_inf(X ∧ Y|Z) is the inf-conditional information rate of X and Y given Z. Next, for the IID case with X - Y - Z, we identify the second-order asymptotic term in S(X^n, Y^n|Z^n). Specifically, denoting by S_{ε,δ}(X, Y|Z) the maximum length of a secret key on which the parties agree with probability greater than 1 − ε and with secrecy parameter less than δ, we show that

S_{ε,δ}(X^n, Y^n|Z^n) = nI(X ∧ Y|Z) − sqrt(nV) Q^{−1}(ε + δ) ± O(log n),

where Q^{−1} is the inverse of the tail probability Q of the standard Gaussian distribution and

V := Var[ log ( P_{XY|Z}(X, Y|Z) / (P_{X|Z}(X|Z) P_{Y|Z}(Y|Z)) ) ].

In particular, our bounds allow us to evaluate the gap to secret key capacity at a finite blocklength n. In Figure 1 we illustrate this gap between the maximum possible rate of a secret key at a fixed n and the secret key capacity for the case where Z is a random bit, Y is obtained by flipping Z with probability 0.25, and X is obtained by flipping Y with probability 0.25; see Example 1 in Section VI for details. Underlying these results is a general single-shot characterization of the secret key length which shows that, when X - Y - Z, S_{ε,δ}(X, Y|Z) roughly equals the (ε + δ)-tail of the random variable

i(X, Y|Z) = log [ P_{XY|Z}(X, Y|Z) / (P_{X|Z}(X|Z) P_{Y|Z}(Y|Z)) ].
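The quantities above can be evaluated by direct enumeration for the example source. The following sketch (our own illustration, not code from the paper) computes I(X ∧ Y|Z), the dispersion V, and the resulting Gaussian approximation for the key rate at finite n; the bisection-based Q^{−1} is our own helper.

```python
import math
from itertools import product

# Example source (sketch): Z is a fair bit, Y flips Z w.p. 0.25, X flips Y
# w.p. 0.25, so X - Y - Z is a Markov chain.
p = 0.25
P = {}  # joint distribution P_{XYZ}, built by enumerating the flip patterns
for z, fy, fx in product([0, 1], repeat=3):
    y = z ^ fy
    x = y ^ fx
    pr = 0.5 * (p if fy else 1 - p) * (p if fx else 1 - p)
    P[(x, y, z)] = P.get((x, y, z), 0.0) + pr

PZ, PXZ, PYZ = {}, {}, {}
for (x, y, z), pr in P.items():
    PZ[z] = PZ.get(z, 0.0) + pr
    PXZ[(x, z)] = PXZ.get((x, z), 0.0) + pr
    PYZ[(y, z)] = PYZ.get((y, z), 0.0) + pr

def i_dens(x, y, z):  # conditional information density i(x, y|z), in bits
    return math.log2((P[(x, y, z)] / PZ[z])
                     / ((PXZ[(x, z)] / PZ[z]) * (PYZ[(y, z)] / PZ[z])))

I = sum(pr * i_dens(*k) for k, pr in P.items())           # = I(X ^ Y | Z)
V = sum(pr * (i_dens(*k) - I) ** 2 for k, pr in P.items())  # dispersion

def Q(a):  # standard Gaussian tail probability
    return 0.5 * math.erfc(a / math.sqrt(2))

def Qinv(pval):  # inverse of Q via bisection (Q is decreasing)
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if Q(mid) > pval else (lo, mid)
    return (lo + hi) / 2

def approx_rate(n, eps_plus_delta):  # (1/n) * [n*I - sqrt(n*V)*Qinv(eps+delta)]
    return I - math.sqrt(V / n) * Qinv(eps_plus_delta)

print(I, V, approx_rate(10**4, 0.01))
```

For this source I(X ∧ Y|Z) ≈ 0.143 bit, and the approximate rate at n = 10^4 is visibly below capacity, which is the gap plotted in Figure 1.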
Our main technical contribution in proving this result is a new single-shot secret key agreement protocol

Footnote 1: Following the pioneering work of Strassen [28], the study of such second-order terms in coding theorems has been revived recently by Hayashi [3], [4] and Polyanskiy, Poor, and Verdú [23].

Fig. 1. Gap to secret key capacity at finite n for ε + δ = 0.01, 0.05, 0.1.

which uses interactive communication and attains the desired optimal performance. It was observed in [9], [] that a simple one-way communication protocol suffices to attain the secret key capacity when the Markov relation X - Y - Z holds. Also, for a multiterminal setup with constant Z, [4] showed that a noninteractive communication protocol achieves the secret key capacity. Prior to this work, when the Markov relation X - Y - Z holds, only such noninteractive communication protocols were used for generating secret keys, even in single-shot setups (cf. [25]). (Footnote 2) In contrast, our proposed protocol uses interactive communication. We note in Remark 4 that none of the standard one-way communication protocols achieves the optimal asymptotic bounds, suggesting that perhaps interaction is necessary for generating a secret key of optimal length (see Section VII for further discussion and an illustrative example). Typically, secret key agreement protocols consist of two steps: information reconciliation and privacy amplification. In the first step, the parties communicate to generate some shared random bits, termed common randomness. However, the communication used leaks some information about the generated common randomness. Therefore, a second privacy amplification step is employed to extract from the common randomness secure random bits that are almost independent of the communication used. For IID observations, the information reconciliation step of the standard one-way secret key agreement protocol entails the two parties agreeing on X using a one-way communication of rate H(X|Y). In the privacy

Footnote 2: Interaction is known to help in some cases where neither X - Y - Z nor Y - X - Z is satisfied [9], [35], [9].

amplification step, the rate-H(X|Z) residual randomness of X, which is almost independent of Z, is used to extract a secret key of rate H(X|Z) − H(X|Y), which is jointly independent of Z and the communication used in the information reconciliation stage. Under the Markov condition X - Y - Z, the resulting secret key attains the secret key capacity I(X ∧ Y|Z). However, in the single-shot regime, the behavior of the RVs −log P_{X|Z}(X|Z) and −log P_{X|Y}(X|Y), rather than their expected values, becomes relevant. The difficulty in extending the standard one-way communication protocol to the single-shot setup lies in the spread of the information spectra (Footnote 3) of P_{X|Y} and P_{X|Z}. Specifically, while we require the random variable −log P_{X|Y}(X|Y) itself to show up as the length of communication in the information reconciliation step, a naive extension requires as much communication as a large probability tail of −log P_{X|Y}(X|Y). To remedy this, we slice the spectrum of P_{X|Y} into slices (Footnote 4) of length Δ each and adapt the protocol to the slice which contains (X, Y). However, since neither party knows the value of −log P_{X|Y}(X|Y), this adaptation requires interactive communication. Motivating this work, and underlying our converse proof, is a recently established single-shot upper bound on secret key lengths for the multiparty secret key agreement problem [33] (see, also, [3]). The proof relies on relating secret key agreement to binary hypothesis testing. In spirit, this result can be regarded as a multiterminal variant of a similar single-shot converse for the channel coding problem, which appeared first in [2], [5] and has been termed the meta-converse by Polyanskiy, Poor, and Verdú [23], [22] (see, also, [34] and [2, Section 4.6]). The basic concepts of secret key agreement and a general result for converting a high reliability protocol to a high secrecy protocol are given in the next section. In Section III, we review the single-shot upper bound of [33] for the two party case.
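The one-way rate claim above can be checked numerically: under the Markov condition, H(X|Z) − H(X|Y) equals I(X ∧ Y|Z). A small sketch (our own, using the toy chain of Example 1; the helper names are ours):

```python
import math
from itertools import product

# Toy Markov chain X - Y - Z: Z fair bit, Y = Z flipped w.p. 0.25,
# X = Y flipped w.p. 0.25.
p = 0.25
P = {}
for z, fy, fx in product([0, 1], repeat=3):
    y = z ^ fy
    x = y ^ fx
    pr = 0.5 * (p if fy else 1 - p) * (p if fx else 1 - p)
    P[(x, y, z)] = P.get((x, y, z), 0.0) + pr

PXY, PXZ, PY, PZ = {}, {}, {}, {}
for (x, y, z), pr in P.items():
    PXY[(x, y)] = PXY.get((x, y), 0.0) + pr
    PXZ[(x, z)] = PXZ.get((x, z), 0.0) + pr
    PY[y] = PY.get(y, 0.0) + pr
    PZ[z] = PZ.get(z, 0.0) + pr

def cond_entropy(pair_marg, cond_marg):
    # H(X|W) = -sum_{x,w} P(x,w) log2 [P(x,w)/P(w)], in bits
    return -sum(pr * math.log2(pr / cond_marg[w])
                for (x, w), pr in pair_marg.items() if pr > 0)

H_X_given_Y = cond_entropy(PXY, PY)   # ~ h(0.25)
H_X_given_Z = cond_entropy(PXZ, PZ)   # ~ h(0.375)
rate = H_X_given_Z - H_X_given_Y      # one-way key rate = I(X ^ Y | Z)
print(H_X_given_Y, H_X_given_Z, rate)
```

Here the one-way key rate comes out to about 0.143 bit per observation, matching the capacity of this chain.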
Our new secret key agreement protocol and its single-shot performance analysis are presented in Section IV. The single-shot results are applied to general sources in Section V and to IID sources in Section VI. The final section contains a discussion on the role of interaction in our secret key agreement protocols.

II. SECRET KEYS

We consider the problem of secret key agreement using interactive public communication by two (trusted) parties observing RVs X and Y taking values in countable sets X and Y, respectively. Upon making these observations, the parties communicate interactively over a public communication channel

Footnote 3: The range of the log-likelihood −log P_X(x) (conditional log-likelihood −log P_{X|Y}(x|y)) is referred to as the information spectrum of P_X (conditional information spectrum of P_{X|Y}). This notion was introduced in the seminal work [] and is appropriate for deriving single-shot coding theorems, without making assumptions on the underlying distribution. See [0] for a detailed account.

Footnote 4: See Appendix C for a secret key agreement protocol based on slicing the spectrum of P_X.

that is accessible by an eavesdropper. We assume that the communication channel is error-free and authenticated. Specifically, the communication is sent over r rounds of interaction. (Footnote 5) In the jth round of communication, 1 ≤ j ≤ r, each party sends a message which is a function of its observation, locally generated randomness (Footnote 6) denoted by U_x and U_y, and the previously observed communication. The overall interactive communication is denoted by F. In addition to F, the eavesdropper observes a RV Z taking values in a countable set Z. The joint distribution P_{XYZ} is known to all parties. Using the interactive communication F and their local observations, the parties agree on a secret key. A RV K constitutes a secret key if the two parties form estimates that agree with K with probability close to 1 and K is concealed, in effect, from an eavesdropper with access to (F, Z). Formally, we have the following definition.

Definition 1. A RV K with range K constitutes an (ε, δ)-secret key ((ε, δ)-SK) if there exist functions K_x and K_y of (U_x, X, F) and (U_y, Y, F), respectively, such that the following two conditions are satisfied:

P(K_x = K_y = K) ≥ 1 − ε,    (1)

‖P_{KFZ} − P_unif × P_{FZ}‖ ≤ δ,    (2)

where P_unif is the uniform distribution on K and

‖P − Q‖ = (1/2) Σ_u |P(u) − Q(u)|.

The first condition above represents the reliability of the secret key and the second condition guarantees secrecy.

Definition 2. Given ε, δ ∈ [0, 1), the supremum over the lengths log|K| of an (ε, δ)-SK is denoted by S_{ε,δ}(X, Y|Z).

Remark 1. The only interesting case is when ε + δ < 1, since otherwise S_{ε,δ}(X, Y|Z) is unbounded. Indeed, consider two trivial secret keys K_1 and K_2 with range K generated as follows: For K_1, the first party generates K_x = K_1 uniformly over K and sends it to the second party. Thus, K_1 constitutes a (0, 1 − 1/|K|)-SK, and therefore, also a (0, 1)-SK. For K_2, the first party generates K_x = K_2 uniformly over K and the second party generates K_y uniformly over K.
Then, K_2 constitutes a (1 − 1/|K|, 0)-SK, and therefore, also a (1, 0)-SK. If ε + δ ≥ 1, the RV K which equals K_1 with probability (1 − ε) and

Footnote 5: In the asymptotic regime considered in Section V, the number of rounds r may depend on the block length n.

Footnote 6: The RVs U_x and U_y are mutually independent and jointly independent of (X, Y).

K_2 with probability ε constitutes an (ε, 1 − ε)-SK of length log|K|, and therefore, also an (ε, δ)-SK of the same length. Since |K| was arbitrary, S_{ε,δ}(X, Y|Z) = ∞.

Remark 1 exhibits a high reliability (0, 1)-SK and a high secrecy (1, 0)-SK for the trivial case ε + δ ≥ 1. The two constructions together sufficed to characterize S_{ε,δ}(X, Y|Z). Following a similar approach for the regime ε + δ < 1, we can construct a high reliability (ε + δ, 0)-SK and a high secrecy (0, ε + δ)-SK and randomize over those two secret keys with probabilities ε/(ε + δ) and δ/(ε + δ) to obtain a hybrid (ε, δ)-SK. However, the results below show that we do not need to construct both high secrecy and high reliability secret keys for the secrecy definition in (2), and a high reliability construction alone will suffice. We first show that any (ε, δ)-SK can be converted into a high secrecy (ε + δ, 0)-SK.

Proposition 1 (Conversion to High Secrecy Protocol). Given an (ε, δ)-SK, there exists an (ε + δ, 0)-SK of the same length.

Proof. Let K be an (ε, δ)-SK using interactive communication F, with local estimates K_x and K_y. We construct a new (ε + δ, 0)-SK K' using the maximal coupling lemma, which asserts the following (cf. [29]): Given two distributions P and Q on a set X, there exists a joint distribution P_{XX'} on X × X such that the marginals are P_X = P and P_{X'} = Q, and under P_{XX'},

P(X ≠ X') = ‖P − Q‖.    (3)

The distribution P_{XX'} is called the maximal coupling of P and Q. For each fixed realization of (F, Z), let P_{K'K|F,Z} be the maximal coupling of P_{K|FZ} and P_unif. Then P_{K'FZ} = P_unif × P_{FZ}, and since K is an (ε, δ)-SK, we get by the maximal coupling property (3) that

P(K' ≠ K) = ‖P_{KFZ} − P_unif × P_{FZ}‖ ≤ δ.

Define

P_{K'KK_xK_yFXYZU_xU_y} := P_{K'|KFZ} P_{KK_xK_yFXYZU_xU_y}.    (4)

Since P(K = K_x = K_y) ≥ 1 − ε, under P_{K'KK_xK_yFXYZ} we have

P(K_x = K_y = K') ≥ 1 − ε − δ.

Thus, K' constitutes an (ε + δ, 0)-SK.
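The maximal coupling used in the proof can be constructed explicitly. A minimal sketch with two small hypothetical distributions (the standard construction: put mass min(P, Q) on the diagonal and couple the leftover masses off the diagonal; not code from the paper):

```python
# Two toy distributions on {a, b, c} (our example, not from the paper).
P = {'a': 0.5, 'b': 0.3, 'c': 0.2}
Q = {'a': 0.2, 'b': 0.3, 'c': 0.5}

tv = 0.5 * sum(abs(P[v] - Q[v]) for v in P)  # variational distance ||P - Q||

# Diagonal mass: min(P, Q); leftover masses coupled proportionally.
common = {v: min(P[v], Q[v]) for v in P}
excess_P = {v: P[v] - common[v] for v in P}
excess_Q = {v: Q[v] - common[v] for v in Q}

joint = {(v, v): common[v] for v in P}  # maximal coupling P_{XX'}
for u in P:
    for v in Q:
        if tv > 0 and excess_P[u] > 0 and excess_Q[v] > 0:
            joint[(u, v)] = joint.get((u, v), 0.0) + excess_P[u] * excess_Q[v] / tv

marg_P = {u: sum(pr for (a, b), pr in joint.items() if a == u) for u in P}
marg_Q = {v: sum(pr for (a, b), pr in joint.items() if b == v) for v in Q}
mismatch = sum(pr for (a, b), pr in joint.items() if a != b)  # P(X != X')
print(tv, mismatch)
```

The check confirms property (3): the marginals of the coupling are exactly P and Q, and P(X ≠ X') equals ‖P − Q‖.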
Proposition 1 plays an important role in our secret key agreement protocol and allows us to convert a high reliability (η, α)-SK with small η into an (ε, δ)-SK for arbitrary ε and δ satisfying (roughly)

ε + δ ≥ α. Formally, we have the following.

Proposition 2 (Hybrid Protocol). Given a protocol for generating an (η, α)-SK, there exists a protocol for generating an (ε, δ)-SK of the same length for every 0 < ε, δ < 1 such that ε ≥ η and ε + δ ≥ α + η.

Proof. Given an (η, α)-SK K_1, by Proposition 1 there exists an (α + η, 0)-SK K_2. Let θ = δ/(ε − η + δ). Consider a secret key K obtained by a hybrid use of the protocols for generating K_1 and K_2, with the protocol for K_1 executed with probability θ and that for K_2 with probability 1 − θ. Note from the proof of Proposition 1 that it is the same secret key agreement protocol (K_x, K_y, F) that generates both K_1 and K_2. Thus, the claim follows for the time-shared secret key K since

P(K = K_x = K_y) = θ P(K_1 = K_x = K_y) + (1 − θ) P(K_2 = K_x = K_y)
 ≥ 1 − [δη + (ε − η)(α + η)] / (ε − η + δ)
 ≥ 1 − [δη + (ε − η)(ε + δ)] / (ε − η + δ)    (5)
 = 1 − ε,

and

‖P_{KFZ} − P_unif × P_{FZ}‖ ≤ θ ‖P_{K_1FZ} − P_unif × P_{FZ}‖ + (1 − θ) ‖P_{K_2FZ} − P_unif × P_{FZ}‖
 ≤ [δα + (ε − η) · 0] / (ε − η + δ)
 ≤ δ(ε + δ − η) / (ε − η + δ)    (6)
 = δ,

where we have used the assumption ε + δ ≥ α + η in (5) and (6).

Remark 2. Note that the actual secret key K in Definition 1 is not available to any party and has only a formal role in the secret key agreement protocol. Interestingly, the proof above says that the estimates (K_x, K_y) of a high reliability (η, α)-SK with η ≈ 0 constitute an (ε, δ)-SK as well for every ε + δ ≥ α, albeit for a different hidden RV K. Thus, it suffices to exhibit a high reliability (η, α)-SK with small η and desired α. Such a protocol is

given in Section IV and underlies all our achievability results.

A more demanding secrecy requirement. A more demanding secrecy requirement enforces one of the estimates K_x or K_y itself to be secure, i.e.,

‖P_{K_xFZ} − P_unif × P_{FZ}‖ ≤ δ  or  ‖P_{K_yFZ} − P_unif × P_{FZ}‖ ≤ δ.    (7)

The validity of Proposition 1 for this secrecy requirement remains open. However, in the important special case when Z is a function of either X or Y, which includes the case of constant Z, Proposition 1 holds even under the more demanding secrecy requirement (7). Indeed, let (K_x, K_y) be an (ε, δ)-SK with K_x satisfying the more demanding secrecy requirement above. Proceeding as in the proof of Proposition 1 with K_x in the role of K, we obtain a RV K' such that P(K' ≠ K_y) ≤ ε + δ and P_{K'FZ} = P_unif × P_{FZ}. Let the joint distribution P_{K'K_xK_yFXYZU_xU_y} be as in (4) with K_x replacing K. To claim that K' constitutes an (ε + δ, 0)-SK under (7), it suffices to show that one of the parties can simulate K'. To that end, the party observing X can first run the original secret key agreement protocol to get K_x and F. Also, Z is available to this party since it is a function of X. Thus, this party can simulate the required RV K' using the distribution P_{K'|K_xFZ}, which completes the proof of Proposition 1 under the more demanding secrecy requirement. Note that the argument above relies on using local randomness to simulate K'. For the original secrecy requirement (2), Proposition 1 holds even when we restrict to deterministic protocols with no local randomness allowed. It turns out that this is not the case for the more demanding secrecy requirement (7), as the following simple counterexample shows: Let X be a binary RV taking the value 1 with probability p < 1/2, and let Y = Z = constant. Then, K_x = X constitutes a 1-bit (p, 1/2 − p)-SK under (7). If Proposition 1 held, the parties should be able to generate a (1/2, 0)-SK. However, a (1/2, 0)-SK consists of an unbiased bit, which cannot be generated without additional randomness.
Therefore, for secrecy requirement (7), Proposition 1 does not hold if we restrict to deterministic protocols. To conclude, for the special case when Z is a function of X, it suffices to construct only a high reliability protocol, provided that local randomness is available. In fact, the high reliability protocol proposed in Section IV satisfies the more demanding secrecy requirement (7) and, if Z is a function of either X or Y, all the results of this paper hold even under (7).

III. UPPER BOUND ON S_{ε,δ}(X, Y|Z)

We recall the conditional independence testing upper bound on S_{ε,δ}(X, Y|Z), which was established recently in [33], [32]. In fact, the general upper bound in [33], [32] is a single-shot upper bound on the secret key length for a multiparty secret key agreement problem. We recall a specialization of the

general result to the case at hand. In order to state the result, we need the following concept from binary hypothesis testing. Consider a binary hypothesis testing problem with null hypothesis P and alternative hypothesis Q, where P and Q are distributions on the same alphabet V. Upon observing a value v ∈ V, the observer needs to decide if the value was generated by the distribution P or the distribution Q. To this end, the observer applies a stochastic test T, which is a conditional distribution on {0, 1} given an observation v ∈ V. When v ∈ V is observed, the test T chooses the null hypothesis with probability T(0|v) and the alternative hypothesis with probability T(1|v) = 1 − T(0|v). For 0 ≤ ε < 1, denote by β_ε(P, Q) the infimum of the probability of error of type II given that the probability of error of type I is less than ε, i.e.,

β_ε(P, Q) := inf_{T : P[T] ≥ 1 − ε} Q[T],    (8)

where

P[T] = Σ_v P(v) T(0|v),
Q[T] = Σ_v Q(v) T(0|v).

The definition of a secret key used in [33], [32] is different from Definition 1. However, the two definitions are closely related, and the upper bound of [33], [32] can be extended to our case as well. We review the alternative definition in Appendix A and relate it to Definition 1 to derive the following upper bound, which will be instrumental in our converse proofs.

Theorem 3 (Conditional independence testing bound). Given 0 ≤ ε + δ < 1 and 0 < η < 1 − ε − δ, the following bound holds:

S_{ε,δ}(X, Y|Z) ≤ −log β_{ε+δ+η}( P_{XYZ}, Q_{X|Z} × Q_{Y|Z} × Q_Z ) + 2 log(1/η),

for all joint distributions Q on X × Y × Z that render X and Y conditionally independent given Z.

IV. THE SECRET KEY AGREEMENT PROTOCOL

In this section we present our secret key agreement protocol, which will be used in all the achievability results of this paper. A typical secret key agreement protocol for two parties entails sharing the observations of one of the parties, referred to as information reconciliation, and then extracting a secret key out of the shared observations, referred to as privacy amplification (cf.
[9], [], [24]). In another interpretation, the parties communicate first to establish a common randomness [2] and then extract a

secret key from the common randomness. (Footnote 7) Our protocol below, too, has these two components, but the rate of the communication for information reconciliation and the rate of the randomness extracted by privacy amplification have to be chosen carefully. Heuristically, in the information reconciliation stage, the first party randomly bins X and sends the bin index to the second party. If we do not use interaction, by the Slepian-Wolf theorem [27] (see [20], [0, Lemma 7.2.], [8] for a single-shot version) the length of communication needed roughly equals a large probability upper bound for

h(X|Y) := −log P_{X|Y}(X|Y).

However, in order to derive a lower bound that matches our upper bound, we expect to send X to Y using approximately h(X|Y) bits of communication, which can differ from the tail bound above by as much as the length of the spectrum of P_{X|Y}. To overcome this gap, we utilize spectrum slicing, a technique introduced in [0], to construct an adaptive scheme that can handle the spread of the information spectrum. Specifically, we divide the spectrum of P_{X|Y} into slices of small length. The protocol proceeds interactively to adapt to the current slice index, allowing us to replace the spectrum length in the argument above with the length of a single slice.

A. Formal description of the protocol

We consider the essential spectrum of P_{X|Y}, i.e., the set of values taken by h(X|Y) between λ_min and λ_max, where λ_min and λ_max are chosen such that h(X|Y) lies in (λ_min, λ_max) with large probability. We divide the essential spectrum of P_{X|Y} into L slices, each of width Δ. Specifically, for 1 ≤ j ≤ L, the jth slice of the spectrum of P_{X|Y} is defined as

T_j = {(x, y) : λ_j ≤ −log P_{X|Y}(x|y) < λ_j + Δ},

where λ_j = λ_min + (j − 1)Δ. Note that the slice index is not available to either party. The proposed protocol proceeds assuming the lowest index j = 1 and uses interactive communication to adapt to the actual slice index.
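Spectrum slicing can be visualized on a toy source. The sketch below (our own example: X^n obtained from a fixed y by a BSC, so h(x|y) depends only on the Hamming distance) slices the range of h(X|Y) into width-Δ slices T_j and checks the counting bound, used later in the error analysis, that each slice contains at most 2^{λ_j + Δ} strings:

```python
import math
from itertools import product

# Toy source (our choice): given y, each bit of x differs w.p. q, so
# h(x|y) = -log2 P_{X|Y}(x|y) is a sum of per-bit log-likelihoods.
n, q, delta = 8, 0.11, 1.0
h_vals = []
for x in product([0, 1], repeat=n):   # y fixed to 0...0 by symmetry
    h_vals.append(-sum(math.log2(q if b else 1 - q) for b in x))

lam_min = min(h_vals)
L = int((max(h_vals) - lam_min) // delta) + 1  # number of width-delta slices

counts = [0] * (L + 1)
for h in h_vals:
    j = min(L, int((h - lam_min) // delta) + 1)  # slice index: lam_j <= h < lam_j + delta
    counts[j] += 1

# Each x in slice T_j has P_{X|Y}(x|y) > 2^{-(lam_j + delta)}, and these
# probabilities sum to at most 1, so |T_j| < 2^{lam_j + delta} = 2^{lam_min + j*delta}.
bounds_ok = all(counts[j] <= 2 ** (lam_min + j * delta) for j in range(1, L + 1))
print(L, sum(counts), bounds_ok)
```

Every one of the 2^n strings falls in exactly one slice, and every slice respects the cardinality bound that the random-binning analysis in Section IV-B relies on.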
For information reconciliation, we simply send a random binning of X. However, the bin size is increased successively, where the incremental bin sizes M_1, ..., M_L are given by

log M_j = λ_1 + Δ + γ,  j = 1,
log M_j = Δ,  1 < j ≤ L.

Footnote 7: For an interpretation of secret key agreement in terms of common randomness decomposition, see [4], [5], [30].

For privacy amplification, we will rely on the leftover hash lemma [7], [25]. Let F be a 2-universal family of mappings f : X → K, i.e., for each x ≠ x', the family F satisfies

(1/|F|) Σ_{f ∈ F} 1[f(x) = f(x')] ≤ 1/|K|.    (9)

The following lemma is a slight modification of the known forms of the leftover hash lemma (cf. [24]); we give a proof in Appendix B for completeness.

Lemma 4 (Leftover Hash). Consider RVs X, Z, and V taking values in X, Z, and V, respectively, where X and Z are countable and V is finite. Let S be a random seed such that f_S is uniformly distributed over a 2-universal family F as above. Then, for K = f_S(X) and for any Q_Z satisfying supp(P_Z) ⊆ supp(Q_Z), we have

‖P_{KVZS} − P_unif × P_{VZ} × P_S‖ ≤ (1/2) sqrt( |K| |V| 2^{−H_min(P_{XZ}|Q_Z)} ),

where P_unif is the uniform distribution on K and

H_min(P_{XZ}|Q_Z) = −log sup_{x, z : Q_Z(z) > 0} P_{XZ}(x, z) / Q_Z(z).

The main benefit of the spectrum slicing approach above is that roughly h(X|Y) + Δ + L bits are sent for each realization (X, Y). At the same time, we can estimate h(X|Y) up to a precision of Δ when the protocol stops, a key property in our secrecy analysis. The complete protocol is described in Protocol 1. We remark that the random-seed-based secret key generation used in Protocol 1 is only for ease of security analysis. A slight modification of our protocol can work with deterministic extractors.
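A standard way to realize a 2-universal family in the sense of (9) is the affine family modulo a prime; the following sketch (our illustration, not the paper's construction) verifies the collision bound exhaustively for small parameters:

```python
# 2-universal family sketch: f_{a,b}(x) = ((a*x + b) mod p) mod k, with p
# prime, a in {1,...,p-1}, b in {0,...,p-1}. For every x != x' in
# {0,...,p-1}, the fraction of (a, b) with f(x) = f(x') is at most 1/k.
p = 13          # prime at least the source alphabet size (toy value)
k = 4           # key alphabet size |K| (toy value)
family = [(a, b) for a in range(1, p) for b in range(p)]

def f(a, b, x):
    return ((a * x + b) % p) % k

worst = 0.0     # worst-case collision probability over all pairs x != x'
for x in range(p):
    for x2 in range(x + 1, p):
        coll = sum(1 for (a, b) in family if f(a, b, x) == f(a, b, x2))
        worst = max(worst, coll / len(family))
print(worst, 1 / k)
```

The exhaustive check confirms that the collision probability never exceeds 1/|K|, which is exactly the hypothesis Lemma 4 needs of the seed family {f_S}.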

Protocol 1: Secret key agreement protocol
Input: Observations X and Y
Output: Secret key estimates K_x and K_y

Information reconciliation
  Initiate the protocol with l = 1.
  while l ≤ L and ACK not received do
    First party sends the random bin index of X among M_l bins, B_l = F_l(X), to the second party.
    if the second party finds a unique x such that (x, Y) ∈ T_l and F_j(x) = B_j for all j ≤ l then
      Second party sets X̂ = x and sends back an ACK, F_{2l} = 1.
    else
      Second party sends back a NACK, F_{2l} = 0.
    Parties update l ← l + 1.
  if no ACK received then
    Protocol declares an error and aborts.
  else
Privacy amplification
    First party generates the random seed S and sends it to the second party using public communication.
    First party generates the secret key K_x = K = f_S(X).
    The second party generates the estimate K_y of K as K_y = f_S(X̂).

Remark 3. Note that since each ACK-NACK signal will require 1 bit of communication to implement, the number of bits physically sent in the protocol is roughly h(X|Y) + Δ + L. However, the log of the number of values taken by the transcript is much less, roughly h(X|Y) + Δ + log L, since the ACK-NACK sequence will be a stopped sequence consisting of NACKs followed by a single ACK. By Lemma 4, it is this latter quantity h(X|Y) + Δ + log L that captures the amount of information leaked to the eavesdropper by the public communication, and it will be used in our security analysis of the protocol.

B. Performance analysis of the protocol

We now derive performance guarantees for the secret key agreement protocol of the previous section. In view of Proposition 2 and Remark 2, the protocol above will constitute an (ε, δ)-SK protocol for arbitrary ε, δ ∈ (0, 1) if it yields an (η, ε + δ)-SK with η ≈ 0. The result below shows that Protocol 1 indeed yields such a high reliability secret key.
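The reconciliation loop of Protocol 1 can be simulated by brute force on a short IID source. The following toy sketch is our own simplification, not the authors' implementation: it uses a single cumulative random bin per round (equivalent in rate to matching all incremental bins B_1, ..., B_l), an exhaustive search by the second party, and slices determined by Hamming distance, since h(x|y) depends only on d(x, y) for a BSC.

```python
import math, random

random.seed(1)
n, q, delta, gamma = 10, 0.1, 2.0, 6.0   # toy parameters (our choice)
per0, per1 = -math.log2(1 - q), -math.log2(q)

def h_cond(d):                 # h(x|y) at Hamming distance d from y
    return (n - d) * per0 + d * per1

lam_min = h_cond(0)
L = int((h_cond(n) - lam_min) // delta) + 1

def slice_of(d):               # 1-based index of the slice T_l containing h
    return int((h_cond(d) - lam_min) // delta) + 1

def cum_bits(l):               # log(M_1 ... M_l) = lam_l + delta + gamma
    return lam_min + l * delta + gamma

trials, errors = 200, 0
for _ in range(trials):
    y = [random.randint(0, 1) for _ in range(n)]
    x = tuple(b ^ (random.random() < q) for b in y)   # X = Y xor BSC(q) noise
    bins = [{} for _ in range(L + 1)]                 # fresh random binning per trial
    xhat = None
    for l in range(1, L + 1):                         # interactive bin / ACK-NACK loop
        M = 2 ** math.ceil(cum_bits(l))
        F = lambda c, l=l, M=M: bins[l].setdefault(c, random.randrange(M))
        bx = F(x)                                     # bin index announced by first party
        cands = []
        for m in range(2 ** n):                       # second party scans slice T_l
            c = tuple(yi ^ ((m >> i) & 1) for i, yi in enumerate(y))
            if slice_of(bin(m).count('1')) == l and F(c) == bx:
                cands.append(c)
        if len(cands) == 1:                           # unique match: ACK and stop
            xhat = cands[0]
            break
    errors += (xhat != x)                             # no ACK or wrong ACK is an error

err_rate = errors / trials
print(err_rate)
```

With these toy parameters the collision probability per visited slice is about 2^{−γ}, so the empirical error rate is small, in line with the L·2^{−γ} term of the analysis below.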

Denote by i_{XY}(x, y) the information density

i_{XY}(x, y) := log [ P_{XY}(x, y) / (P_X(x) P_Y(y)) ].

Theorem 5. For λ_min, λ_max, Δ > 0 with λ_max ≥ λ_min, let L = (λ_max − λ_min)/Δ. Then, for every γ > 0 and λ ≥ 0, there exists an (ε, δ)-SK K taking values in K with

ε ≤ P_{XY}(T_0) + L 2^{−γ},

δ ≤ P( i_{XY}(X, Y) − i_{XZ}(X, Z) ≤ λ + Δ ) + (1/2) sqrt( |K| 2^{−(λ − γ − 3 log L)} ) + 1/L + P_{XY}(T_0) + L 2^{−γ},

where

T_0 := {(x, y) : −log P_{X|Y}(x|y) ≥ λ_max or −log P_{X|Y}(x|y) < λ_min}.

Proof. We begin by analyzing the reliability of Protocol 1. Let ρ(X, Y) denote the number of rounds after which the protocol stops when the observations are (X, Y). An error occurs if (X, Y) ∈ T_0 or if there exists an x̂ ≠ X such that (x̂, Y) ∈ T_l and F_j(x̂) = F_j(X) for all j ≤ l, for some l ≤ ρ(X, Y). Note that for each 1 ≤ j ≤ L,

|{x : (x, y) ∈ T_j}| ≤ 2^{λ_j + Δ}  for all y ∈ Y.

Therefore, using a slight modification of the usual probability of error analysis for random binning, the probability of error for Protocol 1 is bounded above as

P_e ≤ P_{XY}(T_0) + Σ_{x,y} P_{XY}(x, y) Σ_{l=1}^{ρ(x,y)} Σ_{x̂ ≠ x : (x̂, y) ∈ T_l} P( F_j(x̂) = F_j(x), j ≤ l )
 ≤ P_{XY}(T_0) + Σ_{x,y} P_{XY}(x, y) Σ_{l=1}^{ρ(x,y)} |{x̂ : (x̂, y) ∈ T_l}| / (M_1 ⋯ M_l)
 ≤ P_{XY}(T_0) + Σ_{x,y} P_{XY}(x, y) Σ_{l=1}^{ρ(x,y)} 2^{−(λ_l + Δ + γ)} |{x̂ : (x̂, y) ∈ T_l}|
 ≤ P_{XY}(T_0) + L 2^{−γ},    (10)

where we have used the fact that log(M_1 ⋯ M_l) = λ_1 + lΔ + γ = λ_l + Δ + γ. We now establish the secrecy of the protocol. Our proof entails establishing secrecy of the protocol

conditioned on each realization J = j of an appropriately defined RV J, which roughly corresponds to the slice index for (X, Y). Specifically, denote by E_1 the set of (x, y) for which an error occurs in information reconciliation, and by E_2 the set

E_2 := {(x, y, z) : i_{XY}(x, y) − i_{XZ}(x, z) ≤ λ + Δ},

which is the same as

{(x, y, z) : log [ P_{X|Y}(x|y) / P_{X|Z}(x|z) ] ≤ λ + Δ}.

Let the RV J, taking values in the set {0, 1, ..., L}, be defined as follows:

J = 0, if (X, Y) ∈ T_0 ∪ E_1 or (X, Y, Z) ∈ E_2,
J = j, if (X, Y) ∈ T_j ∩ E_1^c and (X, Y, Z) ∈ E_2^c,  1 ≤ j ≤ L.

While we have used random coding in the information reconciliation stage, it is only for ease of proof, and the encoder can easily be derandomized. (Footnote 8) For the remainder of the proof, we assume a deterministic encoder; in particular, J is a function of (X, Y, Z). We divide the indices 0 ≤ j ≤ L into good indices I_g and bad indices I_b = I_g^c, where

I_g = {j : j > 0 and P_J(j) ≥ 1/L²}.

Denoting by F^l the communication up to the lth round of the protocol, i.e., F^l := {(F_j, F_{2j}) : j ≤ l}, and by F = F^{ρ(X,Y)} the overall communication, we have

‖P_{KFZS} − P_unif × P_{FZS}‖
 ≤ ‖P_{KFZSJ} − P_unif × P_{FZSJ}‖
 ≤ P(J ∈ I_b) + Σ_{j ∈ I_g} P_J(j) ‖P_{KFZS|J=j} − P_unif × P_{FZS|J=j}‖
 ≤ P_{XY}(T_0 ∪ E_1) + P_{XYZ}(E_2) + 1/L + Σ_{j ∈ I_g} P_J(j) ‖P_{KFZS|J=j} − P_unif × P_{FZS|J=j}‖.    (11)

To bound each term ‖P_{KFZS|J=j} − P_unif × P_{FZS|J=j}‖, j ∈ I_g, first note that under each event J = j ∈ I_g, information reconciliation succeeds and F = F^j. Furthermore, the number of possible transcripts sent in the information reconciliation stage up to the jth round, i.e., the cardinality |F^j| of the range of the RV F^j,

Footnote 8: Since the final error probability is given by the expected probability of error, where the expectation is over the source distribution and the additional shared randomness used in communication, there exists a realization of the shared randomness for which the same expected error performance with respect to the source distribution is attained.

satisfies (cf. Remark 3)

log |F^j| ≤ λ_j + Δ + γ + log j.

Let P_j be the probability distribution of (X, Y, Z) given J = j, i.e.,

P_j(x, y, z) := P_{XYZ}(x, y, z) 1[J(x, y, z) = j] / P_J(j),  x ∈ X, y ∈ Y, z ∈ Z, 0 ≤ j ≤ L.

With P_{j,XZ} denoting the marginal on X × Z induced by P_j, for all j ∈ I_g, we have

−log [ P_{j,XZ}(x, z) / P_Z(z) ]
 = −log [ Σ_y P_{XYZ}(x, y, z) 1[J(x, y, z) = j] / (P_J(j) P_Z(z)) ]
 ≥ −log [ 2^{−(λ + Δ)} Σ_y P_{X|Y}(x|y) P_{Y|XZ}(y|x, z) 1[J(x, y, z) = j] / P_J(j) ]
 ≥ −log [ 2^{−(λ + Δ + λ_j)} Σ_y P_{Y|XZ}(y|x, z) 1[J(x, y, z) = j] / P_J(j) ]
 ≥ −log [ 2^{−(λ + Δ + λ_j)} / P_J(j) ]
 ≥ λ_j + λ + Δ − 2 log L,

where the first inequality holds since J(x, y, z) > 0 implies (x, y, z) ∈ E_2^c, the second inequality holds since J(x, y, z) = j implies (x, y) ∈ T_j, and the last inequality holds since j ∈ I_g implies P_J(j) ≥ 1/L². Thus, we obtain the following bound on H_min(P_{j,XZ}|P_Z):

H_min(P_{j,XZ}|P_Z) ≥ λ_j + λ + Δ − 2 log L.

Therefore, noting that S is independent of (X, Z, F, J) and using Lemma 4, we get

‖P_{KFZS|J=j} − P_unif × P_{FZS|J=j}‖ = ‖P_{KF^jZS|J=j} − P_unif × P_{F^jZS|J=j}‖
 ≤ (1/2) sqrt( |K| |F^j| 2^{−H_min(P_{j,XZ}|P_Z)} )
 ≤ (1/2) sqrt( |K| 2^{−(λ − γ − 3 log L)} ),  j ∈ I_g,

which gives the claimed secrecy bound upon using the definition of E_2 and bounding P_{XY}(T_0 ∪ E_1) via the union bound as in (10).

Thus, when the secret key length log|K| ≈ λ, the reliability parameter ε for Protocol 1 can be made very small by appropriately choosing the parameters λ_min and λ_max, and the secrecy parameter δ can be made roughly as small as the tail probability P( i_{XY}(X, Y) − i_{XZ}(X, Z) ≤ λ ). In fact, Proposition 2 allows us to shift this constraint on δ to a constraint on ε + δ, and Protocol 1 yields an (ε, δ)-SK of length

roughly equal to λ as long as ε + δ is greater than P( i_{XY}(X, Y) − i_{XZ}(X, Z) ≤ λ ). Formally, we have the following simple corollary of Theorem 5.

Corollary 6. For λ_max, λ_min, Δ > 0 with λ_max ≥ λ_min, let

L = (λ_max − λ_min)/Δ,

and let

T_0 := {(x, y) : −log P_{X|Y}(x|y) ≥ λ_max or −log P_{X|Y}(x|y) < λ_min}.

Then, for every λ ≥ 0 and every ε and δ satisfying

ε ≥ P_{XY}(T_0) + (1/4) ( L⁴ |K| 2^{−λ} )^{1/3},    (12)

ε + δ ≥ P( i_{XY}(X, Y) − i_{XZ}(X, Z) ≤ λ + Δ ) + 1/L + 2 P_{XY}(T_0) + (3/2) ( L⁴ |K| 2^{−λ} )^{1/3},    (13)

there exists an (ε, δ)-SK K taking values in K.

Proof. Let

η(γ) := P_{XY}(T_0) + L 2^{−γ},
α(γ) := P( i_{XY}(X, Y) − i_{XZ}(X, Z) ≤ λ + Δ ) + (1/2) sqrt( |K| 2^{−(λ − γ − 3 log L)} ) + 1/L + P_{XY}(T_0) + L 2^{−γ}.

We first optimize

η(γ) + α(γ) = P( i_{XY}(X, Y) − i_{XZ}(X, Z) ≤ λ + Δ ) + (1/2) sqrt( |K| 2^{−(λ − γ − 3 log L)} ) + 1/L + 2 P_{XY}(T_0) + 2 L 2^{−γ}

over γ. Setting a = L 2^{−γ} and noting that the function

f(a) = 2a + (1/2) a^{−1/2} sqrt( L⁴ |K| 2^{−λ} )

has minimum value (3/2) ( L⁴ |K| 2^{−λ} )^{1/3}, attained at a = (1/4) ( L⁴ |K| 2^{−λ} )^{1/3}, the minimum of η(γ) + α(γ) is achieved with

η = η* := P_{XY}(T_0) + (1/4) ( L⁴ |K| 2^{−λ} )^{1/3},
α = α* := P( i_{XY}(X, Y) − i_{XZ}(X, Z) ≤ λ + Δ ) + 1/L + P_{XY}(T_0) + (5/4) ( L⁴ |K| 2^{−λ} )^{1/3}.

The corollary follows by applying Proposition 2 with η = η* and α = α* to the resulting (η*, α*)-SK.
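The calculus step in the proof, minimizing f(a) = 2a + (1/2) a^{−1/2} b over a > 0 with a = L 2^{−γ} and b = sqrt(L⁴ |K| 2^{−λ}), can be checked numerically (our sketch; b is just an arbitrary positive constant here):

```python
import math

b = 0.37  # arbitrary positive constant standing in for sqrt(L^4 |K| 2^{-lambda})

def f(a):
    # the gamma-dependent part of eta(gamma) + alpha(gamma), in terms of a = L*2^{-gamma}
    return 2 * a + 0.5 * b / math.sqrt(a)

a_star = b ** (2 / 3) / 4          # claimed minimizer a = (1/4) b^{2/3}
f_star = 1.5 * b ** (2 / 3)        # claimed minimum value (3/2) b^{2/3}

# brute-force grid search around the claimed minimizer
grid = [a_star * (0.5 + 0.001 * i) for i in range(1, 1001)]
f_min = min(f(a) for a in grid)
print(f(a_star), f_star, f_min)
```

The grid search confirms that no nearby a does better than the claimed minimizer, matching the constants (1/4), (5/4), and (3/2) appearing in the corollary.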

V. SECRET KEY CAPACITY FOR GENERAL SOURCES

In this section we establish the secret key capacity for a sequence of general sources (X_n, Y_n, Z_n) with joint distributions (Footnote 9) P_{X_nY_nZ_n}. The secret key capacity for general sources is defined as follows [9], [], [4].

Definition 3. The secret key capacity C is defined as

C := sup_{(ε_n, δ_n)} liminf_{n→∞} (1/n) S_{ε_n,δ_n}(X_n, Y_n|Z_n),

where the sup is over all sequences ε_n, δ_n ≥ 0 such that

lim_{n→∞} (ε_n + δ_n) = 0.

To state our result, we need the following concepts from the information spectrum method; see [0] for a detailed account. For RVs (X_n, Y_n, Z_n), n ≥ 1, the inf-conditional entropy rate H_inf(X|Y) and the sup-conditional entropy rate H_sup(X|Y) are defined as follows:

H_inf(X|Y) = sup { α : lim_n P( −(1/n) log P_{X_n|Y_n}(X_n|Y_n) < α ) = 0 },
H_sup(X|Y) = inf { α : lim_n P( −(1/n) log P_{X_n|Y_n}(X_n|Y_n) > α ) = 0 }.

Similarly, the inf-conditional information rate I_inf(X ∧ Y|Z) is defined as

I_inf(X ∧ Y|Z) = sup { α : lim_n P( (1/n) i(X_n, Y_n|Z_n) < α ) = 0 },

where, with a slight abuse of notation, i(X_n, Y_n|Z_n) denotes the conditional information density

i(X_n, Y_n|Z_n) = log [ P_{X_nY_n|Z_n}(X_n, Y_n|Z_n) / (P_{X_n|Z_n}(X_n|Z_n) P_{Y_n|Z_n}(Y_n|Z_n)) ].

We also need the following result credited to Verdú.

Lemma 7 ([0, Theorem 4..]). For every sequence ε_n such that

lim_{n→∞} ε_n = 0,

it holds that

liminf_{n→∞} −(1/n) log β_{ε_n}( P_{X_nY_nZ_n}, P_{X_n|Z_n} × P_{Y_n|Z_n} × P_{Z_n} ) ≤ I_inf(X ∧ Y|Z),

Footnote 9: The distributions P_{X_nY_nZ_n} need not satisfy the consistency conditions.

where $\beta_{\epsilon_n}$ is defined in (8). Our result below characterizes the secret key capacity $C$ for general sources in the special case when $X_n - Y_n - Z_n$ is a Markov chain.

Theorem 8. For a sequence of sources $\{X_n, Y_n, Z_n\}_{n=1}^{\infty}$ such that $X_n - Y_n - Z_n$ form a Markov chain for all $n$, the secret key capacity $C$ is given by$^{10}$
$$C = \underline{I}(\mathbf{X} \wedge \mathbf{Y}|\mathbf{Z}).$$

Proof. Applying Theorem 3 with $\eta = \eta_n = 1/n$, along with Lemma 7, gives $C \le \underline{I}(\mathbf{X} \wedge \mathbf{Y}|\mathbf{Z})$. For the other direction, we construct a sequence of $(\epsilon_n, \delta_n)$-SKs $K = K_n$ with $\epsilon_n, \delta_n \to 0$ and rate approximately $\underline{I}(\mathbf{X} \wedge \mathbf{Y}|\mathbf{Z})$. Indeed, in Theorem 5 choose
$$\lambda_{\max} = n\left(\overline{H}(\mathbf{X}|\mathbf{Y}) + \Delta\right), \quad \lambda_{\min} = n\left(\underline{H}(\mathbf{X}|\mathbf{Y}) - \Delta\right), \quad \gamma = \gamma_n = n^{1/2}, \quad \lambda = \lambda_n = n\left(\underline{I}(\mathbf{X} \wedge \mathbf{Y}|\mathbf{Z}) - \Delta\right);$$
thus,
$$L_n \Delta = n\left(\overline{H}(\mathbf{X}|\mathbf{Y}) - \underline{H}(\mathbf{X}|\mathbf{Y}) + 2\Delta\right).$$
Since $i_{XY}(X, Y) - i_{XZ}(X, Z) = i(X, Y|Z)$ if $X - Y - Z$ form a Markov chain, there exists an $(\epsilon_n, \delta_n)$-SK $K_n$ of rate
$$\frac{1}{n} \log |\mathcal{K}| = \frac{1}{n}\left(\lambda_n - 3 \log L_n\right) = \underline{I}(\mathbf{X} \wedge \mathbf{Y}|\mathbf{Z}) - \Delta - o(1),$$
such that $\epsilon_n, \delta_n \to 0$ as $n \to \infty$. Rates arbitrarily close to $\underline{I}(\mathbf{X} \wedge \mathbf{Y}|\mathbf{Z})$ are achieved by this scheme as $\Delta > 0$ is arbitrary.

In the achievability part of the proof above, we actually show that, in general, our protocol generates

$^{10}$ We assume that $\overline{H}(\mathbf{X}|\mathbf{Y}) < \infty$.

a secret key of rate
$$\sup\left\{\alpha \,\Big|\, \lim_{n} P\left(\frac{1}{n}\left[i(X_n, Y_n) - i(X_n, Z_n)\right] < \alpha\right) = 0\right\},$$
which matches the converse bound of $\underline{I}(\mathbf{X} \wedge \mathbf{Y}|\mathbf{Z})$ in the special case when the Markov relation $X_n - Y_n - Z_n$ holds.

VI. SECOND-ORDER ASYMPTOTICS OF SECRET KEY RATES

The results of the previous section show that for $\epsilon_n, \delta_n \to 0$, the largest length $S_{\epsilon_n, \delta_n}(X_n, Y_n|Z_n)$ of an $(\epsilon_n, \delta_n)$-SK is
$$\sup_{\epsilon_n, \delta_n} S_{\epsilon_n, \delta_n}(X_n, Y_n|Z_n) = n\, \underline{I}(\mathbf{X} \wedge \mathbf{Y}|\mathbf{Z}) + o(n), \qquad (4)$$
if $X_n - Y_n - Z_n$ form a Markov chain. For the case when $(X_n, Y_n, Z_n) = (X^n, Y^n, Z^n)$ is the $n$-IID repetition of $(X, Y, Z)$ where $X - Y - Z$, we have $\underline{I}(\mathbf{X} \wedge \mathbf{Y}|\mathbf{Z}) = I(X \wedge Y|Z)$. Furthermore, (4) holds even without $\epsilon_n, \delta_n \to 0$. In fact, a finer asymptotic analysis is possible and the second-order asymptotic term in the maximum length of an $(\epsilon, \delta)$-SK can be established; this is the subject matter of the current section.

Let $V := \mathrm{Var}\left[i(X, Y|Z)\right]$, and let
$$Q(a) := \int_a^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left[-\frac{t^2}{2}\right] \mathrm{d}t$$
be the tail probability of the standard Gaussian distribution. Under the assumptions
$$V_{X|Y} := \mathrm{Var}\left[-\log P_{X|Y}(X|Y)\right] < \infty, \qquad (5)$$
$$T := \mathbb{E}\left[\left|i(X, Y|Z) - I(X \wedge Y|Z)\right|^3\right] < \infty, \qquad (6)$$
the result below establishes the second-order asymptotic term in $S_{\epsilon,\delta}(X^n, Y^n|Z^n)$.

Theorem 9. For every $\epsilon, \delta > 0$ such that $\epsilon + \delta < 1$ and IID RVs $(X^n, Y^n, Z^n)$ such that $X - Y - Z$ is a Markov chain, we have
$$S_{\epsilon,\delta}(X^n, Y^n|Z^n) = n\, I(X \wedge Y|Z) - \sqrt{n V}\, Q^{-1}(\epsilon + \delta) \pm O(\log n).$$
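The leading terms of Theorem 9 are easy to evaluate numerically. The sketch below implements $Q$ via the complementary error function and inverts it by bisection; the source parameters $I$ and $V$ at the bottom are made-up placeholders, not values from the paper:

```python
import math

def Q(a):
    """Gaussian tail probability Q(a) = P(N(0,1) > a)."""
    return 0.5 * math.erfc(a / math.sqrt(2))

def Q_inv(p):
    """Inverse of Q by bisection; Q is strictly decreasing."""
    lo, hi = -40.0, 40.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if Q(mid) > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def key_length(n, I, V, eps, delta):
    """First two terms n*I - sqrt(n*V)*Q^{-1}(eps + delta) of Theorem 9,
    ignoring the O(log n) correction."""
    return n * I - math.sqrt(n * V) * Q_inv(eps + delta)

# Placeholder source parameters for illustration.
I, V = 0.5, 0.25
n = 10 ** 4
print(round(key_length(n, I, V, 0.005, 0.005) / n, 4))  # per-symbol rate, below I
```

Since $\epsilon + \delta < 1/2$ here, $Q^{-1}(\epsilon + \delta) > 0$ and the second-order term strictly reduces the key length, as expected.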

Proof. For the converse part, we proceed along the lines of [23, Lemma 58]. Recall the following simple bound for $\beta_\epsilon(P, Q)$ (cf. [10, Lemma 4.1.2]): for every $\lambda$,
$$-\log \beta_\epsilon(P, Q) \le \lambda - \log\left(P\left(\left\{x : \log \frac{P(x)}{Q(x)} \le \lambda\right\}\right) - \epsilon\right).$$
Thus, applying Theorem 3 with $P_{XYZ} = P_{X^n Y^n Z^n}$, $Q_{XYZ} = P_{X^n|Z^n} P_{Y^n|Z^n} P_{Z^n}$, and $\eta = \eta_n = n^{-1/2}$, and choosing
$$\lambda = n\, I(X \wedge Y|Z) - \sqrt{n V}\, Q^{-1}(\epsilon + \delta + \theta_n), \quad \text{where } \theta_n = \frac{2}{\sqrt{n}} + \frac{T}{2 V^{3/2} \sqrt{n}},$$
we get by the Berry-Esséen theorem (cf. [7], [26]) that
$$P\left(i(X^n, Y^n) - i(X^n, Z^n) \le \lambda\right) \ge \epsilon + \delta + \frac{2}{\sqrt{n}},$$
which implies
$$S_{\epsilon,\delta}(X^n, Y^n|Z^n) \le \lambda - \log\left(P\left(i(X^n, Y^n) - i(X^n, Z^n) \le \lambda\right) - \epsilon - \delta - n^{-1/2}\right) + \log n \le n\, I(X \wedge Y|Z) - \sqrt{n V}\, Q^{-1}(\epsilon + \delta + \theta_n) + \frac{3}{2} \log n. \qquad (7)$$
Thus, we have the desired converse by using a Taylor approximation of $Q^{-1}(\cdot)$ to remove $\theta_n$.

For the direct part, we use Corollary 6 by setting
$$\lambda_{\max} = n\left(H(X|Y) + \Delta/2\right), \quad \lambda_{\min} = n\left(H(X|Y) - \Delta/2\right), \quad \lambda = n\, I(X \wedge Y|Z) - \sqrt{n V}\, Q^{-1}(\epsilon + \delta - \theta_n'),$$
where $0 < \Delta < 2 H(X|Y)$, and
$$\log |\mathcal{K}| = n\, I(X \wedge Y|Z) - \sqrt{n V}\, Q^{-1}(\epsilon + \delta - \theta_n') - O(\log n), \qquad (8)$$
where
$$\theta_n' = \frac{8 V_{X|Y}}{n \Delta^2} + \frac{T}{2 V^{3/2} \sqrt{n}} + \frac{1}{\sqrt{n}} + \frac{1}{n}.$$
Note that $L = n\Delta$. Upon bounding the term $P_{XY}(\mathcal{T}_0)$ in (2) and (3) by
$$\frac{4 V_{X|Y}}{n \Delta^2}$$
using Chebyshev's inequality, the condition (2) is satisfied for sufficiently large $n$. Furthermore, upon bounding the first

term of (3) by the Berry-Esséen theorem, the condition (3) is also satisfied. Thus, it follows that there exists an $(\epsilon, \delta)$-SK $K$ taking values in $\mathcal{K}$. The direct part follows by using a Taylor approximation of $Q^{-1}(\cdot)$ to remove $\theta_n'$.

Remark 4. Note that a standard noninteractive secret key agreement protocol based on information reconciliation and privacy amplification (cf. [25]) only gives the following suboptimal achievability bound on the second-order asymptotic term:
$$S_{\epsilon,\delta}(X^n, Y^n|Z^n) \ge n\, I(X \wedge Y|Z) - \sqrt{n V_{X|Y}}\, Q^{-1}(\epsilon) - \sqrt{n V_{X|Z}}\, Q^{-1}(\delta) + o(\sqrt{n}),$$
where $V_{X|Y}$ and $V_{X|Z}$ are the variances of the conditional log-likelihoods of $X$ given $Y$ and $Z$, respectively (cf. (5)).

We close this section with a numerical example that illustrates the utility of our bounds in characterizing the gap to secret key capacity at a fixed $n$.

Example 1 (Gap to secret key capacity). For $\alpha_0, \alpha_1 \in (0, 1/2)$, let $B_0$ and $B_1$ be independent random bits taking value $1$ with probability $\alpha_0$ and $\alpha_1$, respectively. Consider binary $X, Y, Z$ where $Z$ is a uniform random bit independent jointly of $B_0$ and $B_1$, $Y = Z \oplus B_0$, and $X = Y \oplus B_1$. We consider the rate $S_{\epsilon,\delta}(X^n, Y^n|Z^n)/n$ of an $(\epsilon, \delta)$-SK that can be generated using $n$ IID copies of $X$ and $Y$ when the eavesdropper observes $Z^n$. The following quantities, needed to evaluate (7) and (8), can be easily evaluated:
$$I(X \wedge Y|Z) = h(\alpha_0 * \alpha_1) - h(\alpha_1), \quad V = \mu_2, \quad T = \mu_3,$$
$$V_{X|Y} = \alpha_1\left(\log \alpha_1 + h(\alpha_1)\right)^2 + (1 - \alpha_1)\left(\log(1 - \alpha_1) + h(\alpha_1)\right)^2,$$
where $\mu_r$ is the $r$th central moment of $i(X, Y|Z)$ and is given by
$$\mu_r = \alpha_0 \alpha_1 \left(\log \frac{\alpha_1}{1 - \alpha_0 * \alpha_1} - I(X \wedge Y|Z)\right)^r + (1 - \alpha_0)(1 - \alpha_1) \left(\log \frac{1 - \alpha_1}{1 - \alpha_0 * \alpha_1} - I(X \wedge Y|Z)\right)^r + (1 - \alpha_0)\alpha_1 \left(\log \frac{\alpha_1}{\alpha_0 * \alpha_1} - I(X \wedge Y|Z)\right)^r + \alpha_0 (1 - \alpha_1) \left(\log \frac{1 - \alpha_1}{\alpha_0 * \alpha_1} - I(X \wedge Y|Z)\right)^r,$$
where $h(x) = -x \log x - (1 - x) \log(1 - x)$ is the binary entropy function and $\alpha_0 * \alpha_1 = \alpha_0(1 - \alpha_1) + (1 - \alpha_0)\alpha_1$. In Figure 1 (given in Section I), we plot the upper bound on $S_{\epsilon,\delta}(X^n, Y^n|Z^n)/n$ resulting from (7) and the lower bound resulting from (8), with $\Delta =$ … for $\alpha_0 = 0.25$ and $\alpha_1 =$ …
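The quantities in Example 1 can be computed directly by enumerating the four outcomes of $(B_0, B_1)$, since $i(X, Y|Z) = \log P_{X|Y}(X|Y) - \log P_{X|Z}(X|Z)$ here. A sketch with logs in bits; the value $\alpha_1 = 0.1$ is a made-up illustration, as only $\alpha_0 = 0.25$ is fixed by the example:

```python
import math

def h(x):
    """Binary entropy in bits."""
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def conv(a, b):
    """Binary convolution a * b."""
    return a * (1 - b) + (1 - a) * b

def example_quantities(a0, a1):
    """I(X ^ Y|Z), V = mu_2, and V_{X|Y} for the binary example."""
    s = conv(a0, a1)
    I = h(s) - h(a1)
    # One value of i(X, Y|Z) per outcome of (B0, B1), with its probability.
    terms = [
        ((1 - a0) * (1 - a1), math.log2((1 - a1) / (1 - s))),
        ((1 - a0) * a1, math.log2(a1 / s)),
        (a0 * (1 - a1), math.log2((1 - a1) / s)),
        (a0 * a1, math.log2(a1 / (1 - s))),
    ]
    assert abs(sum(p * v for p, v in terms) - I) < 1e-12  # sanity: E[i] = I
    V = sum(p * (v - I) ** 2 for p, v in terms)
    VXY = a1 * (math.log2(a1) + h(a1)) ** 2 + \
        (1 - a1) * (math.log2(1 - a1) + h(a1)) ** 2
    return I, V, VXY

I, V, VXY = example_quantities(0.25, 0.1)
print(I, V, VXY)
```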

VII. DISCUSSION: IS INTERACTION NECESSARY?

In contrast to the protocols in [9], [1], [4], [25], our proposed Protocol 1 for secret key agreement is interactive. In fact, the protocol requires as many rounds of interaction as the number of slices $L$, which can be quite large in general. For instance, to obtain the second-order asymptotic term in the previous section, we chose $L = n\Delta$. In Appendix C, we present an alternative protocol which requires only $1$ bit of feedback and, in the special case when $Z$ is constant, achieves the asymptotic results of Sections V and VI. But is interaction necessary for attaining our asymptotic results? Below we present an example where none of the known (noninteractive) secret key agreement protocols achieves the general capacity of Theorem 8, suggesting that perhaps interaction is necessary.

For $i = 1, 2$, let $(X_i^n, Y_i^n, Z_i^n)$ be IID with $\mathcal{X} = \mathcal{Y} = \mathcal{Z} = \{0, 1\}$ such that
$$P_{X_i^n Y_i^n Z_i^n}(x^n, y^n, z^n) = 2^{-n}\, W_i^n(y^n|x^n)\, V^n(z^n|y^n),$$
where $W_i$ and $V$, respectively, are binary symmetric channels with crossover probabilities $p_i$ and $q$. Let $(X_n, Y_n, Z_n)$ be the mixed source given by
$$P_{X_n Y_n Z_n}(x^n, y^n, z^n) = \frac{1}{2} P_{X_1^n Y_1^n Z_1^n}(x^n, y^n, z^n) + \frac{1}{2} P_{X_2^n Y_2^n Z_2^n}(x^n, y^n, z^n) = 2^{-n}\left[\frac{1}{2} W_1^n(y^n|x^n) + \frac{1}{2} W_2^n(y^n|x^n)\right] V^n(z^n|y^n).$$
Note that $X_n - Y_n - Z_n$ forms a Markov chain. Suppose that $0 < p_1 < p_2 < 1/2$. Then, we have
$$\underline{I}(\mathbf{X} \wedge \mathbf{Y}|\mathbf{Z}) = \min\left[H(X_1|Z_1) - H(X_1|Y_1),\, H(X_2|Z_2) - H(X_2|Y_2)\right] = \min\left[h(p_1 * q) - h(p_1),\, h(p_2 * q) - h(p_2)\right] = h(p_2 * q) - h(p_2),$$
where $h(\cdot)$ is the binary entropy function and $*$ is binary convolution. Using a standard noninteractive secret key agreement protocol based on information reconciliation and privacy amplification (cf. [25]), we can achieve only
$$\underline{H}(\mathbf{X}|\mathbf{Z}) - \overline{H}(\mathbf{X}|\mathbf{Y}) = \min\left[H(X_1|Z_1),\, H(X_2|Z_2)\right] - \max\left[H(X_1|Y_1),\, H(X_2|Y_2)\right] = H(X_1|Z_1) - H(X_2|Y_2) = h(p_1 * q) - h(p_2),$$

which is less than the general secret key capacity of Theorem 8. Proving a precise limitation result for noninteractive protocols is a direction for future research.

APPENDIX

A. Proof of Theorem 3

The definition of a secret key used in [33], [32] is different from the one in Definition 1, and it conveniently combines the secrecy and the reliability requirements into a single expression. Instead of considering a separate RV $K$, the alternative definition directly works with the estimates $K_x$ and $K_y$. Specifically, let $K_x$ and $K_y$ be functions of $(U_x, X, F)$ and $(U_y, Y, F)$, respectively, where $F$ is an interactive communication. Then, RVs $K_x$ and $K_y$ with a common range $\mathcal{K}$ constitute an $\epsilon$-secret key ($\epsilon$-SK) if
$$\left\| P_{K_x K_y F Z} - P_{\mathrm{unif}}^{(2)} \times P_{FZ} \right\| \le \epsilon, \qquad (9)$$
where, for a pmf $P$ on $\mathcal{X}$, $P^{(m)}$ denotes its extension to $\mathcal{X}^m$ given by
$$P^{(m)}(x_1, \ldots, x_m) = P(x_1)\, \mathbb{1}(x_1 = \cdots = x_m), \quad (x_1, \ldots, x_m) \in \mathcal{X}^m.$$
Note that the alternative definition captures the reliability condition $P(K_x \neq K_y) \le \epsilon$ by requiring that the joint distribution $P_{K_x K_y}$ is close to a uniform distribution on the diagonal of $\mathcal{K} \times \mathcal{K}$. The upper bound in [33], [32] holds under this alternative definition of a secret key. However, the next lemma says that this alternative definition is closely related to our Definition 1.

Lemma 10. Given $\epsilon, \delta \in [0, 1)$ and an $(\epsilon, \delta)$-SK $K$, the local estimates $K_x$ and $K_y$ satisfy (9) with $\epsilon + \delta$, i.e.,
$$\left\| P_{K_x K_y F Z} - P_{\mathrm{unif}}^{(2)} \times P_{FZ} \right\| \le \epsilon + \delta.$$
Conversely, if $K_x$ and $K_y$ satisfy (9), either $K_x$ or $K_y$ constitutes an $(\epsilon, \epsilon)$-SK.

Proof. We prove the direct part first. For an $(\epsilon, \delta)$-SK $K$,
$$\left\| P_{K_x K_y F Z} - P_{\mathrm{unif}}^{(2)} \times P_{FZ} \right\| \le \left\| P_{K K_x K_y F Z} - P_{\mathrm{unif}}^{(3)} \times P_{FZ} \right\| \le \left\| P_{K K_x K_y F Z} - P_{K|FZ}^{(3)} \times P_{FZ} \right\| + \left\| P_{K|FZ}^{(3)} \times P_{FZ} - P_{\mathrm{unif}}^{(3)} \times P_{FZ} \right\|.$$

Since $\|P - Q\| = Q(\{x : Q(x) \ge P(x)\}) - P(\{x : Q(x) \ge P(x)\})$ and
$$\left\{(k, k_x, k_y, f, z) : P_{K|FZ}^{(3)}(k, k_x, k_y|f, z) \ge P_{K K_x K_y|FZ}(k, k_x, k_y|f, z)\right\} = \left\{(k, k_x, k_y, f, z) : k = k_x = k_y\right\},$$
the first term on the right-side above satisfies
$$\left\| P_{K K_x K_y F Z} - P_{K|FZ}^{(3)} \times P_{FZ} \right\| = 1 - P(K = K_x = K_y) \le \epsilon. \qquad (20)$$
Furthermore, the second term satisfies
$$\left\| P_{K|FZ}^{(3)} \times P_{FZ} - P_{\mathrm{unif}}^{(3)} \times P_{FZ} \right\| = \frac{1}{2} \sum_{k, k_x, k_y, f, z} P_{FZ}(f, z) \left| P_{K|FZ}(k|f, z) - \frac{1}{|\mathcal{K}|} \right| \mathbb{1}(k = k_x = k_y) = \left\| P_{KFZ} - P_{\mathrm{unif}} \times P_{FZ} \right\| \le \delta,$$
where the last inequality is by the $\delta$-secrecy condition on $K$. Combining the bounds on the two terms above, the direct part follows.

For the converse, $\epsilon$-secrecy of $K_x$ (or $K_y$) holds since, by the monotonicity of the variational distance,
$$\left\| P_{K_x F Z} - P_{\mathrm{unif}} \times P_{FZ} \right\| \le \left\| P_{K_x K_y F Z} - P_{\mathrm{unif}}^{(2)} \times P_{FZ} \right\| \le \epsilon.$$
The $\epsilon$-reliability condition, too, follows from the triangle inequality upon observing that
$$\left\| P_{K_x K_y} - P_{\mathrm{unif}}^{(2)} \right\| = \frac{1}{2} \sum_{k_x, k_y} \left| P_{K_x K_y}(k_x, k_y) - \frac{\mathbb{1}(k_x = k_y)}{|\mathcal{K}|} \right| \ge \sum_{k_x \neq k_y} P_{K_x K_y}(k_x, k_y) = P(K_x \neq K_y).$$

To prove Theorem 3, we first relate the length of a secret key satisfying (9) to the exponent of the probability of error of type II in a binary hypothesis testing problem where an observer of $(K_x, K_y, F, Z)$

seeks to find out if the underlying distribution was $P_{XYZ}$ or $Q_{XYZ} = Q_{X|Z} Q_{Y|Z} Q_Z$. This result is stated next.

Lemma 11. For an $\epsilon$-SK $(K_x, K_y)$ satisfying (9) generated by an interactive communication $F$, let $W_{K_x K_y F|XYZ}$ denote the resulting conditional distribution on $(K_x, K_y, F)$ given $(X, Y, Z)$. Then, for every $0 < \eta < 1 - \epsilon$ and every $Q_{XYZ} = Q_{X|Z} Q_{Y|Z} Q_Z$, we have
$$\log |\mathcal{K}| \le -\log \beta_{\epsilon + \eta}\left(P_{K_x K_y F Z},\, Q_{K_x K_y F Z}\right) + 2 \log(1/\eta), \qquad (21)$$
where $Q_{K_x K_y F Z}$ is the marginal of $(K_x, K_y, F, Z)$ for the joint distribution $Q_{K_x K_y F X Y Z} = Q_{XYZ}\, W_{K_x K_y F|XYZ}$.

To prove Lemma 11, we need the following basic property of interactive communication (cf. [3]).

Lemma 12 (Interactive communication property). Given $Q_{XYZ} = Q_{X|Z} Q_{Y|Z} Q_Z$ and an interactive communication $F$, the following holds:
$$Q_{XY|FZ} = Q_{X|FZ}\, Q_{Y|FZ},$$
i.e., conditionally independent observations remain so when conditioned additionally on an interactive communication.

Proof of Lemma 11: We establish (21) by constructing a test for the hypothesis testing problem with null hypothesis $P = P_{K_x K_y F Z}$ and alternative hypothesis $Q = Q_{K_x K_y F Z}$. Specifically, we use a deterministic test with the following acceptance region (for the null hypothesis):
$$\mathcal{A} := \left\{ (k_x, k_y, f, z) : \log \frac{P_{\mathrm{unif}}^{(2)}(k_x, k_y)}{Q_{K_x K_y|FZ}(k_x, k_y|f, z)} \ge \lambda \right\},$$
where $\lambda = \log |\mathcal{K}| - 2 \log(1/\eta)$. The values $(k_x, k_y, f, z)$ with $Q_{K_x K_y|FZ}(k_x, k_y|f, z) = 0$ are included in $\mathcal{A}$.

For this test, the probability of error of type II is bounded above as
$$Q_{K_x K_y F Z}(\mathcal{A}) = \sum_{f, z} Q_{FZ}(f, z) \sum_{(k_x, k_y)\,:\,(k_x, k_y, f, z) \in \mathcal{A}} Q_{K_x K_y|FZ}(k_x, k_y|f, z) \le 2^{-\lambda} \sum_{f, z} Q_{FZ}(f, z) \sum_{k_x, k_y} P_{\mathrm{unif}}^{(2)}(k_x, k_y) = \frac{1}{|\mathcal{K}| \eta^2}. \qquad (22)$$
On the other hand, the probability of error of type I is bounded above as
$$P_{K_x K_y F Z}(\mathcal{A}^c) \le \left\| P_{K_x K_y F Z} - P_{\mathrm{unif}}^{(2)} \times P_{FZ} \right\| + P_{\mathrm{unif}}^{(2)} \times P_{FZ}(\mathcal{A}^c) \le \epsilon + P_{\mathrm{unif}}^{(2)} \times P_{FZ}(\mathcal{A}^c), \qquad (23)$$
where the first inequality follows from the definition of variational distance, and the second is a consequence of the security criterion (9). The second term above can be expressed as follows:
$$P_{\mathrm{unif}}^{(2)} \times P_{FZ}(\mathcal{A}^c) = \sum_{f, z} P_{FZ}(f, z) \frac{1}{|\mathcal{K}|} \sum_k \mathbb{1}\left((k, k, f, z) \in \mathcal{A}^c\right) = \sum_{f, z} P_{FZ}(f, z) \frac{1}{|\mathcal{K}|} \sum_k \mathbb{1}\left( Q_{K_x K_y|FZ}(k, k|f, z)\, |\mathcal{K}|^2 \eta^2 > 1 \right). \qquad (24)$$
The inner sum can be further upper bounded as
$$\sum_k \mathbb{1}\left(Q_{K_x K_y|FZ}(k, k|f, z)\, |\mathcal{K}|^2 \eta^2 > 1\right) \le \sum_k \left(Q_{K_x K_y|FZ}(k, k|f, z)\, |\mathcal{K}|^2 \eta^2\right)^{1/2} = |\mathcal{K}| \eta \sum_k Q_{K_x K_y|FZ}(k, k|f, z)^{1/2} = |\mathcal{K}| \eta \sum_k Q_{K_x|FZ}(k|f, z)^{1/2}\, Q_{K_y|FZ}(k|f, z)^{1/2}, \qquad (25)$$
where the previous equality uses Lemma 12 and the fact that, given $F$, $K_x$ and $K_y$ are functions of $(X, U_x)$ and $(Y, U_y)$, respectively. Next, an application of the Cauchy-Schwarz inequality to the sum on the right-side of (25) yields
$$\sum_k Q_{K_x|FZ}(k|f, z)^{1/2}\, Q_{K_y|FZ}(k|f, z)^{1/2} \le \left(\sum_{k_x} Q_{K_x|FZ}(k_x|f, z)\right)^{1/2} \left(\sum_{k_y} Q_{K_y|FZ}(k_y|f, z)\right)^{1/2} = 1. \qquad (26)$$

Upon combining (24)-(26), we obtain
$$P_{\mathrm{unif}}^{(2)} \times P_{FZ}(\mathcal{A}^c) \le \eta,$$
which along with (23) gives
$$P_{K_x K_y F Z}(\mathcal{A}^c) \le \epsilon + \eta. \qquad (27)$$
It follows from (27) and (22) that
$$\beta_{\epsilon + \eta}\left(P_{K_x K_y F Z},\, Q_{K_x K_y F Z}\right) \le \frac{1}{|\mathcal{K}| \eta^2},$$
which completes the proof.

Finally, we derive the upper bound for $S_{\epsilon,\delta}(X, Y|Z)$ using the data processing property of $\beta_\epsilon$: let $W$ be a stochastic mapping from $\mathcal{V}$ to $\mathcal{V}'$, i.e., for each $v \in \mathcal{V}$, $W(\cdot|v)$ is a distribution on $\mathcal{V}'$. Then, since the map $W$ followed by a test on $\mathcal{V}'$ can be regarded as a stochastic test on $\mathcal{V}$,
$$\beta_\epsilon(P, Q) \le \beta_\epsilon(P \circ W,\, Q \circ W), \qquad (28)$$
where $(P \circ W)(v') = \sum_v P(v)\, W(v'|v)$.

Proof of Theorem 3: Using the data processing inequality (28) with $P = P_{XYZ}$, $Q = Q_{XYZ}$, and $W = W_{K_x K_y F|XYZ}$, Lemma 11 implies that any $(K_x, K_y)$ satisfying the secrecy criterion (9) must satisfy
$$\log |\mathcal{K}| \le -\log \beta_{\epsilon + \eta}\left(P_{XYZ}, Q_{XYZ}\right) + 2 \log(1/\eta). \qquad (29)$$
Furthermore, from Lemma 10, an $(\epsilon, \delta)$-SK implies the existence of local estimates $K_x$ and $K_y$ satisfying (9) with $(\epsilon + \delta)$ in place of $\epsilon$. Thus, an $(\epsilon, \delta)$-SK with range $\mathcal{K}$ must satisfy (29) with $\epsilon$ replaced by $(\epsilon + \delta)$, which completes the proof.
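The data processing inequality (28) is easy to check numerically on finite alphabets: $\beta_\epsilon(P, Q)$ is the Neyman-Pearson optimum, obtained by accepting outcomes in decreasing order of the likelihood ratio $P/Q$ and randomizing on the boundary. The distributions and channel below are arbitrary illustrations:

```python
def beta(eps, P, Q):
    """Minimum type-II error Q(T) over randomized tests T with P(T) >= 1 - eps,
    via the Neyman-Pearson lemma (accept in decreasing order of P/Q)."""
    order = sorted(range(len(P)),
                   key=lambda i: P[i] / Q[i] if Q[i] > 0 else float("inf"),
                   reverse=True)
    need, b = 1.0 - eps, 0.0
    for i in order:
        if need <= 0 or P[i] == 0:
            continue
        take = min(1.0, need / P[i])  # fractional acceptance of the boundary point
        b += take * Q[i]
        need -= take * P[i]
    return b

def push(P, W):
    """Output distribution when the channel W (rows indexed by inputs) acts on P."""
    return [sum(P[v] * W[v][u] for v in range(len(P))) for u in range(len(W[0]))]

P = [0.5, 0.3, 0.2]
Q = [0.2, 0.3, 0.5]
W = [[0.7, 0.3], [0.4, 0.6], [0.1, 0.9]]  # a stochastic map to a binary alphabet
for eps in (0.1, 0.25, 0.5):
    assert beta(eps, P, Q) <= beta(eps, push(P, W), push(Q, W)) + 1e-12
```

The assertions confirm that processing both hypotheses through the same channel can only increase the optimal type-II error, which is exactly how (28) is used in the proof of Theorem 3.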

B. Proof of Lemma 4

Let $K_s = f_s(X)$ be the key for a fixed seed $s$. By using the Cauchy-Schwarz inequality,
$$\left\| P_{K_s V Z} - P_{\mathrm{unif}} \times P_{VZ} \right\| = \frac{1}{2} \sum_{k, v, z} \left| P_{K_s V Z}(k, v, z) - \frac{1}{|\mathcal{K}|} P_{VZ}(v, z) \right| = \frac{1}{2} \sum_{k, v, z} \left| P_{K_s V Z}(k, v, z) - \frac{1}{|\mathcal{K}|} P_{VZ}(v, z) \right| \frac{\sqrt{Q_Z(z)}}{\sqrt{Q_Z(z)}} \le \frac{1}{2} \sqrt{ \sum_{k, v, z} \frac{\left(P_{K_s V Z}(k, v, z) - \frac{1}{|\mathcal{K}|} P_{VZ}(v, z)\right)^2}{Q_Z(z)} }\; \sqrt{|\mathcal{K}| |\mathcal{V}|}.$$
Thus, by the concavity of $\sqrt{\cdot}$,
$$\left\| P_{K V Z S} - P_{\mathrm{unif}} \times P_{VZ} \times P_S \right\| \le \frac{\sqrt{|\mathcal{K}| |\mathcal{V}|}}{2} \sqrt{ \sum_s P_S(s) \sum_{k, v, z} \frac{\left(P_{K_s V Z}(k, v, z) - \frac{1}{|\mathcal{K}|} P_{VZ}(v, z)\right)^2}{Q_Z(z)} }.$$
The numerator of the sum can be rewritten as
$$\sum_s P_S(s) \sum_k \left(P_{K_s V Z}(k, v, z) - \frac{1}{|\mathcal{K}|} P_{VZ}(v, z)\right)^2 = \sum_s P_S(s) \sum_k \left[ P_{K_s V Z}(k, v, z)^2 - \frac{2}{|\mathcal{K}|} P_{K_s V Z}(k, v, z) P_{VZ}(v, z) + \frac{1}{|\mathcal{K}|^2} P_{VZ}(v, z)^2 \right]$$
$$= \sum_s P_S(s) \sum_k P_{K_s V Z}(k, v, z)^2 - \frac{1}{|\mathcal{K}|} P_{VZ}(v, z)^2$$
$$= \sum_x P_{XVZ}(x, v, z)^2 \left(1 - \frac{1}{|\mathcal{K}|}\right) + \sum_{x \neq x'} P_{XVZ}(x, v, z)\, P_{XVZ}(x', v, z) \left\{ \sum_s P_S(s)\, \mathbb{1}\left(f_s(x) = f_s(x')\right) - \frac{1}{|\mathcal{K}|} \right\} \le \sum_x P_{XVZ}(x, v, z)^2,$$

where we used the property of two-universality (9) in the last inequality. Thus, we have
$$\left\| P_{K V Z S} - P_{\mathrm{unif}} \times P_{VZ} \times P_S \right\| \le \frac{\sqrt{|\mathcal{K}| |\mathcal{V}|}}{2} \sqrt{\sum_{x, v, z} \frac{P_{XVZ}(x, v, z)^2}{Q_Z(z)}} \le \frac{\sqrt{|\mathcal{K}| |\mathcal{V}|}}{2} \sqrt{\sum_{x, v, z} \frac{P_{XVZ}(x, v, z)\, P_{XZ}(x, z)}{Q_Z(z)}} = \frac{\sqrt{|\mathcal{K}| |\mathcal{V}|}}{2} \sqrt{\sum_{x, z} \frac{P_{XZ}(x, z)^2}{Q_Z(z)}} \le \frac{1}{2} \sqrt{|\mathcal{K}| |\mathcal{V}|\, 2^{-H_{\min}(P_{XZ}|Q_Z)}}.$$
Note that the last step in the proof above shows that it is, in fact, the conditional Rényi entropy of order 2 that determines the leakage (see [3] for a similar observation). However, the weaker bound proved above suffices for our case, as it does for many other cases (cf. [24]).

C. A secret key agreement protocol requiring 1-bit feedback

In this section, we present a secret key agreement protocol which requires only 1 bit of feedback for generating an $(\epsilon, \delta)$-SK, in the special case when $Z$ is a constant. The main component is a high secrecy protocol which achieves arbitrarily high secrecy and the required reliability. In contrast to Protocol 1, which relied on slicing the spectrum of $P_{X|Y}$, the high secrecy protocol is based on slicing the spectrum of $P_X$. Since the party observing $X$ can determine the corresponding slice index, feedback is not needed and one-way communication suffices. We then convert this high secrecy protocol into a high reliability protocol, using a 1-bit feedback. The required protocol for generating an $(\epsilon, \delta)$-SK is obtained by randomizing between the high secrecy and the high reliability protocols. This protocol appeared in a conference version containing some of the results of this paper [6], but was discovered independently by [8] in a slightly different setting. Note that this is a different approach from the one used in Section IV, where a high reliability protocol was constructed and Proposition 1 was invoked to obtain a high secrecy protocol.

Description of the high secrecy protocol. We now describe our protocol formally.
The information reconciliation step of our protocol relies on a single-shot version of the classical Slepian-Wolf theorem [27] on distributed source coding for two sources [20], [10, Lemma 7.2.1] (see, also, [8]). We need a slight modification of the standard version: the encoder is still a random binning, but for decoding, instead of using a typical-set decoder for the underlying distribution, we use a mismatched typical-set decoder. We provide a proof for completeness.

Lemma 13 (Slepian-Wolf coding). Given two distributions $P_{XY}$ and $Q_{XY}$ on $\mathcal{X} \times \mathcal{Y}$, for every $\gamma > 0$ there exists a code $(e, d)$ of size $M$ with encoder $e : \mathcal{X} \to \{1, \ldots, M\}$ and decoder $d : \{1, \ldots, M\} \times \mathcal{Y} \to \mathcal{X}$, such that
$$P_{XY}\left(\{(x, y) : x \neq d(e(x), y)\}\right) \le P_{XY}\left(\left\{(x, y) : -\log Q_{X|Y}(x|y) \ge \log M - \gamma\right\}\right) + 2^{-\gamma}.$$

Proof. We use a random encoder given by random binning, i.e., each $x \in \mathcal{X}$ is assigned an index $i \in \{1, \ldots, M\}$ independently and uniformly at random. For the decoder, we use a typicality-like argument, but instead of the standard typical set defined via $P_{X|Y}$, we use the mismatched typical set
$$\mathcal{T}_{Q_{X|Y}} := \left\{(x, y) : -\log Q_{X|Y}(x|y) < \log M - \gamma\right\}.$$
Then, upon receiving $i \in \{1, \ldots, M\}$, the decoder outputs $\hat{x}$ if there exists a unique $\hat{x}$ satisfying $e(\hat{x}) = i$ and $(\hat{x}, y) \in \mathcal{T}_{Q_{X|Y}}$. An error occurs if $(X, Y) \notin \mathcal{T}_{Q_{X|Y}}$ or if there exists $\tilde{x} \neq X$ such that $(\tilde{x}, Y) \in \mathcal{T}_{Q_{X|Y}}$ and $e(\tilde{x}) = e(X)$. The former error event occurs with probability
$$P_{XY}\left(\left\{(x, y) : -\log Q_{X|Y}(x|y) \ge \log M - \gamma\right\}\right).$$
The probability of the second error event, averaged over the random binning, is bounded as
$$\mathbb{E}\left[P\left(\exists\, \tilde{x} \neq X \text{ s.t. } e(\tilde{x}) = e(X),\, (\tilde{x}, Y) \in \mathcal{T}_{Q_{X|Y}}\right)\right] \le \sum_{x, y} P_{XY}(x, y) \sum_{\tilde{x} \neq x} \mathbb{E}\left[\mathbb{1}\left(e(\tilde{x}) = e(x)\right)\right] \mathbb{1}\left((\tilde{x}, y) \in \mathcal{T}_{Q_{X|Y}}\right) = \sum_{x, y} P_{XY}(x, y) \frac{1}{M} \sum_{\tilde{x} \neq x} \mathbb{1}\left((\tilde{x}, y) \in \mathcal{T}_{Q_{X|Y}}\right) \le \sum_{x, y} P_{XY}(x, y)\, 2^{-\gamma} = 2^{-\gamma},$$
where the expectation is over the random encoder $e$. The first inequality above is by the union bound, the first equality is a property of random binning, and the second inequality follows from $\left|\{x : (x, y) \in \mathcal{T}_{Q_{X|Y}}\}\right| \le M 2^{-\gamma}$ for all $y \in \mathcal{Y}$. Thus, there exists a code $(e, d)$ satisfying the desired bound.

We are now in a position to describe our protocol, which is based on slicing the spectrum of $P_X$. We first slice the spectrum of $P_X$ into $L + 1$ parts. Specifically, for $1 \le i \le L$, let $\lambda_i = \lambda_{\min} + (i - 1)\Delta$
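The random-binning scheme of Lemma 13 can be checked numerically. With the matched choice $Q_{XY} = P_{XY}$, the sketch below computes the exact ensemble-average error of the encoder/decoder pair in the proof and compares it with the bound $P_{XY}(\mathcal{T}^c_{Q_{X|Y}}) + 2^{-\gamma}$; since the average meets the bound, some fixed binning does too. The small source at the bottom is a made-up illustration:

```python
import math
from itertools import product

def sw_average_error(P_XY, M, gamma):
    """Exact ensemble-average error of random binning with the mismatched
    typical-set decoder of Lemma 13 (matched case Q_XY = P_XY), returned
    together with the bound P_XY(T^c) + 2^{-gamma}."""
    xs = sorted({x for x, _ in P_XY})
    ys = sorted({y for _, y in P_XY})
    P_Y = {y: sum(P_XY.get((x, y), 0.0) for x in xs) for y in ys}
    def cond(x, y):
        return P_XY.get((x, y), 0.0) / P_Y[y]
    # Mismatched typical set: -log2 Q_{X|Y}(x|y) < log2(M) - gamma.
    T = {(x, y) for x, y in product(xs, ys)
         if cond(x, y) > 0 and -math.log2(cond(x, y)) < math.log2(M) - gamma}
    miss = sum(p for (x, y), p in P_XY.items() if (x, y) not in T)
    avg_err = 0.0
    for (x, y), p in P_XY.items():
        if (x, y) not in T:
            avg_err += p  # the decoder can never output x
            continue
        n_other = sum(1 for xt in xs if xt != x and (xt, y) in T)
        # Each competitor falls into x's bin independently with probability 1/M.
        avg_err += p * (1.0 - (1.0 - 1.0 / M) ** n_other)
    return avg_err, miss + 2.0 ** (-gamma)

# Illustrative source: Y is a noisy observation of X.
P_X = {0: 0.7, 1: 0.1, 2: 0.1, 3: 0.1}
P_XY = {(x, y): px * (0.85 if y == x else 0.05)
        for x, px in P_X.items() for y in range(4)}
avg_err, bound = sw_average_error(P_XY, M=8, gamma=1.0)
assert avg_err <= bound  # hence a fixed binning meeting the bound exists
```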


More information

Lecture 7 Introduction to Statistical Decision Theory

Lecture 7 Introduction to Statistical Decision Theory Lecture 7 Introduction to Statistical Decision Theory I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 20, 2016 1 / 55 I-Hsiang Wang IT Lecture 7

More information

EE5319R: Problem Set 3 Assigned: 24/08/16, Due: 31/08/16

EE5319R: Problem Set 3 Assigned: 24/08/16, Due: 31/08/16 EE539R: Problem Set 3 Assigned: 24/08/6, Due: 3/08/6. Cover and Thomas: Problem 2.30 (Maimum Entropy): Solution: We are required to maimize H(P X ) over all distributions P X on the non-negative integers

More information

Non-Asymptotic and Asymptotic Analyses on Markov Chains in Several Problems

Non-Asymptotic and Asymptotic Analyses on Markov Chains in Several Problems Non-Asymptotic and Asymptotic Analyses on Markov Chains in Several Problems Masahito Hayashi and Shun Watanabe Graduate School of Mathematics, Nagoya University, Japan, and Centre for Quantum Technologies,

More information

Notes on Zero Knowledge

Notes on Zero Knowledge U.C. Berkeley CS172: Automata, Computability and Complexity Handout 9 Professor Luca Trevisan 4/21/2015 Notes on Zero Knowledge These notes on zero knowledge protocols for quadratic residuosity are based

More information

Information Theoretic Limits of Randomness Generation

Information Theoretic Limits of Randomness Generation Information Theoretic Limits of Randomness Generation Abbas El Gamal Stanford University Shannon Centennial, University of Michigan, September 2016 Information theory The fundamental problem of communication

More information

Interference Channels with Source Cooperation

Interference Channels with Source Cooperation Interference Channels with Source Cooperation arxiv:95.319v1 [cs.it] 19 May 29 Vinod Prabhakaran and Pramod Viswanath Coordinated Science Laboratory University of Illinois, Urbana-Champaign Urbana, IL

More information

ECE598: Information-theoretic methods in high-dimensional statistics Spring 2016

ECE598: Information-theoretic methods in high-dimensional statistics Spring 2016 ECE598: Information-theoretic methods in high-dimensional statistics Spring 06 Lecture : Mutual Information Method Lecturer: Yihong Wu Scribe: Jaeho Lee, Mar, 06 Ed. Mar 9 Quick review: Assouad s lemma

More information

The Method of Types and Its Application to Information Hiding

The Method of Types and Its Application to Information Hiding The Method of Types and Its Application to Information Hiding Pierre Moulin University of Illinois at Urbana-Champaign www.ifp.uiuc.edu/ moulin/talks/eusipco05-slides.pdf EUSIPCO Antalya, September 7,

More information

A new converse in rate-distortion theory

A new converse in rate-distortion theory A new converse in rate-distortion theory Victoria Kostina, Sergio Verdú Dept. of Electrical Engineering, Princeton University, NJ, 08544, USA Abstract This paper shows new finite-blocklength converse bounds

More information

Mismatched Multi-letter Successive Decoding for the Multiple-Access Channel

Mismatched Multi-letter Successive Decoding for the Multiple-Access Channel Mismatched Multi-letter Successive Decoding for the Multiple-Access Channel Jonathan Scarlett University of Cambridge jms265@cam.ac.uk Alfonso Martinez Universitat Pompeu Fabra alfonso.martinez@ieee.org

More information

Information measures in simple coding problems

Information measures in simple coding problems Part I Information measures in simple coding problems in this web service in this web service Source coding and hypothesis testing; information measures A(discrete)source is a sequence {X i } i= of random

More information

Information Leakage of Correlated Source Coded Sequences over a Channel with an Eavesdropper

Information Leakage of Correlated Source Coded Sequences over a Channel with an Eavesdropper Information Leakage of Correlated Source Coded Sequences over a Channel with an Eavesdropper Reevana Balmahoon and Ling Cheng School of Electrical and Information Engineering University of the Witwatersrand

More information

Lecture 4 Channel Coding

Lecture 4 Channel Coding Capacity and the Weak Converse Lecture 4 Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 15, 2014 1 / 16 I-Hsiang Wang NIT Lecture 4 Capacity

More information

On Scalable Coding in the Presence of Decoder Side Information

On Scalable Coding in the Presence of Decoder Side Information On Scalable Coding in the Presence of Decoder Side Information Emrah Akyol, Urbashi Mitra Dep. of Electrical Eng. USC, CA, US Email: {eakyol, ubli}@usc.edu Ertem Tuncel Dep. of Electrical Eng. UC Riverside,

More information

Group Secret Key Agreement over State-Dependent Wireless Broadcast Channels

Group Secret Key Agreement over State-Dependent Wireless Broadcast Channels Group Secret Key Agreement over State-Dependent Wireless Broadcast Channels Mahdi Jafari Siavoshani Sharif University of Technology, Iran Shaunak Mishra, Suhas Diggavi, Christina Fragouli Institute of

More information

CS Communication Complexity: Applications and New Directions

CS Communication Complexity: Applications and New Directions CS 2429 - Communication Complexity: Applications and New Directions Lecturer: Toniann Pitassi 1 Introduction In this course we will define the basic two-party model of communication, as introduced in the

More information

The Capacity Region for Multi-source Multi-sink Network Coding

The Capacity Region for Multi-source Multi-sink Network Coding The Capacity Region for Multi-source Multi-sink Network Coding Xijin Yan Dept. of Electrical Eng. - Systems University of Southern California Los Angeles, CA, U.S.A. xyan@usc.edu Raymond W. Yeung Dept.

More information

Lecture 5 Channel Coding over Continuous Channels

Lecture 5 Channel Coding over Continuous Channels Lecture 5 Channel Coding over Continuous Channels I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw November 14, 2014 1 / 34 I-Hsiang Wang NIT Lecture 5 From

More information

ELEC546 Review of Information Theory

ELEC546 Review of Information Theory ELEC546 Review of Information Theory Vincent Lau 1/1/004 1 Review of Information Theory Entropy: Measure of uncertainty of a random variable X. The entropy of X, H(X), is given by: If X is a discrete random

More information

Homework Set #2 Data Compression, Huffman code and AEP

Homework Set #2 Data Compression, Huffman code and AEP Homework Set #2 Data Compression, Huffman code and AEP 1. Huffman coding. Consider the random variable ( x1 x X = 2 x 3 x 4 x 5 x 6 x 7 0.50 0.26 0.11 0.04 0.04 0.03 0.02 (a Find a binary Huffman code

More information

On the Capacity Region for Secure Index Coding

On the Capacity Region for Secure Index Coding On the Capacity Region for Secure Index Coding Yuxin Liu, Badri N. Vellambi, Young-Han Kim, and Parastoo Sadeghi Research School of Engineering, Australian National University, {yuxin.liu, parastoo.sadeghi}@anu.edu.au

More information

Universal Incremental Slepian-Wolf Coding

Universal Incremental Slepian-Wolf Coding Proceedings of the 43rd annual Allerton Conference, Monticello, IL, September 2004 Universal Incremental Slepian-Wolf Coding Stark C. Draper University of California, Berkeley Berkeley, CA, 94720 USA sdraper@eecs.berkeley.edu

More information

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University Chapter 4 Data Transmission and Channel Capacity Po-Ning Chen, Professor Department of Communications Engineering National Chiao Tung University Hsin Chu, Taiwan 30050, R.O.C. Principle of Data Transmission

More information

Entropies & Information Theory

Entropies & Information Theory Entropies & Information Theory LECTURE I Nilanjana Datta University of Cambridge,U.K. See lecture notes on: http://www.qi.damtp.cam.ac.uk/node/223 Quantum Information Theory Born out of Classical Information

More information

Minimum Energy Per Bit for Secret Key Acquisition Over Multipath Wireless Channels

Minimum Energy Per Bit for Secret Key Acquisition Over Multipath Wireless Channels Minimum Energy Per Bit for Secret Key Acquisition Over Multipath Wireless Channels Tzu-Han Chou Email: tchou@wisc.edu Akbar M. Sayeed Email: akbar@engr.wisc.edu Stark C. Draper Email: sdraper@ece.wisc.edu

More information

Simple Channel Coding Bounds

Simple Channel Coding Bounds ISIT 2009, Seoul, Korea, June 28 - July 3,2009 Simple Channel Coding Bounds Ligong Wang Signal and Information Processing Laboratory wang@isi.ee.ethz.ch Roger Colbeck Institute for Theoretical Physics,

More information

Keyless authentication in the presence of a simultaneously transmitting adversary

Keyless authentication in the presence of a simultaneously transmitting adversary Keyless authentication in the presence of a simultaneously transmitting adversary Eric Graves Army Research Lab Adelphi MD 20783 U.S.A. ericsgra@ufl.edu Paul Yu Army Research Lab Adelphi MD 20783 U.S.A.

More information

Towards control over fading channels

Towards control over fading channels Towards control over fading channels Paolo Minero, Massimo Franceschetti Advanced Network Science University of California San Diego, CA, USA mail: {minero,massimo}@ucsd.edu Invited Paper) Subhrakanti

More information

Simple and Tight Bounds for Information Reconciliation and Privacy Amplification

Simple and Tight Bounds for Information Reconciliation and Privacy Amplification Simple and Tight Bounds for Information Reconciliation and Privacy Amplification Renato Renner 1 and Stefan Wolf 2 1 Computer Science Department, ETH Zürich, Switzerland. renner@inf.ethz.ch. 2 Département

More information

ECE Information theory Final (Fall 2008)

ECE Information theory Final (Fall 2008) ECE 776 - Information theory Final (Fall 2008) Q.1. (1 point) Consider the following bursty transmission scheme for a Gaussian channel with noise power N and average power constraint P (i.e., 1/n X n i=1

More information

A Tight Upper Bound on the Second-Order Coding Rate of Parallel Gaussian Channels with Feedback

A Tight Upper Bound on the Second-Order Coding Rate of Parallel Gaussian Channels with Feedback A Tight Upper Bound on the Second-Order Coding Rate of Parallel Gaussian Channels with Feedback Vincent Y. F. Tan (NUS) Joint work with Silas L. Fong (Toronto) 2017 Information Theory Workshop, Kaohsiung,

More information

Source Coding and Function Computation: Optimal Rate in Zero-Error and Vanishing Zero-Error Regime

Source Coding and Function Computation: Optimal Rate in Zero-Error and Vanishing Zero-Error Regime Source Coding and Function Computation: Optimal Rate in Zero-Error and Vanishing Zero-Error Regime Solmaz Torabi Dept. of Electrical and Computer Engineering Drexel University st669@drexel.edu Advisor:

More information

Perfect Omniscience, Perfect Secrecy and Steiner Tree Packing

Perfect Omniscience, Perfect Secrecy and Steiner Tree Packing Perfect Omniscience, Perfect Secrecy and Steiner Tree Packing S. Nitinawarat and P. Narayan Department of Electrical and Computer Engineering and Institute for Systems Research University of Maryland College

More information

Quantum Achievability Proof via Collision Relative Entropy

Quantum Achievability Proof via Collision Relative Entropy Quantum Achievability Proof via Collision Relative Entropy Salman Beigi Institute for Research in Fundamental Sciences (IPM) Tehran, Iran Setemper 8, 2014 Based on a joint work with Amin Gohari arxiv:1312.3822

More information

(each row defines a probability distribution). Given n-strings x X n, y Y n we can use the absence of memory in the channel to compute

(each row defines a probability distribution). Given n-strings x X n, y Y n we can use the absence of memory in the channel to compute ENEE 739C: Advanced Topics in Signal Processing: Coding Theory Instructor: Alexander Barg Lecture 6 (draft; 9/6/03. Error exponents for Discrete Memoryless Channels http://www.enee.umd.edu/ abarg/enee739c/course.html

More information

Information-theoretic Secrecy A Cryptographic Perspective

Information-theoretic Secrecy A Cryptographic Perspective Information-theoretic Secrecy A Cryptographic Perspective Stefano Tessaro UC Santa Barbara WCS 2017 April 30, 2017 based on joint works with M. Bellare and A. Vardy Cryptography Computational assumptions

More information

3. Review of Probability and Statistics

3. Review of Probability and Statistics 3. Review of Probability and Statistics ECE 830, Spring 2014 Probabilistic models will be used throughout the course to represent noise, errors, and uncertainty in signal processing problems. This lecture

More information

A Summary of Multiple Access Channels

A Summary of Multiple Access Channels A Summary of Multiple Access Channels Wenyi Zhang February 24, 2003 Abstract In this summary we attempt to present a brief overview of the classical results on the information-theoretic aspects of multiple

More information

Achieving Shannon Capacity Region as Secrecy Rate Region in a Multiple Access Wiretap Channel

Achieving Shannon Capacity Region as Secrecy Rate Region in a Multiple Access Wiretap Channel Achieving Shannon Capacity Region as Secrecy Rate Region in a Multiple Access Wiretap Channel Shahid Mehraj Shah and Vinod Sharma Department of Electrical Communication Engineering, Indian Institute of

More information

Lecture 3: Channel Capacity

Lecture 3: Channel Capacity Lecture 3: Channel Capacity 1 Definitions Channel capacity is a measure of maximum information per channel usage one can get through a channel. This one of the fundamental concepts in information theory.

More information

Lecture 21: Minimax Theory

Lecture 21: Minimax Theory Lecture : Minimax Theory Akshay Krishnamurthy akshay@cs.umass.edu November 8, 07 Recap In the first part of the course, we spent the majority of our time studying risk minimization. We found many ways

More information

Password Cracking: The Effect of Bias on the Average Guesswork of Hash Functions

Password Cracking: The Effect of Bias on the Average Guesswork of Hash Functions Password Cracking: The Effect of Bias on the Average Guesswork of Hash Functions Yair Yona, and Suhas Diggavi, Fellow, IEEE Abstract arxiv:608.0232v4 [cs.cr] Jan 207 In this work we analyze the average

More information

National University of Singapore Department of Electrical & Computer Engineering. Examination for

National University of Singapore Department of Electrical & Computer Engineering. Examination for National University of Singapore Department of Electrical & Computer Engineering Examination for EE5139R Information Theory for Communication Systems (Semester I, 2014/15) November/December 2014 Time Allowed:

More information

On Third-Order Asymptotics for DMCs

On Third-Order Asymptotics for DMCs On Third-Order Asymptotics for DMCs Vincent Y. F. Tan Institute for Infocomm Research (I R) National University of Singapore (NUS) January 0, 013 Vincent Tan (I R and NUS) Third-Order Asymptotics for DMCs

More information

Lecture 15: Conditional and Joint Typicaility

Lecture 15: Conditional and Joint Typicaility EE376A Information Theory Lecture 1-02/26/2015 Lecture 15: Conditional and Joint Typicaility Lecturer: Kartik Venkat Scribe: Max Zimet, Brian Wai, Sepehr Nezami 1 Notation We always write a sequence of

More information

Lecture 11: Quantum Information III - Source Coding

Lecture 11: Quantum Information III - Source Coding CSCI5370 Quantum Computing November 25, 203 Lecture : Quantum Information III - Source Coding Lecturer: Shengyu Zhang Scribe: Hing Yin Tsang. Holevo s bound Suppose Alice has an information source X that

More information

Quantum Error Correcting Codes and Quantum Cryptography. Peter Shor M.I.T. Cambridge, MA 02139

Quantum Error Correcting Codes and Quantum Cryptography. Peter Shor M.I.T. Cambridge, MA 02139 Quantum Error Correcting Codes and Quantum Cryptography Peter Shor M.I.T. Cambridge, MA 02139 1 We start out with two processes which are fundamentally quantum: superdense coding and teleportation. Superdense

More information

Random Bernstein-Markov factors

Random Bernstein-Markov factors Random Bernstein-Markov factors Igor Pritsker and Koushik Ramachandran October 20, 208 Abstract For a polynomial P n of degree n, Bernstein s inequality states that P n n P n for all L p norms on the unit

More information

MMSE Dimension. snr. 1 We use the following asymptotic notation: f(x) = O (g(x)) if and only

MMSE Dimension. snr. 1 We use the following asymptotic notation: f(x) = O (g(x)) if and only MMSE Dimension Yihong Wu Department of Electrical Engineering Princeton University Princeton, NJ 08544, USA Email: yihongwu@princeton.edu Sergio Verdú Department of Electrical Engineering Princeton University

More information

MGMT 69000: Topics in High-dimensional Data Analysis Falll 2016

MGMT 69000: Topics in High-dimensional Data Analysis Falll 2016 MGMT 69000: Topics in High-dimensional Data Analysis Falll 2016 Lecture 14: Information Theoretic Methods Lecturer: Jiaming Xu Scribe: Hilda Ibriga, Adarsh Barik, December 02, 2016 Outline f-divergence

More information

Variable Length Codes for Degraded Broadcast Channels

Variable Length Codes for Degraded Broadcast Channels Variable Length Codes for Degraded Broadcast Channels Stéphane Musy School of Computer and Communication Sciences, EPFL CH-1015 Lausanne, Switzerland Email: stephane.musy@ep.ch Abstract This paper investigates

More information

Chapter 4: Asymptotic Properties of the MLE

Chapter 4: Asymptotic Properties of the MLE Chapter 4: Asymptotic Properties of the MLE Daniel O. Scharfstein 09/19/13 1 / 1 Maximum Likelihood Maximum likelihood is the most powerful tool for estimation. In this part of the course, we will consider

More information

An Achievable Error Exponent for the Mismatched Multiple-Access Channel

An Achievable Error Exponent for the Mismatched Multiple-Access Channel An Achievable Error Exponent for the Mismatched Multiple-Access Channel Jonathan Scarlett University of Cambridge jms265@camacuk Albert Guillén i Fàbregas ICREA & Universitat Pompeu Fabra University of

More information

The Central Limit Theorem: More of the Story

The Central Limit Theorem: More of the Story The Central Limit Theorem: More of the Story Steven Janke November 2015 Steven Janke (Seminar) The Central Limit Theorem:More of the Story November 2015 1 / 33 Central Limit Theorem Theorem (Central Limit

More information

Simultaneous Nonunique Decoding Is Rate-Optimal

Simultaneous Nonunique Decoding Is Rate-Optimal Fiftieth Annual Allerton Conference Allerton House, UIUC, Illinois, USA October 1-5, 2012 Simultaneous Nonunique Decoding Is Rate-Optimal Bernd Bandemer University of California, San Diego La Jolla, CA

More information

Multiterminal Source Coding with an Entropy-Based Distortion Measure

Multiterminal Source Coding with an Entropy-Based Distortion Measure Multiterminal Source Coding with an Entropy-Based Distortion Measure Thomas Courtade and Rick Wesel Department of Electrical Engineering University of California, Los Angeles 4 August, 2011 IEEE International

More information

Chapter 2: Entropy and Mutual Information. University of Illinois at Chicago ECE 534, Natasha Devroye

Chapter 2: Entropy and Mutual Information. University of Illinois at Chicago ECE 534, Natasha Devroye Chapter 2: Entropy and Mutual Information Chapter 2 outline Definitions Entropy Joint entropy, conditional entropy Relative entropy, mutual information Chain rules Jensen s inequality Log-sum inequality

More information