Achievable Error Exponents for the Private Fingerprinting Game


Anelia Somekh-Baruch and Neri Merhav
Department of Electrical Engineering, Technion - IIT, Haifa 32000, Israel
{anelia@tx, merhav@ee}.technion.ac.il

Abstract

Fingerprinting systems in the presence of collusive attacks are analyzed as a game between a fingerprinter and a decoder, on the one hand, and a coalition of two or more attackers, on the other hand. The fingerprinter distributes, to different users, different fingerprinted copies of a host data (covertext), drawn from a memoryless stationary source, embedded with different fingerprints. The coalition members create a forgery of the data while aiming at erasing the fingerprints in order not to be detected. Their action is modelled by a multiple access channel (MAC). We analyze the performance of two classes of decoders, associated with different kinds of error events. The decoder of the first class aims at detecting the entire coalition, whereas the second is satisfied with the detection of at least one member of the coalition. Both decoders have access to the original covertext data and observe the forgery in order to identify member(s) of the coalition. Motivated by a worst-case approach, we assume that the coalition of attackers is informed of the hiding strategy taken by the fingerprinter and the decoder, while the latter are uninformed of the attacking scheme. Single-letter expressions for the error exponents of the two kinds are obtained, a decoder that is optimal with respect to both kinds of errors is introduced, and the worst-case attack channel is characterized.

1 Introduction

In fingerprinting systems, several copies of the same host data are embedded with different fingerprints (designating, e.g., the different digital signatures or serial numbers of the copies they are provided with) and distributed to different users. The fingerprints identify one of many users in order to enable copyright protection. In this situation, two or more users can form a
coalition, and collusive attacks on the fingerprinting system are possible and have to be taken into account in the code design. Each of the coalition members contributes his distinct fingerprinted copy in order to create a better forgery. Hence, the fingerprinting problem can be thought of as a game between the fingerprinter and the coalition of attackers.

As mentioned in [32], the fingerprinting game is closely related to (and is actually an extension of) the watermarking game, which in turn can be modelled as a coded communication system equipped with side information, for a single user as opposed to one of many users. Watermarking systems have been studied from the information-theoretic point of view in several works¹ (see, e.g., [4], [7], [8], [14], [19], [21], [23], [24], [25], [30], [31] and [33]). Several researchers (see, e.g., [1]-[3], [5], [6], [9], [12], [15], [17], [18], [26], [27], [29], [34]-[36] and references therein) have proposed and analyzed fingerprinting systems that aim at protection against collusion attacks under various conditions. The research in this problem area has largely focused on combinatorial analysis and algorithmic issues, whereas there has been much less work on information-theoretic aspects. As exceptions to the last statement, we mention several papers, such as [2], [20], [32] and [34]. In [32], we presented and analyzed a game-theoretic model of private² fingerprinting systems in the presence of colluding attacks. The players of the game are, on the one hand, an encoder-decoder pair and, on the other hand, a few attackers. The decoder faces the rather complicated goal of reliably identifying coalition members based on the forgery and the covertext. The distortion between the forgery and the original data should not exceed a certain level. A realistic worst-case approach, taken in [32], is based on the assumption that the attackers are informed of the covertext distribution and the coding-decoding strategy (up to a random secret key), whereas the encoder and decoder are not informed of the attack strategy. Thus, the encoder and decoder are assumed to adopt a random coding strategy as a means of protection against malicious attackers. Random coding is enabled by a secret key, shared by the encoder and decoder, that specifies the particular codebook that has been drawn. The action of
the attackers is allowed to be stochastic, and therefore one can model the attack by a MAC, whose output is the forgery and whose inputs are the fingerprinted copies observed by the coalition members. The problem of the maximin game between the fingerprinter and the MAC applied by the users is addressed in [32]. Two types of decoders are considered: a single-output (SO) decoder, whose single output is a message index, and a multiple-output (MO) decoder, whose output is a list containing L message indices (where L is the size of the coalition). The two decoders aim at detecting only one member of the coalition. When the SO decoder is concerned, an error is declared if its output does not belong to the coalition, whereas when the MO decoder is concerned, an error is declared when none of its outputs belongs to the coalition. The reason for aiming at this goal (adopted also in, e.g., [2], [29] and references therein) is that when the forgery is required to resemble only at least one of the fingerprinted copies observed by the attacker, the target of detecting the entire coalition is impossible, since the attacker can decide to almost ignore one of its inputs, and thus prevent reliable decoding of the input that was ignored. It is assumed in [32] that the encoder uses constant composition (CC) codes, and under this assumption, a single-letter expression for the capacity of the private fingerprinting game with respect to (wrt) the two types of decoders is found, and it is shown that their capacities

¹ For a more comprehensive survey, see [32].
² Two models of the game may be considered: the private game, in which the covertext is available also at the decoder's side, and the public game, where it is only available to the encoder.

are the same. Asymptotically optimal strategies, taken by the adversaries, are characterized. Also, lower bounds on the error exponents of the two types of decoders are derived. In this paper, we analyze private fingerprinting systems from a different perspective. Two kinds of error events are investigated. An error of the first kind is associated with a decoder that aims at decoding the entire coalition, whereas an error of the second kind is associated with a decoder whose target is to identify at least one member of the coalition. The setup of the game is similar to the one investigated in [32], with a few modifications. The first modification is that here, a more refined analysis of error exponents is performed, while in [32], the capacity is the main quantity of interest (although some lower bounds on the error exponents are provided as well). The second difference between this paper and [32] concerns the distortion constraint imposed on the attacker. In this paper, the distortion between each of the fingerprinted copies observed by the attacker and the forgery he produces should be kept small, whereas in [32] it was sufficient to maintain a small amount of distortion between the forgery and one of the observed fingerprinted copies. This difference stems from the need to prevent an attack strategy that almost ignores one of its inputs, thereby rendering the goal of identifying the entire coalition impossible. Moreover, although ignoring one of the observed fingerprinted copies is a legitimate attack, it makes no sense for the attacker to make an effort to enlarge his coalition (by purchasing as many legal copies as possible), only to ignore some of them in the end. We define the achievable error exponent of the first/second kind as a number E such that there exists a random CC scheme whose asymptotic performance, in terms of the average probability of error (in the logarithmic scale) of the first/second kind, is given by E, assuming the attacker knows the
encoding-decoding scheme and can adopt the worst-case strategy associated with the kind of error of interest. Single-letter lower and upper bounds are provided for the error exponents of the first and second kinds. In the case of an error of the first kind, the lower and upper bounds coincide, while for the error of the second kind, they coincide for a certain range of low rates. Another important result is that one can use a universal decoder (used also in [32]) that is asymptotically optimal for the error of the first kind as well as for the error of the second kind. We also deduce lower bounds on the capacities of the fingerprinting games corresponding to the two kinds of errors. The paper is organized as follows: In Section 2, some notation conventions are defined. A statement of the problem, which is relevant to the entire paper, is given in Section 3. In Section 4, we describe our main results concerning the set of achievable error exponents, and Section 5 is devoted to a discussion of the results. Finally, the proofs of the theorems appear in Sections 6 and 7, and the proofs of lemmas are deferred to Section 8.

2 Notation and Definitions

Henceforth, we adopt the following notation conventions. Random variables (RVs) will be denoted by capital letters, while their realizations will be denoted by the respective lower-case letters. Random vectors of dimension n will be denoted by boldface letters; thus, for example, if X denotes a random vector (X_1, ..., X_n), then x = (x_1, ..., x_n) designates a specific sample value of X. The alphabet of a scalar RV, X, will be designated by the corresponding calligraphic letter X. The n-fold Cartesian power of a generic alphabet A, that is, the set of all n-vectors over A, will be denoted A^n. The set of probability mass functions (pmfs) defined on an alphabet X will be denoted by P(X), and the set of conditional pmfs from U to X will be denoted P(X|U), i.e.,

P(X|U) = { P(X|U) : P(x|u) ≥ 0, Σ_{x∈X} P(x|u) = 1, ∀(u, x) ∈ U × X }.   (1)

The notation 1_A, where A is an event, will designate the indicator function of A, i.e., 1_A = 1 if A occurs, and 1_A = 0 otherwise. We adopt the convention that if a set T is empty, then min_{t∈T} f(t) = +∞ and, similarly, max_{t∈T} f(t) = −∞. The notation c_n ≐ d_n, for two sequences {c_n}_{n≥1} and {d_n}_{n≥1}, will express asymptotic equality in the exponential scale, i.e., lim_{n→∞} (1/n) log(c_n/d_n) = 0. Similarly, c_n ⪆ d_n will stand for lim inf_{n→∞} (1/n) log(c_n/d_n) ≥ 0, and so on. The empirical pmf induced by a vector x ∈ X^n is the vector P̂_x = {P̂_x(a), a ∈ X}, where P̂_x(a) is the relative frequency of the letter a in the vector x. The set of empirical pmfs induced by all n-vectors in X^n will be denoted by P_n(X), i.e.,

P_n(X) = { P̂_x : x ∈ X^n }.   (2)

The type class T_x (or T(P̂_x)) is the set of n-vectors x̃ such that P̂_x̃ = P̂_x. Similarly, the joint empirical pmf induced by two n-vectors, x, y, is the vector P̂_xy = {P̂_xy(a, b), a ∈ X, b ∈ Y}, where P̂_xy(a, b) is the relative frequency of the pair (x_i, y_i) = (a, b). The type class T_xy (or T(P̂_xy)) is the set of all pairs of n-vectors x̃ ∈ X^n, ỹ ∈ Y^n such that P̂_x̃ỹ = P̂_xy. The conditional type class T_{y|x} (or T_x(P̂_{y|x})), for a given x, is the set of all n-vectors ỹ ∈ Y^n such that T_xỹ = T_xy, and the conditional empirical pmf P̂_{y|x} is defined by

P̂_{y|x}(y|x) = P̂_xy(x, y) / P̂_x(x),   ∀x ∈ X : P̂_x(x) > 0.

For a given empirical pmf P̂ ∈ P_n(A), define the set of conditional empirical pmfs

P_n(B, P̂) = { P ∈ P(B|A) : n · P̂(a) · P(b|a) is an integer for all (a, b) ∈ A × B }.   (3)

The variational distance between two pmfs P and P′, defined on the same set A, will be denoted ‖P − P′‖, i.e.,

‖P − P′‖ = Σ_{a∈A} |P(a) − P′(a)|.   (4)
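These method-of-types objects are concrete and easy to compute for specific sequences. The following Python sketch is an illustrative aid only (the helper names are ours, not the paper's); it computes empirical and conditional empirical pmfs and tests membership in a type class:

```python
from collections import Counter

def empirical_pmf(x):
    """Empirical pmf P-hat_x: relative frequency of each letter in x."""
    n = len(x)
    return {a: c / n for a, c in Counter(x).items()}

def same_type(x1, x2):
    """x1 and x2 lie in the same type class T_x iff they induce the same empirical pmf."""
    return len(x1) == len(x2) and empirical_pmf(x1) == empirical_pmf(x2)

def conditional_empirical_pmf(x, y):
    """Conditional empirical pmf P-hat_{y|x}(b|a) = P-hat_{xy}(a, b) / P-hat_x(a)."""
    joint = Counter(zip(x, y))
    marg = Counter(x)
    return {(a, b): c / marg[a] for (a, b), c in joint.items()}

x = "aabab"
assert same_type(x, "ababa")       # a permutation of x stays in T_x
assert not same_type(x, "aaaab")   # different composition, different type class
assert empirical_pmf(x) == {'a': 0.6, 'b': 0.4}
assert conditional_empirical_pmf("aab", "xyy") == {('a', 'x'): 0.5,
                                                   ('a', 'y'): 0.5,
                                                   ('b', 'y'): 1.0}
```

Note that |T_x| depends on x only through P̂_x; this invariance is what drives the type-class counting arguments used in Sections 6-8.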

Information-theoretic quantities, such as the entropy of the random variable X whose pmf is P, will be denoted by either H_P(X) or H(P), interchangeably. Similarly, the mutual information between X and Y given U, under the joint pmf P, will be denoted by I_P(X; Y|U), etc. The divergence, or Kullback-Leibler distance, between two pmfs P and Q on A, where |A| < ∞, is defined as

D(P‖Q) = Σ_{a∈A} P(a) log [P(a)/Q(a)],

where we use the conventions 0·log 0 = 0 and p·log(p/0) = ∞ for p > 0. Information-theoretic quantities governed by empirical measures induced by the n-vectors u, x, y will have a special notation, e.g.,

Ĥ_x ≜ H_{P̂_x}(X),   Î_{x;y|u} ≜ I_{P̂_uxy}(X; Y|U).   (5)

The notation U → X → Y will signify that the RVs U, X, Y, in this order, form a Markov chain. We shall have a particular interest in strongly exchangeable channels. A strongly exchangeable channel is defined as a conditional distribution P_{Y|XX̃} with input alphabet (X^n)² and output alphabet Y^n for which, for every x ∈ X^n, x̃ ∈ X^n, y ∈ Y^n and every permutation π of {1, ..., n},

P_{Y|XX̃}(y|x, x̃) = P_{Y|XX̃}(πy|πx, πx̃).   (6)

Obviously, every DMC is strongly exchangeable.

3 Statement of the Problem

As mentioned earlier, the setup of the game considered in this paper resembles the one investigated in [32], with several modifications discussed in the Introduction. Nevertheless, for the sake of completeness, we include the entire description of the modified game here. Let U, X and Y be finite sets designating the alphabets of a covertext symbol, a fingerprinted symbol and a forgery symbol, respectively. For convenience, we assume³ U = X. (Footnote 3: It is natural to make this assumption, because one of the things one wants to keep secret is the very existence of fingerprints. If U ≠ X, it would be immediately apparent that the image or signal is fingerprinted.) Let U designate the random covertext sequence within which the fingerprints will be hidden. The n-vector U is composed of n i.i.d. RVs whose joint pmf is denoted by P_U^n, with single-letter marginal pmf P_U, and we shall assume that min_{u∈U} P_U(u) > 0. The fingerprinter creates, at random, M = 2^{nR} fingerprinted versions of the covertext, denoted X_i, i = 1, ..., M, which will be referred to as a codebook. A secret key, K_n, is an RV, independent of the fingerprints and the covertext, known to both the encoder and decoder, but unknown to the attacker. Let K_n stand for the alphabet of K_n, and let P_{K_n} stand for the distribution of K_n.

Definition 1. A rate-R fingerprinting encoder of block-length n is a function which maps the secret key realization k_n, the covertext data u, and the watermark message m ∈ M_n into

a stegotext vector (or fingerprinted vector) x, i.e.,

f_n : K_n × U^n × M_n → X^n,   (7)

where

M_n = {1, ..., M}.   (8)

Having created the fingerprinted copies X_i, i ∈ M_n, the encoder distributes them arbitrarily to M different users. We assume that the distributed copies meet the following distortion constraint wrt a given distortion level D_1:

Pr{ d_1(U, X_i) ≤ nD_1 } = 1   ∀i ∈ M_n,   (9)

where d_1 : U × X → R_+ denotes a single-letter distortion measure and d_1(u, x) = Σ_{j=1}^n d_1(u_j, x_j) for u ∈ U^n and x ∈ X^n. Let W_a and W_b be the two⁴ users that take part in the coalition. It is assumed that W_a, W_b are independent and both uniformly distributed over the set M_n. Let X_a and X_b stand for the two corresponding sequences of fingerprinted data received by the users W_a and W_b. Namely, X_a and X_b are available to the attacker, who creates the forgery, Y, as the output of an attack channel P_{Y|X_aX_b} from X^n × X^n to Y^n. Let d_2 : X × Y → R_+ denote another single-letter distortion measure, and denote d_2(x, y) = Σ_{i=1}^n d_2(x_i, y_i) for x ∈ X^n and y ∈ Y^n. The attacker is assumed to meet the following distortion constraint⁵:

Pr{ d_2(X_a, Y) ≤ nD_2 and d_2(X_b, Y) ≤ nD_2 } = 1.   (10)

Denote by P_n^{d_2} the set of channels satisfying (10). The decoder observes the realization of the secret key (and thus knows the particular codebook that has been drawn), the covertext U and the forgery Y, and its aim is to detect members of the coalition. Thus, it is a function φ_n : K_n × U^n × Y^n → M_n². As mentioned earlier, we analyze two kinds of errors: the first refers to a decoder that aims at decoding the two messages (detecting the entire coalition), and the second refers to a decoder that is less ambitious and is satisfied with correct decoding of at least one of the messages. We refer to the resulting errors as error of the first kind and error of the second kind, respectively. The quadruple F_n = (P_{K_n}, f_n, M_n, φ_n) will be referred to as a rate-R randomized fingerprinting code.

Definition 2. Let N_n(D_1) be the set of mappings η : U^n → P(X|U) such that

η(u) ∈ P_n(X, P̂_u)  and  E_{P̂_u × η(u)} d_1(U, X) ≤ D_1   ∀u ∈ U^n.   (11)

⁴ In the general fingerprinting game, the coalition may have more than two members. However, for the sake of simplicity, we focus on the case of two coalition members. The results can be extended to a general coalition size.
⁵ In [32], the constraint was Pr{ min{d_2(X_a, Y), d_2(X_b, Y)} ≤ nD_2 } = 1, i.e., the forgery had to be close to at least one of the fingerprinted copies.
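Constraints (9) and (11) operate at two levels: (9) bounds the realized per-block distortion of every distributed copy, while (11) bounds the expected per-letter distortion of the conditional type η(u). A minimal Python sketch of both checks (the function names and the Hamming-distortion example are ours, for illustration only):

```python
def block_distortion(u, x, d1):
    """d_1(u, x) = sum_j d_1(u_j, x_j): the per-block distortion of constraint (9)."""
    return sum(d1(uj, xj) for uj, xj in zip(u, x))

def expected_distortion(p_u_hat, eta_u, d1):
    """Expected per-letter distortion E d_1(U, X) under the empirical pmf of u
    and the conditional pmf eta(u), as in condition (11)."""
    return sum(p_u_hat[a] * p_xu[b] * d1(a, b)
               for a, p_xu in eta_u.items()
               for b in p_xu)

hamming = lambda a, b: 0.0 if a == b else 1.0
p_u_hat = {0: 0.5, 1: 0.5}
eta_u = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.2, 1: 0.8}}   # flip each letter w.p. 0.2
assert abs(expected_distortion(p_u_hat, eta_u, hamming) - 0.2) < 1e-12
assert block_distortion([0, 1, 1, 0], [0, 1, 0, 0], hamming) == 1.0
```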

We shall focus on the following subclass of fingerprinting codes:

Definition 3. A rate-R constant composition (CC) fingerprinting code of block-length n is a code with the following structure of a secret key and encoder: the fingerprinted copies X_i, i ∈ M_n, are drawn independently given U, uniformly over a single conditional type class T_{x|u} for all u. The choice of the conditional type class is defined by a mapping⁶ η ∈ N_n(D_1), i.e., it is given by T_u(η(u)). In the sequel, we shall use the abbreviation

T_η(u) ≜ T_u(η(u)).   (12)

Denote by F_n^{d_1}(R) the set of rate-R randomized CC fingerprinting codes induced by a mapping η_n ∈ N_n(D_1). A code F_n ∈ F_n^{d_1}(R) is therefore defined by the triple (η_n, R, φ_n). The fact that we focus on the wide class of CC fingerprinting encoders can be justified by practical considerations. As explained in [32], any practical randomized encoder should have some enumeration mechanism, where one first randomly selects a number under the uniform distribution in some range (in particular, an integer according to the key), and then this number is mapped to a codeword (given U). It is desired, then, that to implement this mapping, one should not need (exponentially) large tables but can use a simple function. It is well known that there are indeed simple ways to enumerate sequences which belong to the same type class. See, for example, [10], where such an enumeration method is proposed; the same idea can easily be extended to conditional type classes. For a given realization of a secret key, K_n, let the output of the decoder be given by Ŵ = (Ŵ_1, Ŵ_2) = φ_n(K_n, U, Y). An error of the first kind occurs when not all coalition members are correctly detected,⁷ and an error of the second kind occurs if no coalition member is correctly detected by the decoder. Hence, the average probabilities of error of the first and second kinds are given by

P_e^{(1)}(F_n, P_{Y|X_aX_b}) ≜ Pr{ [Ŵ ≠ (W_a, W_b) and Ŵ ≠ (W_b, W_a)] or [W_a = W_b and Ŵ_1 ≠ W_a and Ŵ_2 ≠ W_a] }

P_e^{(2)}(F_n, P_{Y|X_aX_b}) ≜ Pr{ Ŵ_1 ≠ W_a and Ŵ_1 ≠ W_b and Ŵ_2 ≠ W_a and Ŵ_2 ≠ W_b },   (13)

respectively, where the probability is induced by the covertext, the members of the coalition, the ensemble of all possible codebooks, and the action of the attack channel P_{Y|X_aX_b}, when the randomized code F_n is employed.

Definition 4. An achievable rate R wrt error of the first kind is one for which there exists a sequence {F_n ∈ F_n^{d_1}(R), n ≥ 1} such that

lim sup_{n→∞} sup_{P_{Y|X_aX_b} ∈ P_n^{d_2}} P_e^{(1)}(F_n, P_{Y|X_aX_b}) = 0.

⁶ We require that η ∈ N_n(D_1) in order to consider only conditional types such that constraint (9) is met.
⁷ In the rare case where W_a = W_b, it is sufficient that either Ŵ_1 = W_a or Ŵ_2 = W_a.
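In words: an error of the first kind occurs unless the decoder recovers the whole coalition (as an unordered pair), and an error of the second kind occurs only if it recovers no member at all. A minimal sketch of the two events in (13) (the function names are ours, for illustration only):

```python
def error_first_kind(w_a, w_b, w_hat):
    """Error of the first kind: not all coalition members correctly detected.
    When w_a == w_b, it suffices that either output equals the common message."""
    w1, w2 = w_hat
    if w_a == w_b:
        return w1 != w_a and w2 != w_a
    return w_hat != (w_a, w_b) and w_hat != (w_b, w_a)

def error_second_kind(w_a, w_b, w_hat):
    """Error of the second kind: no coalition member is detected at all."""
    w1, w2 = w_hat
    return w1 not in (w_a, w_b) and w2 not in (w_a, w_b)

# Identifying only one member is an error of the first kind but not of the second.
assert error_first_kind(3, 7, (3, 5)) and not error_second_kind(3, 7, (3, 5))
assert not error_first_kind(3, 7, (7, 3))   # the pair is unordered
assert error_second_kind(3, 7, (1, 2))      # both outputs miss the coalition
```

This ordering of the two events also explains why P_e^(2) ≤ P_e^(1) for any code and channel: the second-kind error event is contained in the first-kind one.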

Definition 5. The CC capacity of the private fingerprinting game wrt error of the first kind, C^{(1)}(P_U, D_1, D_2), is defined as the supremum of all achievable rates. Similar definitions apply for the achievable rate and the CC capacity wrt error of the second kind, C^{(2)}(P_U, D_1, D_2). In fact, the capacity is a function of the distortion levels D_1, D_2 and the covertext symbol distribution P_U. Define the negative normalized log error probability of the fingerprinting game wrt error of the first and second kind, when the code F_n = (η_n, φ_n, R) is applied, as

e_n^{(i)}(P_U, D_1, D_2, η_n, φ_n, R) ≜ −(1/n) log P_e^{(i)}(η_n, φ_n, R, P_{Y|X_aX_b}),   i = 1, 2,   (14)

respectively.

Definition 6. The error exponents of the fingerprinting game wrt error of the first and second kind at rate R are defined by

e^{(i)}(P_U, D_1, D_2, R) ≜ lim inf_{n→∞} max_{η_n ∈ N_n(D_1), φ_n} min_{P_{Y|X_aX_b} ∈ P_n^{d_2}} e_n^{(i)}(P_U, D_1, D_2, η_n, φ_n, R),   i = 1, 2,   (15)

respectively. Our main goal in this paper is to establish a closed-form expression for the error exponents of the fingerprinting game wrt errors of the first and second kinds at rate R.

4 Main Results

For a given measure P_{ŨX} ∈ P(U × X), define the set

P^{d_2}(P_{ŨX}, D_2) ≜ { P_{X̃Y|ŨX} ∈ P(X × Y | U × X) : P_{X̃|Ũ} = P_{X|Ũ}, max{E d_2(X, Y), E d_2(X̃, Y)} ≤ D_2 },   (16)

where the conditional measures P_{X|Ũ} and P_{X̃|Ũ} are the appropriate marginals of P_{ŨX} × P_{X̃Y|ŨX}, and the expectations are wrt P_{ŨX} × P_{X̃Y|ŨX}. For a given pmf P = P_{ŨXX̃Y} ∈ P(U × X² × Y), define the quantities

ɛ_a(P_{ŨXX̃Y}, R) ≜ [ min{ I_P(X; X̃Y|Ũ) − R, I_P(X̃; XY|Ũ) − R, I_P(X̃; Y|Ũ) + I_P(X; X̃Y|Ũ) − 2R } ]_+

ɛ_b(P_{ŨXX̃Y}, R) ≜ [ I_P(X̃; Y|Ũ) + I_P(X; X̃Y|Ũ) − 2R ]_+,   (17)

where [t]_+ ≜ max{t, 0}. Define the following quantities for i = a, b:

E_i(P_U, D_1, D_2, R) ≜ min_{P_Ũ} max_{P_{X|Ũ}} min_{P_{X̃Y|ŨX}} [ D(P_{ŨXX̃Y} ‖ P_U × P_{X|Ũ} × P_{X̃|Ũ} × P_{Y|XX̃}) + ɛ_i(P_{ŨXX̃Y}, R) ],   (18)

where the outermost minimization is over P_Ũ ∈ P(U), the maximization is over {P_{X|Ũ} : E d_1(Ũ, X) ≤ D_1}, and the inner minimization is over P_{X̃Y|ŨX} ∈ P^{d_2}(P_{ŨX}, D_2). The measure P_{Y|XX̃} is the appropriate marginal distribution induced by P_{ŨXX̃Y}. Note that

D(P_{ŨXX̃Y} ‖ P_U × P_{X|Ũ} × P_{X̃|Ũ} × P_{Y|XX̃}) = D(P_Ũ ‖ P_U) + I_P(X; X̃|Ũ) + I_P(Ũ; Y|XX̃).   (19)

For convenience, we denote

D(P_{ŨXX̃Y}, P_U) ≜ D(P_{ŨXX̃Y} ‖ P_U × P_{X|Ũ} × P_{X̃|Ũ} × P_{Y|XX̃}).   (20)

The following theorem provides a single-letter expression for the error exponent of the first kind.

Theorem 1. For all P_U, D_1, D_2, R,

e^{(1)}(P_U, D_1, D_2, R) = E_a(P_U, D_1, D_2, R).   (21)

The proof of Theorem 1 appears in Section 6. It is composed of a lower bound and an upper bound on e^{(1)}(P_U, D_1, D_2, R) which coincide. As for the error of the second kind, define

Ẽ_b(P_U, D_1, D_2, R) ≜ min_{P_Ũ} max_{P_{X|Ũ}} min_{P_{X̃Y|ŨX}} [ D(P_{ŨXX̃Y} ‖ P_U × P_{X|Ũ} × P_{X̃|Ũ} × P_{Y|XX̃}) + ɛ_b(P_{ŨXX̃Y}, R) ],   (22)

where the outermost minimization is over P_Ũ ∈ P(U), the maximization is over {P_{X|Ũ} : E d_1(Ũ, X) ≤ D_1}, and the inner minimization is over {P_{X̃Y|ŨX} ∈ P^{d_2}(P_{ŨX}, D_2) : min{I_P(X; X̃Y|Ũ), I_P(X̃; XY|Ũ)} ≥ R}. Let R_0(P_U, D_1, D_2) be the lowest rate for which the constraint min{I_P(X; X̃Y|Ũ), I_P(X̃; XY|Ũ)} ≥ R appearing in the minimization becomes effective. The following theorem provides lower and upper bounds on the error exponent of the second kind.

Theorem 2. For all P_U, D_1, D_2, R,

E_b(P_U, D_1, D_2, R) ≤ e^{(2)}(P_U, D_1, D_2, R) ≤ Ẽ_b(P_U, D_1, D_2, R),   (23)

with equality whenever R ∈ [0, R_0(P_U, D_1, D_2)]. The proof of Theorem 2 appears in Section 7. The term ɛ_i(P_{ŨXX̃Y}, R), which appears in E_i(P_U, D_1, D_2, R), can be interpreted as the error exponent conditioned on the event that (U, X, X̃, Y) lies within the type class corresponding to P_{ŨXX̃Y}, and the term D(P_{ŨXX̃Y} ‖ P_U × P_{X|Ũ} × P_{X̃|Ũ} × P_{Y|XX̃}) is a result of the averaging over the types.
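The quantities ɛ_a and ɛ_b in (17) are built from conditional mutual informations of the joint pmf P_{ŨXX̃Y}. As an illustrative aid only (not part of the paper's derivation; the names are ours), the following Python/NumPy sketch evaluates them for a joint pmf given as a 4-dimensional array with axes ordered (Ũ, X, X̃, Y):

```python
import numpy as np

def entropy(P, keep):
    """Entropy (in nats) of the marginal of P over the axes listed in `keep`."""
    Q = P.sum(axis=tuple(a for a in range(P.ndim) if a not in keep))
    Q = Q[Q > 0]
    return float(-(Q * np.log(Q)).sum())

def cond_mi(P, A, B, C):
    """I(A; B | C) = H(A,C) + H(B,C) - H(A,B,C) - H(C); A, B, C are axis tuples."""
    return entropy(P, A + C) + entropy(P, B + C) - entropy(P, A + B + C) - entropy(P, C)

U, X, Xt, Y = (0,), (1,), (2,), (3,)   # axis indices for (U-tilde, X, X-tilde, Y)

def eps_a(P, R):
    """[min{I(X;X~Y|U)-R, I(X~;XY|U)-R, I(X~;Y|U)+I(X;X~Y|U)-2R}]_+ as in (17)."""
    i1 = cond_mi(P, X, Xt + Y, U)
    i2 = cond_mi(P, Xt, X + Y, U)
    i3 = cond_mi(P, Xt, Y, U) + i1
    return max(0.0, min(i1 - R, i2 - R, i3 - 2 * R))

def eps_b(P, R):
    return max(0.0, cond_mi(P, Xt, Y, U) + cond_mi(P, X, Xt + Y, U) - 2 * R)

# If everything is independent, all informations vanish and both terms are 0.
P_ind = np.full((2, 2, 2, 2), 1 / 16)
assert eps_a(P_ind, 0.1) == 0.0 and eps_b(P_ind, 0.1) == 0.0

# Y = X with X-tilde independent: I(X;X~Y|U) = log 2 while I(X~;XY|U) = 0,
# so eps_a is clipped to 0 but eps_b is positive.
P_dep = np.zeros((1, 2, 2, 2))
for x in (0, 1):
    for xt in (0, 1):
        P_dep[0, x, xt, x] = 0.25
assert eps_a(P_dep, 0.1) == 0.0
assert abs(eps_b(P_dep, 0.1) - (np.log(2) - 0.2)) < 1e-9
```

In (18) these per-type terms are combined with the divergence D(·, P_U) and optimized over the measures; the sketch only covers the inner information quantities.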

5 Discussion

The asymptotically optimal decoder is the same decoder as in [32], denoted φ_n, which is informed of u and the codebook, observes the forgery y, and operates as follows:

(ŵ_1, ŵ_2) = arg min_{w_1, w_2 ≠ w_1} |T_{x_{w_1}, x_{w_2}|uy}|,   (24)

where ties are broken arbitrarily. Since this decoder achieves the lower bound on the average probability of error of the two kinds, it can be regarded as a universal decoder for the class of channels P_n^{d_2}, wrt random CC coding. The fact that the maximizations over φ_n in (15) are performed separately for the two kinds of error exponents implies that the decoding rule can be chosen to minimize the kind of error of interest. In spite of this fact, it turns out that the same decoder is asymptotically optimal for the two kinds of errors, regardless of the encoding scheme. Since |T_{xx̃|uy}| ≐ e^{nĤ_{xx̃|uy}}, an alternative decoder, which is also asymptotically optimal, is given by

(ŵ_1, ŵ_2) = arg min_{w_1, w_2 ≠ w_1} Ĥ_{x_{w_1} x_{w_2}|uy}.   (25)

As the value of the maximin exponent is determined by the dominant joint type class (of the covertext sequence, the two fingerprinted copies of the coalition members, and the forgery), an attack channel, denoted P*_{Y|XX̃}, is introduced; it assigns equal probability to all the conditional type classes T_{y|xx̃} for which the distortion constraint is not violated, and is uniform within each such conditional type class. Namely,

P*_{Y|XX̃}(y|x, x̃) = 1{ max{d_2(x, y), d_2(x̃, y)} ≤ nD_2 } / ( c_{n,x,x̃} · |T_{y|xx̃}| ),   (26)

where c_{n,x,x̃} is the appropriate polynomial normalization factor. This channel is shown to be a worst-case attack channel wrt the error exponent of the first kind, and it is also used in the derivation of the upper bound Ẽ_b(P_U, D_1, D_2, R) on the error exponent of the second kind (in (23)). The proofs of Theorems 1 and 2 involve the following lemma (proved independently also in [28]), whose proof appears in Section 8.

Lemma 1. For all a ∈ [0, 1] and every integer M ≥ 1,

(1/2) min{1, Ma} ≤ 1 − (1 − a)^M ≤ min{1, Ma},   (27)

hence,

1 − (1 − a)^M ≐ min{1, Ma}.   (28)

This lemma implies that the union bound on the random coding error exponent is tight, and it can also be used in other contexts, such as in [13], to simplify the derivation.
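Lemma 1 is elementary and easy to check numerically. The following quick Python check (illustrative only) verifies the two-sided bound (27) on a grid of (a, M) values:

```python
def lemma1_holds(a, M):
    """Check (1/2) * min{1, M*a} <= 1 - (1-a)**M <= min{1, M*a}."""
    val = 1.0 - (1.0 - a) ** M
    cap = min(1.0, M * a)
    return 0.5 * cap <= val + 1e-12 and val <= cap + 1e-12

assert all(lemma1_holds(a, M)
           for a in (0.0, 1e-6, 0.01, 0.3, 1.0)
           for M in (1, 2, 10, 1000))
```

With a exponentially small in n and M = 2^{nR}, the equivalence 1 − (1 − a)^M ≐ min{1, Ma} says exactly that the union bound Ma (clipped at 1) captures the true exponential behavior, which is the sense in which the union bound is tight.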

The derivation performed in this paper also provides us with lower bounds on the capacities of the fingerprinting systems corresponding to the two kinds of errors, which are given by the smallest rates for which E_i(P_U, D_1, D_2, R) = 0, i = a, b. It is easily verified that this yields the following lower bounds:

C^{(1)}(P_U, D_1, D_2) ≥ max_{P_{X|U}} min_{P_{Y|XX̃}} min{ I(X̃; Y|UX), I(X; Y|UX̃), (1/2) I(XX̃; Y|U) }

C^{(2)}(P_U, D_1, D_2) ≥ max_{P_{X|U}} min_{P_{Y|XX̃}} (1/2) I(XX̃; Y|U),   (29)

where U ~ P_U, X̃ is an RV which is independent of X given U and satisfies P_{X̃|U} = P_{X|U}, and U → (X, X̃) → Y; the maximizations are over {P_{X|U} : E_{P_U × P_{X|U}} d_1(U, X) ≤ D_1}, and the minimizations are over {P_{Y|XX̃} : max{E_{P_{XX̃} × P_{Y|XX̃}} d_2(X, Y), E_{P_{XX̃} × P_{Y|XX̃}} d_2(X̃, Y)} ≤ D_2}. The difference between the lower bound on the error exponent of the second kind, E_b(P_U, D_1, D_2, R), and the lower bound on the error exponent of the MO decoder of [32] is that while in E_b(P_U, D_1, D_2, R) (see (18)) the minimization is over P_{X̃Y|ŨX} ∈ P^{d_2}(P_{ŨX}, D_2), the bound in [32] involves a minimization over P_{X̃Y|ŨX} such that P_{X̃|Ũ} = P_{X|Ũ} and min{E d_2(X, Y), E d_2(X̃, Y)} ≤ D_2. This difference stems from the different distortion constraints imposed on the attacker. In spite of the differences between the model of the ordinary MAC and the present scenario (see the discussion in the Introduction), the lower bound on the capacity C^{(1)}(P_U, D_1, D_2) bears some resemblance to the capacity region of the MAC. The capacity region of the MAC, given input distributions P_1(X), P_2(X̃), is the set of rate pairs (R_1, R_2) satisfying

0 ≤ R_1 ≤ I(X; Y|X̃);   0 ≤ R_2 ≤ I(X̃; Y|X);   0 ≤ R_1 + R_2 ≤ I(X, X̃; Y).   (30)

In the case of two users who (use the same codebook and hence) have the same rate, R_1 = R_2 = R, the corresponding upper limit is R ≤ min{ I(X; Y|X̃), I(X̃; Y|X), (1/2) I(X, X̃; Y) }, the minimizer depending on which of the above three lines is crossed first by the 45-degree line R_1 = R_2. The expression for the error exponent E_a(P_U, D_1, D_2, R) also resembles the
lower bound on the error exponent of the classical MAC (see [16]). That bound is given by

min_{V_{UXX̃Y}} [ D(V_{XX̃Y|U} ‖ P_{X|U} × P_{X̃|U} × P_{Y|XX̃} | P_U) + max{ 0, min[ I(X; X̃Y|U) − R_1, I(X̃; XY|U) − R_2, I(X̃; Y|U) + I(X; X̃Y|U) − R_1 − R_2 ] } ],   (31)

where U is some auxiliary RV (used for time sharing) with a distribution on some finite alphabet⁸, P_{X|U} and P_{X̃|U} are conditional distributions (that can be optimized) defined on P(X|U) and P(X̃|U), respectively, with X, X̃ being the input alphabets of the channel, and the minimization is over V_{UXX̃Y} ∈ P(U × X × X̃ × Y) with marginals V_{UX} = P_{UX}

⁸ It is proved in [16] that the size of the alphabet of U can be taken as 4 without loss of generality.

and V_{UX̃} = P_{UX̃}. The main differences between (31) and E_a(P_U, D_1, D_2, R) stem from the distortion constraints imposed on both parties of the fingerprinting game and from the fact that we consider a coalition of two users sharing the same codebook and thus operating at the same rate. It should be noted that while Ũ in E_a(P_U, D_1, D_2, R) represents a covertext symbol, in [16] U stands for a time-sharing symbol. A modification of the derivation performed in this paper (the lower bound on P_e^{(1)}(F_n, P_{Y|X_aX_b})) can be used to show that the bound of [16] is tight in the random coding sense.

6 Proof of Theorem 1

6.1 Proof of the Direct Part of Theorem 1

The performance of the decoder φ_n (see (24)) will serve as an upper bound on the average probability of error attainable by the optimal decoder. The encoder is a CC fingerprinting code defined by a mapping η ∈ N_n(D_1) that satisfies the condition

η(u) = η(πu)   (32)

for all u ∈ U^n and every permutation π of {1, ..., n}; in other words, the channel from U to X is strongly exchangeable. Without loss of generality,⁹ one can assume that the transmitted message indices are (1, 2). With a little abuse of notation, we shall denote by P_{M|u} the joint pmf of {X_i}_{i=1}^M conditioned on the event {U = u}, and by P_{M|u,x_1,x_2} the joint pmf of {X_i}_{i=1}^M conditioned on the event {U = u, X_1 = x_1, X_2 = x_2}. Assuming that the covertext is u, that the codewords observed by the attacker are x_1, x_2, and that y is the forgery, the probability that the proposed decoder (24) fails to decode the entire coalition is given by

Pr{error | u, x_1, x_2, y} = P_{M|u,x_1,x_2}{ ∃(i, j) ≠ (1, 2), i ≠ j : |T_{x_i,x_j|uy}| ≤ |T_{x_1,x_2|uy}| }   (33)

≤ Σ_{T_{x̃|ux_1y} : |T_{x̃|ux_1y}| ≤ |T_{x_2|ux_1y}|} P_{M|u,x_1,x_2}{ ∃k ≥ 3 : X_k ∈ T_{x̃|ux_1y} }
  + Σ_{T_{x̃|ux_2y} : |T_{x̃|ux_2y}| ≤ |T_{x_1|ux_2y}|} P_{M|u,x_1,x_2}{ ∃k ≥ 3 : X_k ∈ T_{x̃|ux_2y} }
  + Σ_{T_{x̃,x̃′|uy} : |T_{x̃,x̃′|uy}| ≤ |T_{x_1,x_2|uy}|} P_{M|u,x_1,x_2}{ ∃i ≥ 3, j ≥ 3 : (X_i, X_j) ∈ T_{x̃,x̃′|uy} }   (34)

⁹ The case w_1 = w_2 can be treated similarly.

≐ max_{T_{x̃|ux_1y} : |T_{x̃|ux_1y}| ≤ |T_{x_2|ux_1y}|} [ 1 − ( 1 − |T_{x̃|ux_1y}| / |T_η(u)| )^{M−2} ]
  + max_{T_{x̃|ux_2y} : |T_{x̃|ux_2y}| ≤ |T_{x_1|ux_2y}|} [ 1 − ( 1 − |T_{x̃|ux_2y}| / |T_η(u)| )^{M−2} ]
  + max_{T_{x̃,x̃′|uy} : |T_{x̃,x̃′|uy}| ≤ |T_{x_1,x_2|uy}|} P_{M|u,x_1,x_2}{ ∃i ≥ 3, j ≥ 3 : (X_i, X_j) ∈ T_{x̃,x̃′|uy} }

= [ 1 − ( 1 − |T_{x_2|ux_1y}| / |T_η(u)| )^{M−2} ] + [ 1 − ( 1 − |T_{x_1|ux_2y}| / |T_η(u)| )^{M−2} ]
  + max_{T_{x̃,x̃′|uy} : |T_{x̃,x̃′|uy}| ≤ |T_{x_1,x_2|uy}|} P_{M|u,x_1,x_2}{ ∃i ≥ 3, j ≥ 3 : (X_i, X_j) ∈ T_{x̃,x̃′|uy} },   (35)

where the three summands in (34) correspond to (i) an error only in Ŵ_2, (ii) an error only in Ŵ_1, and (iii) an error in both Ŵ_1 and Ŵ_2. Next, we present a key lemma that will be used to evaluate (35), as well as the lower bound. Recall the abbreviation (12).

Lemma 2. Let (u, x̃, x̃′, y) ∈ U^n × X^n × X^n × Y^n be given n-vectors such that x̃ ∈ T_η(u) and x̃′ ∈ T_η(u). The quantity

P_M ≜ P_M(x̃, x̃′, u, y) ≜ P_{M|u}{ ∃(i, j) ∈ {1, ..., M}², i ≠ j, s.t. (X_i, X_j) ∈ T_{x̃,x̃′|uy} }   (36)

satisfies

P_M ≤ min{1, C_M}   (37)

P_M ≥ C_M − q_M · C_M,   (38)

where

q_M ≜ q_M(x̃, x̃′, u, y) ≜ [ 1 − |T_{x̃|ux̃′y} ∪ T_{x̃′|ux̃y}| / |T_η(u)| ]^M   (39)

C_M ≜ C_M(x̃, x̃′, u, y) ≜ (M choose 2) · |T_{x̃,x̃′|uy}| / |T_η(u)|².   (40)

The lemma is proved in Section 8. Obviously, by (35) and (37), we have the upper bound

P_{M|u}{ ∃(i, j) ≠ (1, 2) : |T_{x_i,x_j|uy}| ≤ |T_{x̃,x̃′|uy}| }
≤ 2(1 − q_{M−2}) + max_{T_{x̄,x̄′|uy} : |T_{x̄,x̄′|uy}| ≤ |T_{x̃,x̃′|uy}|} P_{M|u}{ ∃i ≥ 3, j ≥ 3 : (X_i, X_j) ∈ T_{x̄,x̄′|uy} }
≐ 1 − q_{M−2} + max_{T_{x̄,x̄′|uy} : |T_{x̄,x̄′|uy}| ≤ |T_{x̃,x̃′|uy}|} P_{M−2}(x̄, x̄′, u, y)
≤ 1 − q_{M−2} + max_{T_{x̄,x̄′|uy} : |T_{x̄,x̄′|uy}| ≤ |T_{x̃,x̃′|uy}|} min{1, C_{M−2}(x̄, x̄′, u, y)}
= 1 − q_{M−2} + min{1, C_{M−2}}   (41)
≐ max{ 1 − q_{M−2}, min{1, C_{M−2}} },   (42)

where (41) follows since C_M(x̃, x̃′, u, y) is increasing with |T_{x̃,x̃′|uy}|. Thus,

Pr{ error of the proposed decoder | u, x_1, x_2, y } ⪅ max{ 1 − q_{M−2}(u, x_1, x_2, y), min{1, C_{M−2}(u, x_1, x_2, y)} }.   (43)

Next, denote

a ≜ a(u, x, x̃, y) ≜ |T_{x|ux̃y} ∪ T_{x̃|uxy}| / |T_{x|u}|,   (44)

and note that, since |T_η(u)| = |T_{x|u}| and q_M = (1 − a)^M, Lemma 1 provides the asymptotic behavior of 1 − q_M and implies

1 − q_M ≐ min{1, Ma},   (45)

and consequently,

max{ 1 − q_{M−2}, min{1, C_{M−2}} } ≐ max{ min{1, Ma}, min{1, C_M} } = min{ 1, max{Ma, C_M} }.   (46)

For the sake of convenience, we shall denote

α_M(P̂_{uxx̃y}) ≜ min{ 1, max{Ma, C_M} }.   (47)

We have thus established the upper bound

min_{η ∈ N_n(D_1), φ_n} max_{P_{Y|XX̃} ∈ P_n^{d_2}} P_e^{(1)}(η, φ_n, R, P_{Y|XX̃}) ⪅ min_{η ∈ N_n(D_1)} max_{P_{Y|XX̃} ∈ P_n^{d_2}} Σ_{u,x,x̃,y} Pr(u, x, x̃, y) · α_M(P̂_{uxx̃y}),   (48)

where

Pr(u, x, x̃, y) = P_U^n(u) · [ 1{x ∈ T_η(u)} · 1{x̃ ∈ T_η(u)} / |T_η(u)|² ] · P_{Y|XX̃}(y|x, x̃).   (49)

The next two lemmas, whose proofs appear in Section 8, conclude the proof of the direct part of Theorem 1. Denote by N_n^{ex}(D_1) the set of mappings η ∈ N_n(D_1) that satisfy (32).

Lemma 3. For every η ∈ N_n^{ex}(D_1) and every φ_n,

max_{P_{Y|XX̃} ∈ P_n^{d_2}} P_e^{(1)}(η, φ_n, R, P_{Y|XX̃}) ⪆ Σ_{u,x,x̃} Pr(u, x, x̃) · max_{T_{y|x,x̃} : max{d_2(x,y), d_2(x̃,y)} ≤ nD_2} α_M(P̂_{uxx̃y}),   (50)

where Pr(u, x, x̃) = P_U^n(u) · 1{x ∈ T_η(u)} · 1{x̃ ∈ T_η(u)} / |T_η(u)|².

Lemma 4

$$
\begin{aligned}
\min_{\eta\in\mathcal N^{ex}_n(D_1)}\ &\sum_{u,x,\tilde x}\Pr(u,x,\tilde x)\sum_{T_{y|x\tilde x}\,:\;\max\{d_2(x,y),\,d_2(\tilde x,y)\}\le nD_2}\ \sum_{y\in T_{y|x\tilde x}}\frac{\alpha_M(\hat P_{ux\tilde xy})}{|T_{y|x\tilde x}|} \\
&= \min_{\eta\in\mathcal N_n(D_1)}\ \sum_{u,x,\tilde x}\Pr(u,x,\tilde x)\sum_{T_{y|x\tilde x}\,:\;\max\{d_2(x,y),\,d_2(\tilde x,y)\}\le nD_2}\ \sum_{y\in T_{y|x\tilde x}}\frac{\alpha_M(\hat P_{ux\tilde xy})}{|T_{y|x\tilde x}|}\qquad(51)\\
&= \exp\left(-n\left[\min_{P_{\tilde U}}\ \max_{P_{X|\tilde U}}\ \min_{P_{\tilde XY|\tilde U,X}}\left[D(P_{\tilde UX\tilde XY},P_U) + \epsilon_a(P_{\tilde UX\tilde XY},R)\right]\right]\right),\qquad(52)
\end{aligned}
$$

where the outermost minimization is over $P_{\tilde U}\in\mathcal P_n(\mathcal U)$, the maximization is over $\{P_{X|\tilde U}\in\mathcal P_n(\mathcal X,P_{\tilde U}):\ Ed_1(\tilde U,X)\le D_1\}$, and the innermost minimization is over $\{P_{\tilde XY|\tilde UX}\in\mathcal P_n(\mathcal X\times\mathcal Y,P_{\tilde UX}):\ P_{\tilde X|\tilde U}=P_{X|\tilde U}\ \mbox{and}\ \max\{Ed_2(X,Y),\,Ed_2(\tilde X,Y)\}\le D_2\}$.

The gap between the rhs of (52) and $E_a(P_U,D_1,D_2,R)$ is only in that in (52) the optimizations are over empirical measures, while in (93) the optimizations are over sets of continuous measures. Due to continuity considerations, as $n$ tends to infinity, the rhs of (52) converges to $E_a(P_U,D_1,D_2,R)$.

6.2 Proof of the Converse Part of Theorem 1

When deriving a lower bound on the average probability of error, one can assume (a) that the attacker is constrained to use a strongly exchangeable channel, and (b) that the channel is known at the decoder's side, which can therefore implement the maximum likelihood (ML) decoding rule. The next lemma will be used to establish a lower bound on the probability of error of the ML decoder under these assumptions.

Lemma 5 For any fixed codebook and a known channel that is strongly exchangeable, the ML decoder assigns the same likelihood to two pairs of codewords that lie in the same conditional type given $(u,y)$.

Proof. Let $B_u$ denote the codebook corresponding to $u$, i.e., the collection of codewords $\{x_i(u)\}_{i=1}^M$. Let $m,m'$ be two message indices. Since $(u,B_u,y)$ are known at the decoder, the ML decoder should maximize the quantity $\Pr(m,m'|u,B_u,y)$ over all $m\in\mathcal M_n$, $m'\in\mathcal M_n$, $m\neq m'$. We have

$$\Pr(m,m'|u,B_u,y) \overset{(a)}{=} \frac{\Pr(m,m',u,B_u)\,\Pr(y|m,m',u,B_u)}{\Pr(u,B_u,y)} \overset{(b)}{=} \frac{1}{M^2}\cdot\frac{\Pr(u,B_u)}{\Pr(u,B_u,y)}\,P_{Y|X,\tilde X}(y|x_m(u),x_{m'}(u)),\qquad(53)$$

where (a) follows from Bayes' rule, and (b) follows from Bayes' rule and the facts that $\Pr(y|u,m,m',B_u) = \Pr(y|x_m(u),x_{m'}(u))$, $\Pr(m,m') = \frac{1}{M^2}$, and $(u,B_u)$ is independent of the message indices.

Hence, the ML decoder should maximize $P_{Y|X,\tilde X}(y|x_m(u),x_{m'}(u))$ over all message indices $m\neq m'$. Since $P_{Y|X,\tilde X}$ is strongly exchangeable, by definition (see (6)), if $(x_{\tilde m}(u),x_{\tilde m'}(u))\in T_{x_m(u),x_{m'}(u)|y}$, one has

$$P_{Y|X,\tilde X}(y|x_{\tilde m}(u),x_{\tilde m'}(u)) = P_{Y|X,\tilde X}(y|x_m(u),x_{m'}(u)),\qquad(54)$$

and the lemma follows.

Thus, the probability that the ML decoder fails (given $u,x_1,x_2,y$) is lower bounded as follows:

$$\Pr\left\{\mbox{error of ML decoder, known exchangeable channel}\,\big|\,ux_1x_2y\right\} \ge P_{M|u,x_1,x_2}\left\{\exists\,(i,j)\neq(1,2):\ (X_i,X_j)\in T_{x_1x_2|uy}\right\}.\qquad(55)$$

We can now use (38) to lower bound the probability of the event of interest:

$$
\begin{aligned}
P_{M|u}&\left\{\exists\,(i,j)\neq(1,2):\ (X_i,X_j)\in T_{x,\tilde x|uy}\right\} \\
&= P_{M|u}\left\{\exists\, k\ge 3:\ X_k\in T_{\tilde x|uxy}\cup T_{x|u\tilde xy},\ \mbox{or}\ \exists\,(i,j),\ i,j\ge 3:\ (X_i,X_j)\in T_{x,\tilde x|uy}\right\} \\
&\ge \max\left\{P_{M|u}\left\{\exists\, k\ge 3:\ X_k\in T_{\tilde x|uxy}\cup T_{x|u\tilde xy}\right\},\ P_{M|u}\left\{\exists\,(i,j),\ i,j\ge 3:\ (X_i,X_j)\in T_{x,\tilde x|uy}\right\}\right\} \\
&= \max\left\{1-q_{M-2},\ P_{M-2}\right\} \\
&\ge \max\left\{1-q_{M-2},\ \frac{q_{M-4}\,C_{M-2}}{1+C_{M-2}}\right\} \\
&\ge \max\left\{1-q_{M-2},\ \frac{q_{M-2}\,C_{M-2}}{1+C_{M-2}}\right\},\qquad(56)
\end{aligned}
$$

where (56) is due to the fact that $q_M$ is decreasing with $M$. Next, denote for convenience $q \triangleq q_{M-2}$ and $C \triangleq C_{M-2}$:

$$
\begin{aligned}
\max\left\{1-q,\ \frac{qC}{1+C}\right\}
&= (1-q)\,1\left\{1-q\ge\frac{qC}{1+C}\right\} + \frac{qC}{1+C}\,1\left\{1-q<\frac{qC}{1+C}\right\} \\
&= (1-q)\,1\left\{q\le\frac{1+C}{1+2C}\right\} + \frac{qC}{1+C}\,1\left\{q>\frac{1+C}{1+2C}\right\} \\
&\ge \left(1-\frac{1+C}{1+2C}\right)1\left\{q\le\frac{1+C}{1+2C}\right\} + \frac{1+C}{1+2C}\cdot\frac{C}{1+C}\,1\left\{q>\frac{1+C}{1+2C}\right\} \\
&= \frac{C}{1+2C}\,1\left\{q\le\frac{1+C}{1+2C}\right\} + \frac{C}{1+2C}\,1\left\{q>\frac{1+C}{1+2C}\right\}
= \frac{C}{1+2C},\qquad(57)
\end{aligned}
$$

and obviously,

$$\max\left\{1-q,\ \frac{qC}{1+C}\right\} \ge 1-q;\qquad(58)$$

thus, (57) and (58) imply

$$
\begin{aligned}
\max\left\{1-q_{M-2},\ \frac{q_{M-2}\,C_{M-2}}{1+C_{M-2}}\right\}
&\ge \max\left\{1-q_{M-2},\ \frac{C_{M-2}}{1+2C_{M-2}}\right\} \\
&\ge \frac13\,\max\left\{1-q_{M-2},\ \min\{1,\,C_{M-2}\}\right\}\qquad(59)\\
&\doteq \max\left\{1-q_{M-2},\ \min\{1,\,C_{M-2}\}\right\},\qquad(60)
\end{aligned}
$$

where (59) follows because either $C_{M-2}\le1$, and then $\frac{C_{M-2}}{1+2C_{M-2}}\ge\frac{C_{M-2}}{3}$, or $C_{M-2}\ge1$, and consequently $\frac{C_{M-2}}{1+2C_{M-2}}\ge\frac13$. The fact that the bounds (43) and (60) coincide on the exponential scale yields that the average probability of error (given $u,x_1,x_2,y$) of the proposed decoder (24) achieves the lower bound on the average probability of error attainable by the ML decoder corresponding to every strongly exchangeable channel that is known at the decoder. We have thus established the lower bound

$$\min_{\eta\in\mathcal N_n(D_1),\,\phi_n}\ \max_{P_{Y|X\tilde X}\in\mathcal P^{d_2}_n} P^{(1)}_e(\eta,\phi_n,R,P_{Y|X\tilde X}) \;\ge\; \min_{\eta\in\mathcal N_n(D_1)}\ \max_{P_{Y|X\tilde X}\in\mathcal P^{d_2,ex}_n}\ \sum_{u,x,\tilde x,y}\Pr(u,x,\tilde x,y)\,\alpha_M(\hat P_{ux\tilde xy}),\qquad(61)$$

where $\Pr(u,x,\tilde x,y)$ is as in (49) and

$$\mathcal P^{d_2,ex}_n = \left\{P_{Y|X\tilde X}\in\mathcal P^{d_2}_n:\ P_{Y|X\tilde X}\ \mbox{is strongly exchangeable}\right\}.\qquad(62)$$

The gap between (48) and (61) stems only from the fact that the set over which the maximization is performed in the lower bound is $\mathcal P^{d_2,ex}_n$, while in the upper bound the set is $\mathcal P^{d_2}_n$. To bridge this gap, we introduce the attack channel $P^*_{Y|X\tilde X}$ given in (26), and to conclude the proof we note that

$$
\begin{aligned}
\min_{\eta\in\mathcal N_n(D_1)}\ \max_{P_{Y|X\tilde X}\in\mathcal P^{d_2,ex}_n}\ \sum_{u,x,\tilde x,y}&\Pr(u,x,\tilde x,y)\,\alpha_M(\hat P_{ux\tilde xy})
\ge \min_{\eta\in\mathcal N_n(D_1)}\ \sum_{u,x,\tilde x,y}\Pr(u,x,\tilde x)\,P^*_{Y|X\tilde X}(y|x\tilde x)\,\alpha_M(\hat P_{ux\tilde xy}) \\
&= \min_{\eta\in\mathcal N_n(D_1)}\ \sum_{u,x,\tilde x}\Pr(u,x,\tilde x)\sum_{T_{y|x\tilde x}\,:\;\max\{d_2(x,y),\,d_2(\tilde x,y)\}\le nD_2}\ \sum_{y\in T_{y|x\tilde x}}\frac{\alpha_M(\hat P_{ux\tilde xy})}{|T_{y|x\tilde x}|};\qquad(63)
\end{aligned}
$$

applying Lemma 4 and the remark following it, the converse part of Theorem 1 follows.
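The two elementary scalar bounds invoked above, (57) and the case split behind (59), can be checked numerically; the grid sweep below is a sanity check under the stated ranges, not a proof.

```python
# Grid check of the elementary bounds used in (57) and (59): for q in
# [0, 1] and C >= 0,
#   max{1 - q, qC/(1+C)} >= C/(1+2C)      -- eq. (57)
#   C/(1+2C)            >= (1/3) min{1, C} -- the case split behind (59)
qs = [i / 200 for i in range(201)]
Cs = [j / 10 for j in range(501)]          # C in [0, 50]
worst57 = min(
    max(1 - q, q * C / (1 + C)) - C / (1 + 2 * C)
    for q in qs for C in Cs
)
worst59 = min(C / (1 + 2 * C) - min(1.0, C) / 3 for C in Cs)
assert worst57 >= -1e-12                   # (57) holds on the grid
assert worst59 >= -1e-12                   # the (59) case split holds
```

Both margins are exactly zero at the crossover points ($q = (1+C)/(1+2C)$ and $C = 1$, respectively), which is why the constants $1/3$ and $C/(1+2C)$ cannot be improved by this argument.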

7 Proof of Theorem 2

7.1 Proof of the Lower Bound of Theorem 2

Where the error of the second kind is concerned, the probability that the proposed decoder (24) fails to decode at least one member of the coalition is given by

$$
\begin{aligned}
\Pr\left\{\mbox{error of the proposed decoder}\,\big|\,ux_1x_2y\right\}
&= P_{M|u,x_1,x_2}\left\{\exists\, i\ge 3,\ j\ge 3:\ T_{x_i,x_j|uy}\succeq T_{x_1,x_2|uy}\right\} \\
&= P_{M|u,x_1,x_2}\left\{\exists\, i\ge 3,\ j\ge 3:\ (X_i,X_j)\in\bigcup_{T_{x\tilde x|uy}\,:\;T_{x\tilde x|uy}\succeq T_{x_1,x_2|uy}} T_{x\tilde x|uy}\right\} \\
&\doteq \max_{T_{x\tilde x|uy}\,:\;T_{x\tilde x|uy}\succeq T_{x_1,x_2|uy}} P_{M|u,x_1,x_2}\left\{\exists\, i\ge 3,\ j\ge 3:\ (X_i,X_j)\in T_{x\tilde x|uy}\right\} \\
&= \max_{T_{x\tilde x|uy}\,:\;T_{x\tilde x|uy}\succeq T_{x_1,x_2|uy}} P_{M-2}(ux\tilde xy)\qquad(64)\\
&\le \max_{T_{x\tilde x|uy}\,:\;T_{x\tilde x|uy}\succeq T_{x_1,x_2|uy}} \min\{1,\,C_{M-2}(ux\tilde xy)\} \\
&= \min\{1,\,C_{M-2}(ux_1x_2y)\},\qquad(65)
\end{aligned}
$$

where the last step follows since $C_M(ux\tilde xy)$ is increasing with $|T_{x\tilde x|uy}|$. Now we use a derivation similar to the one performed in Lemmas 3 and 4 (replacing $\alpha_M(\hat P_{ux\tilde xy})$ defined in (47) by $\tilde\alpha_M(\hat P_{ux\tilde xy}) = \min\{1,\,C_M(ux_1x_2y)\}$), and this yields

$$\max_{\phi_n}\ e^{(2)}_n(P_U,D_2,\eta,\phi_n,R)\qquad(66)$$

$$\ge \min_{\hat P_{ux\tilde xy}\in\mathcal P^{(n)}_{d_2}(\eta,D_2)}\left[D\big(\hat P_{ux\tilde xy}\,\big\|\,P_U\,\hat P_{x|u}\,\hat P_{\tilde x|u}\,\hat P_{y|x\tilde x}\big) + \epsilon_b(\hat P_{ux\tilde xy},R)\right].\qquad(67)$$

Since $\eta\in\mathcal N_n(D_1)$, the mapping $\eta$ is continuous, and thus the rhs of the above inequality converges to $E_b(P_U,D_2,\eta,R)$. This concludes the proof of the lhs of (23).

7.2 Proof of the Upper Bound of Theorem 2

Similarly to the argumentation used in the analysis of the error of the first kind, when deriving a lower bound on the probability of error of the second kind, one can assume (a)

that the attacker is constrained to use strongly exchangeable channels, and (b) that the channel is known at the decoder's side, which can implement an optimal decoding rule with respect to the error of the second kind. We next characterize this optimal decoder. Let $B_u$ denote the codebook corresponding to $u$. First, note that given $(u,y)$, when the optimal decoder for the error of the second kind is used, if there exists another pair $(x_i,x_j)$, where $i\ge3$ and $j\ge3$, such that $(x_i,x_j)\in T_{x_1,x_2|u,y}$, then an error occurs with high probability. Indeed, the optimal decoder for the error of the second kind minimizes

$$\Pr\left\{m_1\ \mbox{not sent},\ m_2\ \mbox{not sent}\,\big|\,u,B_u,y\right\} = 1 - \Pr\left\{m_1\ \mbox{sent or}\ m_2\ \mbox{sent}\,\big|\,u,B_u,y\right\}\qquad(68)$$

among all $m_1,m_2\in\mathcal M_n$, $m_1\neq m_2$, or, alternatively, maximizes

$$\Pr\{m_1\ \mbox{sent}\,|\,u,B_u,y\} + \Pr\{m_2\ \mbox{sent}\,|\,u,B_u,y\} - \Pr\{m_1\ \mbox{sent},\ m_2\ \mbox{sent}\,|\,u,B_u,y\}.\qquad(69)$$

Multiplying by $\Pr(y|u,B_u)/\Pr(m_1\ \mbox{sent}|u,B_u)$ and using

$$\Pr\{m_1\ \mbox{sent}\,|\,u,B_u\} = \Pr\{m_2\ \mbox{sent}\,|\,u,B_u\} = \frac{2}{M}-\frac{1}{M^2} = \frac{2M-1}{M^2}\qquad(70)$$

and

$$\Pr\{m_1\ \mbox{sent},\ m_2\ \mbox{sent}\,|\,u,B_u\} = \frac{2}{M^2},\qquad(71)$$

we obtain that the optimal decoder equivalently maximizes

$$\Pr(y|u,B_u,m_1\ \mbox{sent}) + \Pr(y|u,B_u,m_2\ \mbox{sent}) - \frac{2}{2M-1}\,\Pr(y|u,B_u,m_1\ \mbox{sent},\ m_2\ \mbox{sent}),$$

which, up to positive factors that do not affect the maximization, equals

$$
\begin{aligned}
f_{u,B_u,y}(x_{m_1},x_{m_2}) \triangleq\ &\frac1M\sum_m\left[P_{Y|X\tilde X}(y|x_{m_1},x_m) + P_{Y|X\tilde X}(y|x_m,x_{m_1})\right] \\
&+ \frac1M\sum_m\left[P_{Y|X\tilde X}(y|x_{m_2},x_m) + P_{Y|X\tilde X}(y|x_m,x_{m_2})\right] \\
&- \frac{1}{2M-1}\left[P_{Y|X\tilde X}(y|x_{m_1},x_{m_2}) + P_{Y|X\tilde X}(y|x_{m_2},x_{m_1})\right].
\end{aligned}
$$

Therefore, even for the optimal decoder (with respect to the average probability of error of the second kind) one has

$$
\begin{aligned}
\Pr\{\mbox{error}\,|\,ux_1x_2y\}
&= P_{M|u,x_1,x_2}\left\{\exists\, i\ge3,\ j\ge3:\ f_{u,B_u,y}(x_i,x_j)\ge f_{u,B_u,y}(x_1,x_2)\right\} \\
&\ge P_{M|u,x_1,x_2}\left\{\exists\, i\ge3,\ j\ge3:\ (x_i,x_j)\in T_{x_1,x_2|uy},\ f_{u,B_u,y}(x_i,x_j)\ge f_{u,B_u,y}(x_1,x_2)\right\}.\qquad(72)
\end{aligned}
$$

Next, by symmetry, we have

$$
\begin{aligned}
P_{M|u,x_1,x_2}&\left\{\exists\, i\ge3,\ j\ge3:\ (x_i,x_j)\in T_{x_1,x_2|uy},\ f_{u,B_u,y}(x_i,x_j)\ge f_{u,B_u,y}(x_1,x_2)\right\} \\
&= P_{M|u,x_1,x_2}\left\{\exists\, i\ge3,\ j\ge3:\ (x_i,x_j)\in T_{x_1,x_2|uy},\ f_{u,B_u,y}(x_i,x_j)\le f_{u,B_u,y}(x_1,x_2)\right\}.\qquad(73)
\end{aligned}
$$

To realize this, let $B_u = \{x_m(u)\}_{m=1}^M$ be a codebook and let $i,j\ge3$ be a pair of indices such that $(x_i,x_j)\in T_{x_1,x_2|uy}$ and $f_{u,B_u,y}(x_i,x_j)\le f_{u,B_u,y}(x_1,x_2)$. Let $\pi$ be a permutation such that $(\pi x_i,\pi x_j,\pi u,\pi y) = (x_1,x_2,u,y)$ (such a permutation exists since $(x_i,x_j)\in T_{x_1,x_2|uy}$). Consider the codebook $B'_u = \{x_1(u),x_2(u),x_i(u),x_j(u)\}\cup\{\pi x_m(u)\}_{m\notin\{1,2,i,j\}}$. Obviously, by definition of $\pi$, we have $f_{u,B'_u,y}(x_i,x_j)\ge f_{u,B'_u,y}(x_1,x_2)$, and since

$$P_{M|u,x_1,x_2}\{B_u\} = P_{M|u,x_1,x_2}\{B'_u\},\qquad(74)$$

the symmetry argument holds. Eq. (73) yields

$$
\begin{aligned}
P_{M|u,x_1,x_2}&\left\{\exists\, i\ge3,\ j\ge3:\ (x_i,x_j)\in T_{x_1,x_2|uy},\ f_{u,B_u,y}(x_i,x_j)\ge f_{u,B_u,y}(x_1,x_2)\right\} \\
&\ge \frac12\, P_{M|u,x_1,x_2}\left\{\exists\, i\ge3,\ j\ge3:\ (x_i,x_j)\in T_{x_1,x_2|uy}\right\}.\qquad(75)
\end{aligned}
$$

Hence, (72) and (75) yield the lower bound

$$
\begin{aligned}
\Pr\{\mbox{error}\,|\,ux_1x_2y\}
&\ge \frac12\, P_{M|u,x_1,x_2}\left\{\exists\, i\ge3,\ j\ge3:\ (x_i,x_j)\in T_{x_1,x_2|uy}\right\} \\
&= \frac12\, P_{M-2}(ux_1x_2y) \\
&\ge \frac12\cdot\frac{q_{M-4}(ux_1x_2y)\,C_{M-2}(ux_1x_2y)}{1+C_{M-2}(ux_1x_2y)} \\
&\ge \frac14\, q_{M-4}(ux_1x_2y)\,\min\{1,\,C_{M-2}(ux_1x_2y)\} \\
&\doteq q_M(ux_1x_2y)\,\min\{1,\,C_M(ux_1x_2y)\},\qquad(76)
\end{aligned}
$$

where the first inequality follows from (38), and the second inequality follows because either $C_M\le1$, and then $\frac{C_M}{1+C_M}\ge\frac{C_M}{2}$, or $C_M\ge1$, and consequently $\frac{C_M}{1+C_M}\ge\frac12$. Now, recall that $q_M(ux_1x_2y) = [1-a]^M$, where

$$a \doteq \frac{\max\left\{|T_{x_1|ux_2y}|,\ |T_{x_2|ux_1y}|\right\}}{|T_{x|u}|} = e^{-n\min\left\{\hat I_{x_1;x_2y|u},\,\hat I_{x_2;x_1y|u}\right\}},$$

so whenever $R\le\min\{\hat I_{x_1;x_2y|u},\,\hat I_{x_2;x_1y|u}\}$, we have $Ma\le1$ and hence $q_M\doteq e^{-Ma}\ge e^{-1}\doteq1$. Hence, using an equivalent of Lemma 3, the expression for the error exponent resulting from (76) is upper bounded (on the exponential scale) by

$$\min_{\hat P_{ux\tilde xy}\in\mathcal P^{(n)}_{d_2}(\eta,D_2),\ R\le\min\{\hat I_{x;\tilde xy|u},\,\hat I_{\tilde x;xy|u}\}}\left[D\big(\hat P_{ux\tilde xy}\,\big\|\,P_U\,\eta(\hat P_u)\,\hat P_{y|x\tilde x}\big) + \left[\hat I_{\tilde x;y|u} + \hat I_{x;\tilde xy|u} - 2R\right]_+\right],\qquad(77)$$

which differs from $\tilde E_b(P_U,D_2,\eta,R)$ (see (22)) only in that the minimization is over empirical measures of order $n$ rather than over the continuum. Since $\eta$ is a continuous mapping, the above expression converges to $\tilde E_b(P_U,D_2,\eta,R)$. This concludes the proof of the rhs of (23). Now, by definition of $R_0(\eta,D_2)$ (following eq. (22)), $E_b(P_U,D_2,\eta,R) = \tilde E_b(P_U,D_2,\eta,R)$ for all $R\in[0,R_0(\eta,D_2)]$, and this concludes the proof of Theorem 2.

8 Proofs of Lemmas

Proof of Lemma 1. The rhs of (27) follows trivially. As for the lhs, it is easy to show that for all $a\in[0,1]$,

$$1-(1-a)^M \ge \frac{Ma}{1+Ma}.\qquad(78)$$

To see this, note that if $(1-a)^M \le \frac{1}{1+Ma}$, we have $1-(1-a)^M \ge \frac{Ma}{1+Ma}$, and if $(1-a)^M \ge \frac{1}{1+Ma}$, we have $(1-a)^{M-1}\ge\frac{1}{(1-a)(1+Ma)}$, and thus

$$1-(1-a)^M \ge aM(1-a)^{M-1} \ge \frac{aM}{(1-a)(1+Ma)} \ge \frac{Ma}{1+Ma}.$$

The lemma follows since

$$\frac{Ma}{1+Ma} \ge \frac12\,\min\{1,\,Ma\}.\qquad(79)$$

Proof of Lemma 2. The first part of the lemma, (37), follows trivially because $\binom{M}{2}$ is the number of possibilities of choosing a pair among the $M$ vectors, and $\frac{|T_{x,\tilde x|uy}|}{|T_{x|u}|^2}$ is the probability that two iid vectors uniformly distributed over $T_{x|u}$ lie within $T_{x,\tilde x|uy}$. In order to prove (38), note that

$$
\begin{aligned}
P_M &\ge P_{M|u}\left\{\mbox{there exists a single pair}\ (i,j)\ \mbox{s.t.}\ (X_i,X_j)\in T_{x,\tilde x|uy}\right\} \\
&= \binom{M}{2}\, P_{M|u}\left\{(X_1,X_2)\ \mbox{is the single pair in}\ T_{x,\tilde x|uy}\right\} \\
&\ge \binom{M}{2}\, P_{M|u}\left\{(X_1,X_2)\in T_{x,\tilde x|uy},\ X_i\notin T_{\tilde x|uxy}\cup T_{x|u\tilde xy}\ \forall i\ge3,\ (X_i,X_j)\notin T_{x,\tilde x|uy}\ \forall i,j\ge3\right\} \\
&\overset{(a)}{=} C_M\, P_{M|u}\left\{X_i\notin T_{\tilde x|uxy}\cup T_{x|u\tilde xy}\ \forall i\ge3,\ (X_i,X_j)\notin T_{x,\tilde x|uy}\ \forall i,j\ge3\right\} \\
&\overset{(b)}{\ge} C_M\left[P_{M|u}\left\{X_i\notin T_{\tilde x|uxy}\cup T_{x|u\tilde xy}\ \forall i\ge3\right\} + P_{M|u}\left\{(X_i,X_j)\notin T_{x,\tilde x|uy}\ \forall i,j\ge3\right\} - 1\right] \\
&= C_M\left[q_{M-2} + (1-P_{M-2}) - 1\right]
= C_M\left[q_{M-2} - P_{M-2}\right]
\ge C_M\left[q_{M-2} - P_M\right],\qquad(80)
\end{aligned}
$$

where (a) is because the codewords are drawn independently and because, if, for example, there exists $i\ge3$ such that $X_i\in T_{\tilde x|uxy}$, then $T_{x_1,x_i|uy} = T_{x_1,x_2|uy} = T_{x,\tilde x|uy}$; (b) follows since for two events $A,B$ one has $1\ge P(A\cup B)\ge P(A)+P(B)-P(A\cap B)$, thus $P(A\cap B)\ge P(A)+P(B)-1$; and the last step holds since, by definition, $P_M$ is nondecreasing with $M$. Rearranging the resulting inequality $P_M\ge C_M[q_{M-2}-P_M]$ yields (38).

Proof of Lemma 3. To establish the proof, we shall show that the rhs of (48) is upper bounded (on the exponential scale) by the rhs of (50), and that the rhs of (61) is lower bounded by the rhs of (50). Let $P_{Y|X\tilde X}$ be a given attack channel, which is a member of $\mathcal P^{d_2}_n$, and let $\pi$ be one of the $n!$ permutations of $\{1,\dots,n\}$. Denote by $P^\pi_{Y|X\tilde X}$ the channel defined by

$$P^\pi_{Y|X\tilde X}(y|x\tilde x) = P_{Y|X\tilde X}(\pi y|\pi x\,\pi\tilde x),\qquad(81)$$

where $\pi x$ designates the sequence $x$ permuted according to $\pi$. For a given watermarking channel $P_{X|U}\in\mathcal P^{d_1}_n$, denote

$$L(P_{Y|X\tilde X}) \triangleq \sum_u P^n_U(u)\,\frac{1}{|T_\eta(u)|^2}\sum_{x,\tilde x\in T_\eta(u)}\ \sum_y P_{Y|X\tilde X}(y|x\tilde x)\,\alpha_M(\hat P_{ux\tilde xy}).\qquad(82)$$

Since $P^n_U$ is memoryless, the encoder satisfies (32), $\alpha_M(\hat P_{ux\tilde xy})$ depends solely on the type class $T_{ux\tilde xy}$, and $L(P_{Y|X\tilde X})$ is an affine functional of $P_{Y|X\tilde X}$, we have

$$L\left(P_{Y|X\tilde X}\right) = L\left(P^\pi_{Y|X\tilde X}\right) = L\left(\frac{1}{n!}\sum_\pi P^\pi_{Y|X\tilde X}\right).\qquad(83)$$

Note that $\frac{1}{n!}\sum_\pi P^\pi_{Y|X\tilde X}$ is a strongly exchangeable channel, since if $T_{x'\tilde x'y'} = T_{x\tilde xy}$, there exists a permutation $\tilde\pi$ such that $(x',\tilde x',y') = (\tilde\pi x,\tilde\pi\tilde x,\tilde\pi y)$, and thus $\sum_\pi P_{Y|X\tilde X}(\pi y|\pi x\,\pi\tilde x) = \sum_\pi P_{Y|X\tilde X}(\pi y'|\pi x'\,\pi\tilde x')$. Thus, we can write

$$\frac{1}{n!}\sum_\pi P^\pi_{Y|X\tilde X}(y|x\tilde x) = \frac{P_{Y|X\tilde X}(T_{y|x\tilde x}|x\tilde x)}{|T_{y|x\tilde x}|},$$

and (83) implies

$$L\left(P_{Y|X\tilde X}\right) = L\left(\frac{P_{Y|X\tilde X}(T_{y|x\tilde x}|x\tilde x)}{|T_{y|x\tilde x}|}\right).\qquad(84)$$

Now, observe that (10) implies

$$P_{Y|X\tilde X}(T_{y|x\tilde x}|x\tilde x) \le 1\left\{\max\{d_2(x,y),\,d_2(\tilde x,y)\}\le nD_2\right\},\qquad(85)$$

and since $L(P_{Y|X\tilde X})$ is affine in $P_{Y|X\tilde X}$, we have

$$L\left(P_{Y|X\tilde X}\right) \le L\left(P^*_{Y|X\tilde X}\right),\qquad(86)$$

where $P^*_{Y|X\tilde X}$ is defined in (26). Hence, the rhs of (48) is upper bounded (on the exponential scale) by the rhs of (50).
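The symmetrization step (81)-(83) can be illustrated concretely: averaging any channel over all $n!$ coordinate permutations produces a channel invariant under a common permutation of $(x,\tilde x,y)$, i.e., one that depends on the triple only through its joint type. The binary toy below (the channel itself is an arbitrary illustrative construction, not one from the paper) demonstrates this.

```python
from itertools import permutations, product

# Averaging over all n! coordinate permutations, as in (83), yields a
# strongly exchangeable channel. Binary toy with n = 3; the raw channel
# below is deliberately position-dependent, hence NOT exchangeable itself.
n = 3
seqs = list(product((0, 1), repeat=n))

def w(y, x, xt):
    # Unnormalized, position-weighted likelihood (arbitrary toy choice).
    v = 1.0
    for k in range(n):
        v *= (2.0 + k) if y[k] == (x[k] ^ xt[k]) else 1.0
    return v

Z = {(x, xt): sum(w(y, x, xt) for y in seqs) for x in seqs for xt in seqs}

def P(y, x, xt):
    # A legitimate channel: normalized over y for every input pair.
    return w(y, x, xt) / Z[(x, xt)]

PERMS = list(permutations(range(n)))

def Pbar(y, x, xt):
    # The permutation-averaged channel (1/n!) sum_pi P(pi y | pi x, pi xt).
    return sum(
        P(tuple(y[k] for k in pi), tuple(x[k] for k in pi),
          tuple(xt[k] for k in pi))
        for pi in PERMS
    ) / len(PERMS)

# Strong exchangeability: Pbar is unchanged when one common permutation
# sigma is applied to x, xt, and y simultaneously.
x, xt, y = (0, 1, 1), (1, 1, 0), (0, 0, 1)
ref = Pbar(y, x, xt)
for sigma in PERMS:
    sx = tuple(x[k] for k in sigma)
    sxt = tuple(xt[k] for k in sigma)
    sy = tuple(y[k] for k in sigma)
    assert abs(Pbar(sy, sx, sxt) - ref) < 1e-12
```

The invariance is exact, not approximate: composing the averaging permutation $\pi$ with a fixed $\sigma$ merely reindexes the sum over the symmetric group.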

Proof of Lemma 4. First note that

$$
\begin{aligned}
\sum_{u,x,\tilde x}&\Pr(u,x,\tilde x)\sum_{T_{y|x\tilde x}\,:\;\max\{d_2(x,y),\,d_2(\tilde x,y)\}\le nD_2}\ \sum_{y\in T_{y|x\tilde x}}\frac{\alpha_M(\hat P_{ux\tilde xy})}{|T_{y|x\tilde x}|} \\
&= \sum_u P^n_U(u)\,\frac{1}{|T_\eta(u)|^2}\sum_{x,\tilde x\in T_\eta(u)}\ \sum_{T_{y|x\tilde x}\,:\;\max\{d_2(x,y),\,d_2(\tilde x,y)\}\le nD_2}\ \sum_{y\in T_{y|x\tilde x}}\frac{\alpha_M(\hat P_{ux\tilde xy})}{|T_{y|x\tilde x}|} \\
&= \sum_u P^n_U(u)\,\frac{1}{|T_\eta(u)|}\sum_{x\in T_\eta(u)}\ \sum_{T_{\tilde x|xu}}\frac{|T_{\tilde x|ux}|}{|T_\eta(u)|}\ \sum_{T_{y|ux\tilde x}\,:\;\max\{d_2(x,y),\,d_2(\tilde x,y)\}\le nD_2}\frac{|T_{y|ux\tilde x}|}{|T_{y|x\tilde x}|}\,\alpha_M(\hat P_{ux\tilde xy}).\qquad(87)
\end{aligned}
$$

Taking the minimum over $\eta\in\mathcal N_n(D_1)$, one notices that the minimizer satisfies $\eta(u') = \eta(u)$ whenever $u'\in T_u$, because the quantity

$$\sum_{T_{\tilde x|xu}}\frac{|T_{\tilde x|ux}|}{|T_\eta(u)|}\ \sum_{T_{y|ux\tilde x}\,:\;\max\{d_2(x,y),\,d_2(\tilde x,y)\}\le nD_2}\frac{|T_{y|ux\tilde x}|}{|T_{y|x\tilde x}|}\,\alpha_M(\hat P_{ux\tilde xy})$$

depends only on the joint type class of $(u,x)$; thus the minimizing $\eta\in\mathcal N_n(D_1)$ belongs to $\mathcal N^{ex}_n(D_1)$, which proves (51). Next, note that since $\eta\in\mathcal N_n(D_1)$ is a mapping from $u$ to $\mathcal P(\mathcal X|\mathcal U)$, one can switch the order between the summation over $u\in\mathcal U^n$ and the minimization over $\eta\in\mathcal N_n(D_1)$, i.e.,

$$
\begin{aligned}
\min_{\eta\in\mathcal N_n(D_1)}\ &\sum_u P^n_U(u)\sum_{x\in T_\eta(u)}\frac{1}{|T_\eta(u)|}\sum_{T_{\tilde x|xu}}\frac{|T_{\tilde x|ux}|}{|T_\eta(u)|}\sum_{T_{y|ux\tilde x}\,:\;\max\{d_2(x,y),\,d_2(\tilde x,y)\}\le nD_2}\frac{|T_{y|ux\tilde x}|}{|T_{y|x\tilde x}|}\,\alpha_M(\hat P_{ux\tilde xy}) \\
&= \sum_u P^n_U(u)\ \min_{\eta(u)\,:\;d_1(u,x)\le nD_1}\ \sum_{x\in T_\eta(u)}\frac{1}{|T_\eta(u)|}\sum_{T_{\tilde x|xu}}\frac{|T_{\tilde x|ux}|}{|T_\eta(u)|}\sum_{T_{y|ux\tilde x}\,:\;\max\{d_2(x,y),\,d_2(\tilde x,y)\}\le nD_2}\frac{|T_{y|ux\tilde x}|}{|T_{y|x\tilde x}|}\,\alpha_M(\hat P_{ux\tilde xy}).\qquad(88)
\end{aligned}
$$
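The exponential evaluation of expressions like (87)-(88) rests on two standard method-of-types facts: the cardinality of a type class grows as $e^{nH}$ and its probability under a memoryless source decays as $e^{-nD}$, both up to polynomial factors. The snippet below (a numeric illustration for a binary alphabet, not part of the proof) shows how quickly the normalized logarithms approach these exponents:

```python
from math import comb, log

# Two standard method-of-types facts for a binary alphabet, type q = k/n:
#   (1/n) log |T_P|    = H(q)      + O((log n)/n)
#   -(1/n) log Pr(T_P) = D(q || p) + O((log n)/n)   under i.i.d. Bernoulli(p).
def H(q):
    return 0.0 if q in (0.0, 1.0) else -q * log(q) - (1 - q) * log(1 - q)

def D(q, p):
    s = 0.0
    if q > 0.0:
        s += q * log(q / p)
    if q < 1.0:
        s += (1 - q) * log((1 - q) / (1 - p))
    return s

n, k, p = 2000, 600, 0.5
q = k / n
size_exp = log(comb(n, k)) / n                 # (1/n) log |T_P|
prob_exp = -(log(comb(n, k)) + k * log(p) + (n - k) * log(1 - p)) / n
assert abs(size_exp - H(q)) < 0.01             # ~ (log n)/n correction
assert abs(prob_exp - D(q, p)) < 0.01
```

At $n = 2000$ the residual is already on the order of $10^{-3}$, consistent with the $O((\log n)/n)$ polynomial corrections that the $\doteq$ notation suppresses.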

Now we use the method of types to evaluate the rhs expression on the exponential scale:

$$
\begin{aligned}
\sum_u &P^n_U(u)\ \min_{\eta(u)\,:\;d_1(u,x)\le nD_1}\ \sum_{x\in T_\eta(u)}\frac{1}{|T_\eta(u)|}\sum_{T_{\tilde x|xu}}\frac{|T_{\tilde x|ux}|}{|T_\eta(u)|}\sum_{T_{y|ux\tilde x}\,:\;\max\{d_2(x,y),\,d_2(\tilde x,y)\}\le nD_2}\frac{|T_{y|ux\tilde x}|}{|T_{y|x\tilde x}|}\,\alpha_M(\hat P_{ux\tilde xy})\qquad(89)\\
&\doteq \max_{\hat P_u}\ e^{-nD(\hat P_u\|P_U)}\ \min_{\eta(u)\,:\;d_1(u,x)\le nD_1}\ \max_{T_{\tilde x|ux}}\ e^{-n\hat I_{x;\tilde x|u}}\ \max_{\hat P_{y|ux\tilde x}\,:\;\max\{d_2(x,y),\,d_2(\tilde x,y)\}\le nD_2}\ e^{-n\hat I_{u;y|x\tilde x}}\,\alpha_M(\hat P_{ux\tilde xy}) \\
&= \exp\left(-n\ \min\ \max\ \min\left[D(\hat P_u\|P_U) + \hat I_{x;\tilde x|u} + \hat I_{u;y|x\tilde x} - \frac1n\log\alpha_M(\hat P_{ux\tilde xy})\right]\right),
\end{aligned}
$$

where the outermost minimization is over $\hat P_u\in\mathcal P_n(\mathcal U)$, the maximization is over $\hat P_{x|u}\in\mathcal P_n(\mathcal X,\hat P_u)$, and the inner minimization is over $\hat P_{\tilde xy|ux}\in\mathcal P_n(\mathcal X\times\mathcal Y,\hat P_{ux})$ such that the marginal distributions satisfy $\hat P_{\tilde x|u} = \hat P_{x|u}$ and $\max\{d_2(x,y),\,d_2(\tilde x,y)\}\le nD_2$. The last step follows from the fact that $\hat P_{\tilde x|u} = \hat P_{x|u} = \eta(\hat P_u)$. From (19), we have

$$D(\hat P_u\|P_U) + \hat I_{x;\tilde x|u} + \hat I_{u;y|x\tilde x} = D\big(\hat P_{ux\tilde xy}\,\big\|\,P_U\,\hat P_{x|u}\,\hat P_{\tilde x|u}\,\hat P_{y|x\tilde x}\big)\qquad(90)$$

$$= D(\hat P_{ux\tilde xy},P_U).\qquad(91)$$

Recall the definition of $a$ in (44), and note that

$$a \doteq \max\left\{\frac{|T_{\tilde x|uxy}|}{|T_\eta(u)|},\ \frac{|T_{x|u\tilde xy}|}{|T_\eta(u)|}\right\} = e^{-n\min\{\hat I_{x;\tilde xy|u},\,\hat I_{\tilde x;xy|u}\}}.$$

We also have, by (40), $C_M \doteq e^{n(2R-\hat I_{\tilde x;y|u}-\hat I_{x;\tilde xy|u})}$. Hence, by (47), we have

$$
\begin{aligned}
-\frac1n\log\alpha_M(\hat P_{ux\tilde xy})
&= -\frac1n\log\,\min\left\{1,\ \max\{Ma,\,C_M\}\right\} \\
&= \left[\min\left\{\min\{\hat I_{x;\tilde xy|u},\,\hat I_{\tilde x;xy|u}\}-R,\ \ \hat I_{\tilde x;y|u}+\hat I_{x;\tilde xy|u}-2R\right\}\right]_+ \\
&= \min\left\{\left[\hat I_{x;\tilde xy|u}-R\right]_+,\ \left[\hat I_{\tilde x;xy|u}-R\right]_+,\ \left[\hat I_{\tilde x;y|u}+\hat I_{x;\tilde xy|u}-2R\right]_+\right\}.\qquad(92)
\end{aligned}
$$

Thus, (88)-(92) imply

$$\min_{\eta\in\mathcal N_n(D_1)}\ \sum_{u,x,\tilde x}\Pr(u,x,\tilde x)\sum_{T_{y|x\tilde x}\,:\;\max\{d_2(x,y),\,d_2(\tilde x,y)\}\le nD_2}\ \sum_{y\in T_{y|x\tilde x}}\frac{\alpha_M(\hat P_{ux\tilde xy})}{|T_{y|x\tilde x}|} = \exp\left(-n\left[\min_{P_{\tilde U}}\ \max_{P_{X|\tilde U}}\ \min_{P_{\tilde XY|\tilde U,X}}\left[D(P_{\tilde UX\tilde XY},P_U) + \epsilon_a(P_{\tilde UX\tilde XY},R)\right]\right]\right),\qquad(93)$$

where the outermost minimization is over $P_{\tilde U}\in\mathcal P_n(\mathcal U)$, the maximization is over $\{P_{X|\tilde U}\in\mathcal P_n(\mathcal X,P_{\tilde U}):\ Ed_1(\tilde U,X)\le D_1\}$, and the innermost minimization is over $\{P_{\tilde XY|\tilde U,X}\in\mathcal P_n(\mathcal X\times\mathcal Y,P_{\tilde U,X}):\ P_{\tilde X|\tilde U}=P_{X|\tilde U}\ \mbox{and}\ \max\{Ed_2(X,Y),\,Ed_2(\tilde X,Y)\}\le D_2\}$.
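Finally, the two-sided bound of Lemma 1, established in (78)-(79) and used throughout Sections 6 and 7, is straightforward to confirm numerically; the sweep below is a sanity check over a grid, not a proof.

```python
# Numeric check of the bounds behind Lemma 1, eqs. (78)-(79): for a in
# [0, 1] and positive integer M,
#   (1/2) min{1, Ma} <= Ma/(1+Ma) <= 1 - (1-a)^M <= min{1, Ma}.
for M in (1, 2, 5, 20, 100):
    for i in range(101):
        a = i / 100
        lower = M * a / (1 + M * a)
        value = 1 - (1 - a) ** M
        assert 0.5 * min(1.0, M * a) <= lower + 1e-12        # (79)
        assert lower <= value + 1e-12                        # (78)
        assert value <= min(1.0, M * a) + 1e-12              # union bound
```

Together these inequalities give $1-(1-a)^M \doteq \min\{1, Ma\}$, which is exactly how Lemma 1 enters in (45).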



More information

Optimal Power Control in Decentralized Gaussian Multiple Access Channels

Optimal Power Control in Decentralized Gaussian Multiple Access Channels 1 Optimal Power Control in Decentralized Gaussian Multiple Access Channels Kamal Singh Department of Electrical Engineering Indian Institute of Technology Bombay. arxiv:1711.08272v1 [eess.sp] 21 Nov 2017

More information

Superposition Encoding and Partial Decoding Is Optimal for a Class of Z-interference Channels

Superposition Encoding and Partial Decoding Is Optimal for a Class of Z-interference Channels Superposition Encoding and Partial Decoding Is Optimal for a Class of Z-interference Channels Nan Liu and Andrea Goldsmith Department of Electrical Engineering Stanford University, Stanford CA 94305 Email:

More information

EE5139R: Problem Set 7 Assigned: 30/09/15, Due: 07/10/15

EE5139R: Problem Set 7 Assigned: 30/09/15, Due: 07/10/15 EE5139R: Problem Set 7 Assigned: 30/09/15, Due: 07/10/15 1. Cascade of Binary Symmetric Channels The conditional probability distribution py x for each of the BSCs may be expressed by the transition probability

More information

Capacity of a channel Shannon s second theorem. Information Theory 1/33

Capacity of a channel Shannon s second theorem. Information Theory 1/33 Capacity of a channel Shannon s second theorem Information Theory 1/33 Outline 1. Memoryless channels, examples ; 2. Capacity ; 3. Symmetric channels ; 4. Channel Coding ; 5. Shannon s second theorem,

More information

Coding on Countably Infinite Alphabets

Coding on Countably Infinite Alphabets Coding on Countably Infinite Alphabets Non-parametric Information Theory Licence de droits d usage Outline Lossless Coding on infinite alphabets Source Coding Universal Coding Infinite Alphabets Enveloppe

More information

A Formula for the Capacity of the General Gel fand-pinsker Channel

A Formula for the Capacity of the General Gel fand-pinsker Channel A Formula for the Capacity of the General Gel fand-pinsker Channel Vincent Y. F. Tan Institute for Infocomm Research (I 2 R, A*STAR, Email: tanyfv@i2r.a-star.edu.sg ECE Dept., National University of Singapore

More information

Upper Bounds on the Capacity of Binary Intermittent Communication

Upper Bounds on the Capacity of Binary Intermittent Communication Upper Bounds on the Capacity of Binary Intermittent Communication Mostafa Khoshnevisan and J. Nicholas Laneman Department of Electrical Engineering University of Notre Dame Notre Dame, Indiana 46556 Email:{mhoshne,

More information

A Relationship between Quantization and Distribution Rates of Digitally Fingerprinted Data

A Relationship between Quantization and Distribution Rates of Digitally Fingerprinted Data A Relationship between Quantization and Distribution Rates of Digitally Fingerprinted Data Damianos Karakos karakos@eng.umd.edu Adrian Papamarcou adrian@eng.umd.edu Department of Electrical and Computer

More information

Variable Length Codes for Degraded Broadcast Channels

Variable Length Codes for Degraded Broadcast Channels Variable Length Codes for Degraded Broadcast Channels Stéphane Musy School of Computer and Communication Sciences, EPFL CH-1015 Lausanne, Switzerland Email: stephane.musy@ep.ch Abstract This paper investigates

More information

Separating codes: constructions and bounds

Separating codes: constructions and bounds Separating codes: constructions and bounds Gérard Cohen 1 and Hans Georg Schaathun 2 1 Ecole Nationale Supérieure des Télécommunications 46 rue Barrault F-75634 Paris Cedex, France cohen@enst.fr 2 Department

More information

Universal Incremental Slepian-Wolf Coding

Universal Incremental Slepian-Wolf Coding Proceedings of the 43rd annual Allerton Conference, Monticello, IL, September 2004 Universal Incremental Slepian-Wolf Coding Stark C. Draper University of California, Berkeley Berkeley, CA, 94720 USA sdraper@eecs.berkeley.edu

More information

1 Introduction to information theory

1 Introduction to information theory 1 Introduction to information theory 1.1 Introduction In this chapter we present some of the basic concepts of information theory. The situations we have in mind involve the exchange of information through

More information

Exercise 1. = P(y a 1)P(a 1 )

Exercise 1. = P(y a 1)P(a 1 ) Chapter 7 Channel Capacity Exercise 1 A source produces independent, equally probable symbols from an alphabet {a 1, a 2 } at a rate of one symbol every 3 seconds. These symbols are transmitted over a

More information

Notes 3: Stochastic channels and noisy coding theorem bound. 1 Model of information communication and noisy channel

Notes 3: Stochastic channels and noisy coding theorem bound. 1 Model of information communication and noisy channel Introduction to Coding Theory CMU: Spring 2010 Notes 3: Stochastic channels and noisy coding theorem bound January 2010 Lecturer: Venkatesan Guruswami Scribe: Venkatesan Guruswami We now turn to the basic

More information

Coding into a source: an inverse rate-distortion theorem

Coding into a source: an inverse rate-distortion theorem Coding into a source: an inverse rate-distortion theorem Anant Sahai joint work with: Mukul Agarwal Sanjoy K. Mitter Wireless Foundations Department of Electrical Engineering and Computer Sciences University

More information

SHARED INFORMATION. Prakash Narayan with. Imre Csiszár, Sirin Nitinawarat, Himanshu Tyagi, Shun Watanabe

SHARED INFORMATION. Prakash Narayan with. Imre Csiszár, Sirin Nitinawarat, Himanshu Tyagi, Shun Watanabe SHARED INFORMATION Prakash Narayan with Imre Csiszár, Sirin Nitinawarat, Himanshu Tyagi, Shun Watanabe 2/41 Outline Two-terminal model: Mutual information Operational meaning in: Channel coding: channel

More information

Equivalence for Networks with Adversarial State

Equivalence for Networks with Adversarial State Equivalence for Networks with Adversarial State Oliver Kosut Department of Electrical, Computer and Energy Engineering Arizona State University Tempe, AZ 85287 Email: okosut@asu.edu Jörg Kliewer Department

More information

Information Dimension

Information Dimension Information Dimension Mina Karzand Massachusetts Institute of Technology November 16, 2011 1 / 26 2 / 26 Let X would be a real-valued random variable. For m N, the m point uniform quantized version of

More information

Information Theory. Lecture 10. Network Information Theory (CT15); a focus on channel capacity results

Information Theory. Lecture 10. Network Information Theory (CT15); a focus on channel capacity results Information Theory Lecture 10 Network Information Theory (CT15); a focus on channel capacity results The (two-user) multiple access channel (15.3) The (two-user) broadcast channel (15.6) The relay channel

More information

Feedback Capacity of a Class of Symmetric Finite-State Markov Channels

Feedback Capacity of a Class of Symmetric Finite-State Markov Channels Feedback Capacity of a Class of Symmetric Finite-State Markov Channels Nevroz Şen, Fady Alajaji and Serdar Yüksel Department of Mathematics and Statistics Queen s University Kingston, ON K7L 3N6, Canada

More information

Soft Covering with High Probability

Soft Covering with High Probability Soft Covering with High Probability Paul Cuff Princeton University arxiv:605.06396v [cs.it] 20 May 206 Abstract Wyner s soft-covering lemma is the central analysis step for achievability proofs of information

More information

Solutions to Homework Set #3 Channel and Source coding

Solutions to Homework Set #3 Channel and Source coding Solutions to Homework Set #3 Channel and Source coding. Rates (a) Channels coding Rate: Assuming you are sending 4 different messages using usages of a channel. What is the rate (in bits per channel use)

More information

Reliable Computation over Multiple-Access Channels

Reliable Computation over Multiple-Access Channels Reliable Computation over Multiple-Access Channels Bobak Nazer and Michael Gastpar Dept. of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA, 94720-1770 {bobak,

More information

THE mismatched decoding problem [1] [3] seeks to characterize

THE mismatched decoding problem [1] [3] seeks to characterize IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 64, NO. 4, APRIL 08 53 Mismatched Multi-Letter Successive Decoding for the Multiple-Access Channel Jonathan Scarlett, Member, IEEE, Alfonso Martinez, Senior

More information

A New Achievable Region for Gaussian Multiple Descriptions Based on Subset Typicality

A New Achievable Region for Gaussian Multiple Descriptions Based on Subset Typicality 0 IEEE Information Theory Workshop A New Achievable Region for Gaussian Multiple Descriptions Based on Subset Typicality Kumar Viswanatha, Emrah Akyol and Kenneth Rose ECE Department, University of California

More information

Solutions to Homework Set #1 Sanov s Theorem, Rate distortion

Solutions to Homework Set #1 Sanov s Theorem, Rate distortion st Semester 00/ Solutions to Homework Set # Sanov s Theorem, Rate distortion. Sanov s theorem: Prove the simple version of Sanov s theorem for the binary random variables, i.e., let X,X,...,X n be a sequence

More information

Joint Compression and Digital Watermarking: Information-Theoretic Study and Algorithms Development

Joint Compression and Digital Watermarking: Information-Theoretic Study and Algorithms Development Joint Compression and Digital Watermarking: Information-Theoretic Study and Algorithms Development by Wei Sun A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for

More information

Information measures in simple coding problems

Information measures in simple coding problems Part I Information measures in simple coding problems in this web service in this web service Source coding and hypothesis testing; information measures A(discrete)source is a sequence {X i } i= of random

More information

An Achievable Rate Region for the 3-User-Pair Deterministic Interference Channel

An Achievable Rate Region for the 3-User-Pair Deterministic Interference Channel Forty-Ninth Annual Allerton Conference Allerton House, UIUC, Illinois, USA September 8-3, An Achievable Rate Region for the 3-User-Pair Deterministic Interference Channel Invited Paper Bernd Bandemer and

More information

Training-Based Schemes are Suboptimal for High Rate Asynchronous Communication

Training-Based Schemes are Suboptimal for High Rate Asynchronous Communication Training-Based Schemes are Suboptimal for High Rate Asynchronous Communication The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters. Citation

More information

On Common Information and the Encoding of Sources that are Not Successively Refinable

On Common Information and the Encoding of Sources that are Not Successively Refinable On Common Information and the Encoding of Sources that are Not Successively Refinable Kumar Viswanatha, Emrah Akyol, Tejaswi Nanjundaswamy and Kenneth Rose ECE Department, University of California - Santa

More information

Guess & Check Codes for Deletions, Insertions, and Synchronization

Guess & Check Codes for Deletions, Insertions, and Synchronization Guess & Check Codes for Deletions, Insertions, and Synchronization Serge Kas Hanna, Salim El Rouayheb ECE Department, Rutgers University sergekhanna@rutgersedu, salimelrouayheb@rutgersedu arxiv:759569v3

More information

STRONG CONVERSE FOR GEL FAND-PINSKER CHANNEL. Pierre Moulin

STRONG CONVERSE FOR GEL FAND-PINSKER CHANNEL. Pierre Moulin STROG COVERSE FOR GEL FAD-PISKER CHAEL Pierre Moulin Beckman Inst., Coord. Sci. Lab and ECE Department University of Illinois at Urbana-Champaign, USA ABSTRACT A strong converse for the Gel fand-pinsker

More information

Sequential and Dynamic Frameproof Codes

Sequential and Dynamic Frameproof Codes Sequential and Dynamic Frameproof Codes Maura Paterson m.b.paterson@rhul.ac.uk Department of Mathematics Royal Holloway, University of London Egham, Surrey TW20 0EX Abstract There are many schemes in the

More information

Refined Bounds on the Empirical Distribution of Good Channel Codes via Concentration Inequalities

Refined Bounds on the Empirical Distribution of Good Channel Codes via Concentration Inequalities Refined Bounds on the Empirical Distribution of Good Channel Codes via Concentration Inequalities Maxim Raginsky and Igal Sason ISIT 2013, Istanbul, Turkey Capacity-Achieving Channel Codes The set-up DMC

More information

Lecture 14 February 28

Lecture 14 February 28 EE/Stats 376A: Information Theory Winter 07 Lecture 4 February 8 Lecturer: David Tse Scribe: Sagnik M, Vivek B 4 Outline Gaussian channel and capacity Information measures for continuous random variables

More information

Can Feedback Increase the Capacity of the Energy Harvesting Channel?

Can Feedback Increase the Capacity of the Energy Harvesting Channel? Can Feedback Increase the Capacity of the Energy Harvesting Channel? Dor Shaviv EE Dept., Stanford University shaviv@stanford.edu Ayfer Özgür EE Dept., Stanford University aozgur@stanford.edu Haim Permuter

More information

A Half-Duplex Cooperative Scheme with Partial Decode-Forward Relaying

A Half-Duplex Cooperative Scheme with Partial Decode-Forward Relaying A Half-Duplex Cooperative Scheme with Partial Decode-Forward Relaying Ahmad Abu Al Haija, and Mai Vu, Department of Electrical and Computer Engineering McGill University Montreal, QC H3A A7 Emails: ahmadabualhaija@mailmcgillca,

More information

Channels with cost constraints: strong converse and dispersion

Channels with cost constraints: strong converse and dispersion Channels with cost constraints: strong converse and dispersion Victoria Kostina, Sergio Verdú Dept. of Electrical Engineering, Princeton University, NJ 08544, USA Abstract This paper shows the strong converse

More information

Optimal Natural Encoding Scheme for Discrete Multiplicative Degraded Broadcast Channels

Optimal Natural Encoding Scheme for Discrete Multiplicative Degraded Broadcast Channels Optimal Natural Encoding Scheme for Discrete Multiplicative Degraded Broadcast Channels Bike ie, Student Member, IEEE and Richard D. Wesel, Senior Member, IEEE Abstract Certain degraded broadcast channels

More information

arxiv: v4 [cs.it] 17 Oct 2015

arxiv: v4 [cs.it] 17 Oct 2015 Upper Bounds on the Relative Entropy and Rényi Divergence as a Function of Total Variation Distance for Finite Alphabets Igal Sason Department of Electrical Engineering Technion Israel Institute of Technology

More information

Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information

Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information 204 IEEE International Symposium on Information Theory Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information Omur Ozel, Kaya Tutuncuoglu 2, Sennur Ulukus, and Aylin Yener

More information

for some error exponent E( R) as a function R,

for some error exponent E( R) as a function R, . Capacity-achieving codes via Forney concatenation Shannon s Noisy Channel Theorem assures us the existence of capacity-achieving codes. However, exhaustive search for the code has double-exponential

More information

Universal A Posteriori Metrics Game

Universal A Posteriori Metrics Game 1 Universal A Posteriori Metrics Game Emmanuel Abbe LCM, EPFL Lausanne, 1015, Switzerland emmanuel.abbe@epfl.ch Rethnakaran Pulikkoonattu Broadcom Inc, Irvine, CA, USA, 92604 rethna@broadcom.com arxiv:1004.4815v1

More information

Appendix B Information theory from first principles

Appendix B Information theory from first principles Appendix B Information theory from first principles This appendix discusses the information theory behind the capacity expressions used in the book. Section 8.3.4 is the only part of the book that supposes

More information

An Information Theoretic View of Watermark Embedding Detection and Geometric Attacks

An Information Theoretic View of Watermark Embedding Detection and Geometric Attacks An Information Theoretic View of Watermark Embedding Detection and Geometric Attacks Neri Merhav Department of Electrical Engineering Technion - Israel Institute of Technology Technion City, Haifa 32000,

More information

EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018

EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018 Please submit the solutions on Gradescope. EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018 1. Optimal codeword lengths. Although the codeword lengths of an optimal variable length code

More information

Simultaneous Nonunique Decoding Is Rate-Optimal

Simultaneous Nonunique Decoding Is Rate-Optimal Fiftieth Annual Allerton Conference Allerton House, UIUC, Illinois, USA October 1-5, 2012 Simultaneous Nonunique Decoding Is Rate-Optimal Bernd Bandemer University of California, San Diego La Jolla, CA

More information

Joint Source-Channel Coding for the Multiple-Access Relay Channel

Joint Source-Channel Coding for the Multiple-Access Relay Channel Joint Source-Channel Coding for the Multiple-Access Relay Channel Yonathan Murin, Ron Dabora Department of Electrical and Computer Engineering Ben-Gurion University, Israel Email: moriny@bgu.ac.il, ron@ee.bgu.ac.il

More information

ProblemsWeCanSolveWithaHelper

ProblemsWeCanSolveWithaHelper ITW 2009, Volos, Greece, June 10-12, 2009 ProblemsWeCanSolveWitha Haim Permuter Ben-Gurion University of the Negev haimp@bgu.ac.il Yossef Steinberg Technion - IIT ysteinbe@ee.technion.ac.il Tsachy Weissman

More information

Distributed Functional Compression through Graph Coloring

Distributed Functional Compression through Graph Coloring Distributed Functional Compression through Graph Coloring Vishal Doshi, Devavrat Shah, Muriel Médard, and Sidharth Jaggi Laboratory for Information and Decision Systems Massachusetts Institute of Technology

More information

Information Theory and Hypothesis Testing

Information Theory and Hypothesis Testing Summer School on Game Theory and Telecommunications Campione, 7-12 September, 2014 Information Theory and Hypothesis Testing Mauro Barni University of Siena September 8 Review of some basic results linking

More information

UCSD ECE 255C Handout #12 Prof. Young-Han Kim Tuesday, February 28, Solutions to Take-Home Midterm (Prepared by Pinar Sen)

UCSD ECE 255C Handout #12 Prof. Young-Han Kim Tuesday, February 28, Solutions to Take-Home Midterm (Prepared by Pinar Sen) UCSD ECE 255C Handout #12 Prof. Young-Han Kim Tuesday, February 28, 2017 Solutions to Take-Home Midterm (Prepared by Pinar Sen) 1. (30 points) Erasure broadcast channel. Let p(y 1,y 2 x) be a discrete

More information

On The Binary Lossless Many-Help-One Problem with Independently Degraded Helpers

On The Binary Lossless Many-Help-One Problem with Independently Degraded Helpers On The Binary Lossless Many-Help-One Problem with Independently Degraded Helpers Albrecht Wolf, Diana Cristina González, Meik Dörpinghaus, José Cândido Silveira Santos Filho, and Gerhard Fettweis Vodafone

More information

Solutions to Homework Set #2 Broadcast channel, degraded message set, Csiszar Sum Equality

Solutions to Homework Set #2 Broadcast channel, degraded message set, Csiszar Sum Equality 1st Semester 2010/11 Solutions to Homework Set #2 Broadcast channel, degraded message set, Csiszar Sum Equality 1. Convexity of capacity region of broadcast channel. Let C R 2 be the capacity region of

More information