Covert Communication with Channel-State Information at the Transmitter
Si-Hyeon Lee
Joint work with Ligong Wang, Ashish Khisti, and Gregory W. Wornell
July 27, 2017
Covert Communication
The transmitter and receiver wish to communicate reliably while ensuring that their communication is not detected by the warden.
Covert Communication
Hypothesis test by the warden:
- Probability of false alarm: α
- Probability of missed detection: β
- Covertness is guaranteed if D(P_{Z,sending} ‖ P_{Z,not sending}) is small
- Blind test: α + β = 1
- Optimal test: α + β ≥ 1 − √(D(P_{Z,sending} ‖ P_{Z,not sending}) / 2)
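The bound above can be evaluated numerically. A minimal sketch, assuming a hypothetical warden who observes Bernoulli samples with parameter 0.11 when the transmitter sends and 0.10 when it does not (example numbers, not from the talk):

```python
import math

def kl(p, q):
    """Binary KL divergence D(Bern(p) || Bern(q)) in nats."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# Hypothetical warden observation: Bern(0.11) when sending vs Bern(0.10) when not.
d = kl(0.11, 0.10)

# Pinsker-type bound: alpha + beta >= 1 - sqrt(D/2) for any hypothesis test.
bound = 1 - math.sqrt(d / 2)
print(f"D = {d:.6f} nats, alpha + beta >= {bound:.4f}")
```

A small divergence forces the sum of the warden's error probabilities close to 1, i.e., close to a blind test.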
Square Root Law
(Channel diagram: transmitter input X, receiver output Y with noise N_Y, warden output Z with noise N_Z.)
The maximum amount of information scales like the square root of the blocklength:
- AWGN channel [Bash-Goeckel-Towsley 13]
- A broad class of DMCs [Bloch 16], [Wang-Wornell-Zheng 16]
When Can We Beat the Square Root Law?
- Unknown channel statistics at the warden:
  - BSCs with unknown crossover probability [Che-Bakshi-Chan-Jaggi 14]
  - AWGN with unknown noise power [Lee-Baxley-Weitnauer-Walkenhorst 15]
- This talk: state-dependent channel with CSI at the transmitter
Binary Symmetric Channel
(Channel diagram: BSC from X to Z with crossover probability p.)
- p fixed and known: zero capacity [Bloch 16], [Wang-Wornell-Zheng 16]
- p random and unknown: positive capacity [Che-Bakshi-Chan-Jaggi 14]
- p fixed and known, realizations known to the transmitter (our model): positive capacity by pre-cancellation
Model
(System diagram: message M and secret key K enter the transmitter, which produces X^n; the channel p(y, z | s, x) delivers Y^n to the receiver and Z^n to the warden; the state is available to the transmitter as S^i or S^n; the receiver outputs M̂.)
- State-dependent DMC (𝒳, 𝒮, 𝒴, 𝒵, P_S, P_{Y,Z|S,X})
- Transmitter and receiver share a secret key K ∈ [1 : 2^{nR_K}]
- Input cost function b(x^n) = (1/n) Σ_{i=1}^n b(x_i)
- State sequence known to the transmitter causally or noncausally
- (R, R_K, n) code for causal CSI: encoding function at time i maps (M, K, S^i) → X_i; decoding function maps (Y^n, K) → M̂
- (R, R_K, n) code for noncausal CSI: encoding function maps (M, K, S^n) → X^n; decoding function maps (Y^n, K) → M̂
Model
- Covert communication: x_0 ∈ 𝒳 is the "no input" symbol, and Q_0(·) = Σ_{s ∈ 𝒮} P_S(s) P_{Z|S,X}(· | s, x_0)
- Small D(P̂_{Z^n} ‖ Q_0^{×n}) ensures that the warden's best hypothesis test performs not much better than a blind test
- A covert rate of R is achievable if there is a sequence of (R, R_K, n) codes that simultaneously satisfies
  - input cost constraint: lim sup_{n→∞} E_{M,K,S^n}[b(X^n)] ≤ B
  - reliability constraint: lim_{n→∞} P(M̂ ≠ M) = 0
  - covertness constraint: lim_{n→∞} D(P̂_{Z^n} ‖ Q_0^{×n}) = 0
- Covert capacity C: supremum of all achievable covert rates
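The reference distribution Q_0 is easy to compute for a finite-alphabet instance. A minimal sketch, assuming a hypothetical binary example with Z = X ⊕ S and x_0 = 0 (the alphabets and channel here are illustrative, not from the slides):

```python
# Hypothetical finite-alphabet example: binary state S ~ Bern(p) and a
# deterministic warden observation Z = X xor S, with "no input" symbol x0 = 0.
p = 0.3
P_S = {0: 1 - p, 1: p}

def P_Z_given_SX(z, s, x):
    """Deterministic channel to the warden: Z = X xor S."""
    return 1.0 if z == (x ^ s) else 0.0

x0 = 0
# Q_0(z) = sum_s P_S(s) P_{Z|S,X}(z | s, x0): warden's distribution when not sending.
Q0 = {z: sum(P_S[s] * P_Z_given_SX(z, s, x0) for s in (0, 1)) for z in (0, 1)}
print(Q0)  # with x0 = 0 we get Z = S, so Q0 matches the state distribution
```

Covertness then requires the warden's output distribution under transmission to stay close (in relative entropy) to this Q_0 taken i.i.d. over the block.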
Main Results: Causal CSI

Theorem 1 (Upper Bound). For R_K ≥ 0 and B ≥ 0, the covert capacity is upper-bounded as
C ≤ max_{P_V, x(v,s): P_Z = Q_0, E[b(X)] ≤ B} I(V; Y).

Theorem 2 (Lower Bound). For R_K ≥ 0 and B ≥ 0, the covert capacity is lower-bounded as
C ≥ max_{P_V, x(v,s): P_Z = Q_0, E[b(X)] ≤ B, I(V;Z) − I(V;Y) < R_K} I(V; Y).
Main Results: Noncausal CSI

Theorem 3 (Upper Bound). For R_K ≥ 0 and B ≥ 0, the covert capacity is upper-bounded as
C ≤ max_{P_{U|S}, x(u,s): P_Z = Q_0, E[b(X)] ≤ B} I(U; Y) − I(U; S).

Theorem 4 (Lower Bound). For R_K ≥ 0 and B ≥ 0, the covert capacity is lower-bounded as
C ≥ max_{P_{U|S}, x(u,s): P_Z = Q_0, E[b(X)] ≤ B, I(U;Z) − I(U;Y) < R_K} I(U; Y) − I(U; S).
Example: Binary Symmetric Channels
(Channel: Y = Z = X ⊕ S with S ~ Bern(p); R_K > 0, x_0 = 0.)
- Without CSI at the transmitter: C = 0
- With CSI at the transmitter: C = H_b(p) for both causal and noncausal cases
- Achieved by X = V ⊕ S, with V ~ Bern(p) independent of S
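The pre-cancellation scheme X = V ⊕ S can be checked by a quick simulation; a minimal sketch, assuming Y = Z = X ⊕ S with S ~ Bern(p) and p = 0.2 (an illustrative value):

```python
import random

random.seed(0)
p, n = 0.2, 200_000

S = [1 if random.random() < p else 0 for _ in range(n)]  # state, Bern(p)
V = [1 if random.random() < p else 0 for _ in range(n)]  # message-bearing sequence, Bern(p), independent of S

X = [v ^ s for v, s in zip(V, S)]   # pre-cancel the known state
Z = [x ^ s for x, s in zip(X, S)]   # receiver and warden both observe Y = Z = X xor S

# The receiver recovers V perfectly, since X xor S = V ...
assert Z == V
# ... while the warden sees i.i.d. Bern(p) samples, exactly the distribution of
# Z = S it would see under the no-input symbol x0 = 0.
print(sum(Z) / n)  # empirical mean, close to p
```

The warden's observation is statistically indistinguishable from the not-sending case, yet the receiver decodes an entropy-H_b(p) sequence, matching C = H_b(p).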
Example: AWGN Channels
(Channel: Y = X + S + N_Y and Z = X + S + N_Z, with E[X²] ≤ P, S ~ N(0, T), N_Y ~ N(0, 1), N_Z ~ N(0, σ²).)
- Without CSI at the transmitter: C = 0
- With CSI at the transmitter: C > 0
- Make room for message transmission by reducing the interference power, i.e., X = X′ − γ* S
- Choose P′ and γ* to simultaneously satisfy the covertness constraint and the input power constraint, i.e., γ* = min{1, P/(2T)}, T′ = (1 − γ*)² T, P′ = T − T′
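The parameter choice above can be verified numerically: a minimal sketch computing γ*, T′, and P′ for hypothetical powers P = 1 and T = 4, and checking that both constraints are met with equality:

```python
def precancel_params(P, T):
    """Partial pre-cancellation parameters from the slides:
    gamma* = min{1, P/(2T)},  T' = (1 - gamma*)^2 T,  P' = T - T'."""
    gamma = min(1.0, P / (2 * T))
    T_res = (1 - gamma) ** 2 * T      # residual interference power T'
    P_eff = T - T_res                 # effective message power P'
    return gamma, T_res, P_eff

# Example numbers (hypothetical): input power P = 1, interference power T = 4.
gamma, T_res, P_eff = precancel_params(1.0, 4.0)
print(gamma, T_res, P_eff)  # 0.125 3.0625 0.9375

# Input power constraint: E[X^2] = P' + gamma^2 T = P.
assert abs(P_eff + gamma**2 * 4.0 - 1.0) < 1e-12
# Covertness constraint: warden's received power P' + (1 - gamma)^2 T = T.
assert abs(P_eff + T_res - 4.0) < 1e-12
```

The two assertions spell out why this particular (γ*, T′, P′) works: the transmit power budget is exactly used, and the warden's marginal keeps the same power as the not-sending case.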
Example: AWGN Channels
- Causal CSI at the transmitter
- Choose V = X′ and treat the residual interference as noise at the receiver
Example: AWGN Channels
Recall γ* = min{1, P/(2T)}, T′ = (1 − γ*)² T, P′ = T − T′.

Theorem 5 (Causal CSI). If
R_K > (1/2) log(1 + P′/(T′ + σ²)) − (1/2) log(1 + P′/(T′ + 1)),
then the covert capacity with causal CSI at the transmitter is lower-bounded as
C_c ≥ (1/2) log(1 + P′/(T′ + 1)).

If the warden's channel is degraded, i.e., σ² > 1, a secret key is not needed to achieve this rate.
Example: AWGN Channels
- Noncausal CSI at the transmitter
- Choose U as in dirty paper coding with input power constraint P′ and interference power T′
Example: AWGN Channels
Recall γ* = min{1, P/(2T)}, T′ = (1 − γ*)² T, P′ = T − T′.

Theorem 6 (Noncausal CSI). If
R_K > (1/2) log(1 + (P′ + P′T′/(P′+1))² / [(P′ + (P′/(P′+1))² T′)(P′ + T′ + σ²) − (P′ + P′T′/(P′+1))²])
     − (1/2) log(1 + (P′ + P′T′/(P′+1))² / [(P′ + (P′/(P′+1))² T′)(P′ + T′ + 1) − (P′ + P′T′/(P′+1))²]),
then the covert capacity with noncausal CSI at the transmitter is given by
C_nc = (1/2) log(1 + P′).

If the warden's channel is degraded, i.e., σ² > 1, a secret key is not needed to achieve the capacity.
Example: AWGN Channels
For R_K sufficiently large,
C_c ≥ (1/2) log(1 + P′/(T′ + 1)),
C_nc = (1/2) log(1 + P′).
- If T′ = 0, i.e., T ≤ P/2, then C_c = C_nc.
- As T → ∞, P′ → P, and hence C_nc approaches the capacity without a covertness constraint.
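Both limiting behaviors can be checked numerically; a minimal sketch evaluating the two rate expressions above (the specific P and T values are illustrative):

```python
import math

def covert_rates(P, T):
    """Causal lower bound and noncausal covert capacity from the slides,
    assuming R_K is sufficiently large (rates in nats)."""
    gamma = min(1.0, P / (2 * T))
    T_res = (1 - gamma) ** 2 * T      # T'
    P_eff = T - T_res                 # P'
    C_c = 0.5 * math.log(1 + P_eff / (T_res + 1))
    C_nc = 0.5 * math.log(1 + P_eff)
    return C_c, C_nc

# When T <= P/2, gamma* = 1 and T' = 0, so the two rates coincide.
c_c, c_nc = covert_rates(4.0, 1.0)
assert abs(c_c - c_nc) < 1e-12

# As T grows, P' -> P, so C_nc approaches the non-covert capacity (1/2) log(1 + P).
print(covert_rates(1.0, 1e6)[1], 0.5 * math.log(2))
```

The second check illustrates the perhaps surprising conclusion of the slide: very strong known interference costs essentially nothing in covert rate.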
Achievability with Noncausal CSI
Key idea: Gelfand-Pinsker coding, except that the likelihood encoding of [Song-Cuff-Poor 16], [Goldfeld-Kramer-Permuter-Cuff 17] is used instead of joint-typicality encoding.
- Codebook generation: Fix P_{U|S} and x(u, s) such that P_Z = Q_0 and E[b(X)] ≤ B/(1+ε). For each k ∈ [1 : 2^{nR_K}] and m ∈ [1 : 2^{nR}], randomly generate a subcodebook C(k, m) = {u^n(k, m, l) : l ∈ [1 : 2^{nR′}]} according to Π_{i=1}^n P_U(u_i).
- Encoding: Given state sequence s^n, secret key k, and message m, evaluate the likelihood
g(l | s^n, k, m) = P^n_{S|U}(s^n | u^n(k, m, l)) / Σ_{l′ ∈ [1 : 2^{nR′}]} P^n_{S|U}(s^n | u^n(k, m, l′)).
The encoder randomly generates l according to g(· | s^n, k, m) and transmits x^n, where x_i = x(u_i(k, m, l), s_i).
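The likelihood-encoding step can be sketched for a toy binary alphabet. This is a simplified illustration of one subcodebook C(k, m), with hypothetical distributions P_U and P_{S|U} and a tiny blocklength, not the construction's actual parameters:

```python
import random

random.seed(1)

# Toy setup (hypothetical): binary U, binary S, one subcodebook of num_l codewords.
n, num_l = 8, 16
P_U = {0: 0.5, 1: 0.5}
P_S_given_U = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}

# Subcodebook C(k, m): codewords u^n(k, m, l) drawn i.i.d. from P_U.
codebook = [[0 if random.random() < P_U[0] else 1 for _ in range(n)]
            for _ in range(num_l)]

def likelihood(u_seq, s_seq):
    """P^n_{S|U}(s^n | u^n), the product of per-letter likelihoods."""
    prob = 1.0
    for u, s in zip(u_seq, s_seq):
        prob *= P_S_given_U[u][s]
    return prob

s_seq = [0, 1, 0, 0, 1, 1, 0, 1]  # an example observed state sequence

# g(l | s^n, k, m): likelihood of each codeword, normalized over the subcodebook.
weights = [likelihood(u, s_seq) for u in codebook]
total = sum(weights)
g = [w / total for w in weights]

# Randomly generate the index l according to g, as the encoder does.
l = random.choices(range(num_l), weights=g)[0]
print(l, g[l])
```

The encoder thus tilts its codeword choice toward codewords that "explain" the observed state sequence well, which is what lets the induced state-codeword statistics approximate the desired joint distribution.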
Achievability with Noncausal CSI
- Decoding: Upon receiving y^n, with access to the secret key k, the decoder declares that m̂ was sent if it is the unique message such that (u^n(k, m̂, l), y^n) ∈ T_ε^{(n)} for some l ∈ [1 : 2^{nR′}]; otherwise it declares an error.
- Covertness analysis: Let Γ denote the joint distribution when the codeword index in the subcodebook is chosen uniformly at random and the state sequence is then generated in an i.i.d. manner according to P_{S|U}. Then
  - P̂_{Z^n} ≈ Γ_{Z^n} if R′ > I(U; S)
  - Γ_{Z^n} ≈ Q_0^{×n} if R_K + R + R′ > I(U; Z)
- Reliability analysis: Done by the packing lemma, requiring R + R′ < I(U; Y).
Converse with Noncausal CSI
- Step (a): Bounding techniques for channels with noncausal CSI without a covertness constraint, where U_i := (M, K, Y^{i−1}, S^n_{i+1}).
- Step (b): Characterization of the capacity function
C(A, B) := max_{P_{U|S}, P_{X|U,S}: E[b(X)] ≤ B, D(P_Z ‖ Q_0) ≤ A} (I(U; Y) − I(U; S)).
The function C(A, B) is non-decreasing in each of A and B, and concave and continuous in (A, B).
- Step (c): Application of the covertness and input cost constraints:
R ≤(a) (1/n) Σ_{i=1}^n (I(U_i; Y_i) − I(U_i; S_i)) + ε_n
  ≤(b) (1/n) Σ_{i=1}^n C(D(P̂_{Z_i} ‖ Q_0), E[b(X_i)]) + ε_n
  ≤(b) C((1/n) Σ_{i=1}^n D(P̂_{Z_i} ‖ Q_0), (1/n) Σ_{i=1}^n E[b(X_i)]) + ε_n
  →(c) C(0, B)
Conclusion
- Considered a state-dependent channel with state information at the transmitter
- Characterized the covert capacity when a sufficiently long secret key is shared between the transmitter and the receiver
- Derived lower bounds on the rate of the secret key needed to achieve the covert capacity
- For certain channel models, showed that the covert capacity is positive with CSI at the transmitter but zero without CSI
- The full version will be uploaded to arXiv soon