Secure Transmission with Multiple Antennas: The MIMOME Channel

Ashish Khisti and Gregory Wornell (Dept. of EECS, MIT, Cambridge, MA; {khisti,gww}@mit.edu). This work was supported in part by NSF under Grant No. CCF. Part of the material in this work appeared at the Allerton Conference on Communication, Control, and Computing, 2007 [1].

Abstract: The Gaussian wiretap channel model is studied when there are multiple antennas at the sender, the receiver and the eavesdropper, and the channel matrices are fixed and known to all the terminals. The secrecy capacity is characterized using a Sato-type upper bound. A computable characterization of the capacity is provided. The secrecy capacity is studied in several interesting regimes. The high SNR secrecy capacity is attained by simultaneously diagonalizing the channel matrices using the generalized singular value decomposition and independently coding across the resulting parallel channels. An explicit closed-form solution in terms of the generalized singular values of the channel matrices is provided in this regime. In addition to the capacity achieving scheme, we also study achievable rates from a synthetic noise transmission strategy. Interestingly, this rate can also be expressed in terms of the generalized singular values in the high SNR regime. Necessary and sufficient conditions for the secrecy capacity to be zero are provided and further studied when the entries of the channel matrices are sampled i.i.d. CN(0, 1) and the dimensions of the matrices go to infinity. In this regime, we show, rather interestingly, that an asymmetric allocation that divides the antennas between the sender and the receiver in the ratio 2:1 maximizes the number of antennas required by the eavesdropper for the secrecy capacity to be zero.

I. INTRODUCTION

Multiple antennas are a valuable resource in wireless communications. Recently there has been significant activity in exploring both the theoretical and practical aspects of wireless systems with multiple antennas. In this work we explore the role of multiple antennas for physical layer security, which is an emerging area of interest. The wiretap channel [2] is an information theoretic model for physical layer security. The setup has three terminals: one sender, one receiver and one eavesdropper. The goal is to exploit the structure of the underlying broadcast channel to transmit a message reliably to the intended receiver, while leaking asymptotically no information to the eavesdropper. A single letter characterization of the secrecy capacity, when the underlying channel is a discrete memoryless broadcast channel, has been obtained by Csiszár and Körner [3]. An explicit solution for the scalar Gaussian case is obtained in [4], where the optimality of Gaussian codebooks is established.

In this paper we consider the case where all three terminals have multiple antennas, and we naturally refer to it as the multiple-input, multiple-output, multiple-eavesdropper (MIMOME) channel. In this setup we assume that the channel matrices are fixed and known to all the three terminals. While the assumption that the eavesdropper's channel is known to both the sender and the receiver is obviously a strong assumption, we remark in advance that our solution provides ultimate limits on secure transmission with multiple antennas and could be a starting point for other formulations.

The problem of secure communication with multiple antennas has been extensively studied recently. We summarize some of the literature here.
The case when the channel matrices of the intended receiver and the eavesdropper are square and diagonal follows from the results in [5]-[8], which consider secure transmission over fading channels. These works establish that for the special case of independent parallel channels, it suffices to use independent codebooks across the channels. The MIMOME channel is a non-degraded broadcast channel to which the Csiszár and Körner capacity expression [3] applies in principle. Nevertheless, computing the explicit capacity via [3] appears difficult, as already noted in, e.g., [9]-[13]. To our knowledge, the first computable upper bound for the secrecy capacity of the multi-antenna wiretap channel was reported in our earlier works [14], [15] and has been used to establish the secrecy capacity when the intended receiver has a single antenna (the MISOME case). This approach involves revealing the output of the eavesdropper's channel to the legitimate receiver to create a fictitious degraded broadcast channel, and results in a minimax expression for the upper bound, analogous to the Sato technique used to upper bound the sum capacity of the multi-antenna broadcast channel (see, e.g., [16]). This minimax upper bound is analytically simplified for the MISOME case and a closed form expression for the secrecy capacity is obtained in [14], [15]. In addition, a number of useful insights are developed into the behavior of the secrecy capacity. In the high signal-to-noise-ratio (SNR) regime, a simple masked beamforming scheme, first studied in [10], is shown to be near optimal. Also, the scaling behavior of the secrecy capacity in the limit of many antennas is studied. We note that this upper bounding approach has been independently conceived by Ulukus et al. [17] and further applied to the channel studied in [18]. Subsequently, this minimax upper bound has been shown to be tight for the MIMOME case in the independent works [1] and [19] (see also [20]). Both works use the minimax upper bound in [15] as a starting point and work with the optimality conditions to establish that the saddle value is achievable with Gaussian inputs in the Csiszár and Körner expression. Subsequently, T. Liu and S. Shamai [21] use a different approach, based on the channel enhancement technique of [22], to also establish the secrecy capacity. In our opinion these two approaches shed complementary insights into the problem. The minimax

upper bounding approach in [1], [19] provides a computable characterization of the capacity expression and identifies a hidden convexity in optimizing the Csiszár and Körner expression with Gaussian inputs, whereas the techniques in [21] do not provide a computable characterization of the capacity. On the other hand, [21] establishes the capacity for any covariance constraint on the input distribution, not just the sum power constraint considered in [1], [19]. Finally, the diversity-multiplexing tradeoff of the multi-antenna wiretap channel has been recently studied in [23].

This paper is organized as follows. We describe the channel model in Section II, and the main results of the paper are summarized in Section III. The proof of the secrecy capacity of the MIMOME channel is presented in Section IV. We further study the capacity in the high signal-to-noise-ratio (SNR) regime in Section V. In the high SNR regime, a capacity achieving scheme involves simultaneously diagonalizing both the channel matrices, using the generalized singular value decomposition, and using independent codebooks across the resulting parallel channels. In addition to the capacity achieving scheme, a synthetic noise transmission scheme is also analyzed. This scheme is semi-blind: it selects the transmit directions based only on the channel of the legitimate receiver, but needs the knowledge of the eavesdropper's channel for selecting the rate. Interestingly, the high SNR rate of this scheme can also be expressed in terms of the generalized singular values of the channel matrices. In Section VI, necessary and sufficient conditions on the channel matrices for the secrecy capacity to be zero are stated. For i.i.d. Rayleigh fading, some scaling laws in the limit of many antennas are also developed. The conclusions are provided in Section VII.

II. CHANNEL MODEL

We denote the number of antennas at the sender, the receiver and the eavesdropper by n_t, n_r and n_e, respectively. The received signals at the intended receiver and the eavesdropper at time t are

    y_r(t) = H_r x(t) + z_r(t)
    y_e(t) = H_e x(t) + z_e(t),    (1)

where H_r ∈ C^{n_r × n_t} and H_e ∈ C^{n_e × n_t} are the channel matrices associated with the receiver and the eavesdropper. The channel matrices are fixed for the entire transmission period and known to all the three terminals. The additive noises z_r(t) and z_e(t) are circularly-symmetric, complex-valued Gaussian random variables. The input satisfies a power constraint E[(1/n) Σ_{t=1}^{n} ||x(t)||²] ≤ P. A rate R is achievable if there exists a sequence of length-n codes such that the error probability at the intended receiver and (1/n) I(w; y_e^n) both approach zero as n → ∞. The secrecy capacity is the supremum of all achievable rates.

A. Notation

Bold upper and lower case characters are used for matrices and vectors, respectively. Random variables are distinguished from realizations by the use of sans-serif fonts for the former and seriffed fonts for the latter. We generally reserve the symbol I for mutual information, H for entropy, and h for differential entropy. All logarithms are base-2 unless otherwise indicated. The set of all n-dimensional complex-valued vectors is denoted by C^n, and the set of m × n complex matrices is denoted by C^{m×n}. Matrix transposition is denoted by the superscript T, the Hermitian (i.e., conjugate) transpose of a matrix is denoted by the superscript †, and we also make use of the Moore-Penrose pseudo-inverse of a matrix and of the projection matrix onto its null space (denoted by the superscript ⊥). Moreover, Null(·) denotes the null space of its matrix argument, and tr(·) and det(·) denote the trace and determinant of a matrix, respectively.
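The following minimal Python/NumPy sketch illustrates the channel model (1) and the Gaussian-input log-det rate difference that appears in the results below; the antenna counts, the power P, and the isotropic input covariance K = (P/n_t) I are illustrative assumptions rather than choices made in the paper.

# Minimal sketch of the MIMOME channel model (1) and of the Gaussian-input
# rate difference log det(I + H_r K H_r^H) - log det(I + H_e K H_e^H).
# Sizes, SNR, and the isotropic covariance below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_t, n_r, n_e, P = 4, 3, 2, 10.0                 # antennas and power (assumed)

def cn(m, n):                                    # i.i.d. CN(0,1) entries
    return (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)

H_r, H_e = cn(n_r, n_t), cn(n_e, n_t)            # fixed, known channel matrices

# One channel use of (1): y_r = H_r x + z_r,  y_e = H_e x + z_e.
x = np.sqrt(P / n_t) * cn(n_t, 1)                # satisfies E||x||^2 <= P
y_r = H_r @ x + cn(n_r, 1)
y_e = H_e @ x + cn(n_e, 1)

def logdet2(A):                                  # log det in bits
    return np.linalg.slogdet(A)[1] / np.log(2.0)

def gaussian_rate(K):                            # log-det difference used in Sec. III
    return (logdet2(np.eye(n_r) + H_r @ K @ H_r.conj().T)
            - logdet2(np.eye(n_e) + H_e @ K @ H_e.conj().T))

K_iso = (P / n_t) * np.eye(n_t)                  # a feasible, generally suboptimal covariance
print("rate with isotropic K: %.3f bits/channel use" % gaussian_rate(K_iso))

The isotropic covariance is only one feasible point of the power constraint: the printed value can even be negative when the eavesdropper's channel is stronger, and the optimal input covariance is instead characterized by the minimax problem of Theorem 1 below.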
The notation A ⪰ 0 means that A is a positive semidefinite matrix. I denotes the identity matrix and 0 denotes the all-zeros matrix; the dimensions of these matrices will be suppressed and will be clear from the context.

III. SUMMARY OF MAIN RESULTS

In this section we summarize the main results of the paper; the proofs are developed in the subsequent sections.

A. MIMOME Secrecy Capacity

The secrecy capacity of the MIMOME channel is stated in the theorem below.

Theorem 1: The secrecy capacity of the MIMOME wiretap channel is

    C = min_{K_Φ ∈ 𝒦_Φ} max_{K_P ∈ 𝒦_P} R_+(K_P, K_Φ),    (2)

where R_+(K_P, K_Φ) = I(x; y_r | y_e) with x ~ CN(0, K_P) and

    𝒦_P = { K_P ⪰ 0 : tr(K_P) ≤ P },    (3)

and where [z_r, z_e] ~ CN(0, K_Φ), with

    𝒦_Φ = { K_Φ = [ I_{n_r}  Φ ; Φ†  I_{n_e} ] : K_Φ ⪰ 0 }
        = { K_Φ = [ I_{n_r}  Φ ; Φ†  I_{n_e} ] : σ_max(Φ) ≤ 1 }.    (4)

Furthermore, the minimax problem in (2) has a saddle point solution (K̄_P, K̄_Φ), and the secrecy capacity can also be expressed as

    C = R_+(K̄_P, K̄_Φ) = log det(I + H_r K̄_P H_r†) − log det(I + H_e K̄_P H_e†).    (5)

1) Connection with the Csiszár and Körner capacity: A characterization of the secrecy capacity for the non-degraded discrete memoryless broadcast channel p_{y_r, y_e | x} is provided by Csiszár and Körner [3],

    C = max_{p_u, p_{x|u}} I(u; y_r) − I(u; y_e),    (6)

where u is an auxiliary random variable over a certain alphabet with bounded cardinality that satisfies the Markov chain u → x → (y_r, y_e). As remarked in [3], the secrecy capacity (6) can be extended in principle to incorporate continuous-valued

inputs. However, directly identifying the optimal u for the MIMOME case is not straightforward. Theorem 1 indirectly establishes an optimal choice of u in (6). Suppose that (K̄_P, K̄_Φ) is a saddle point solution to the minimax problem in (2). From (5) we have

    R_+(K̄_P, K̄_Φ) = R_−(K̄_P),    (7)

where

    R_−(K_P) ≜ log det(I + H_r K_P H_r†) − log det(I + H_e K_P H_e†)

is the achievable rate obtained by evaluating (6) for u = x ~ CN(0, K_P). This choice of (p_u, p_{x|u}) thus maximizes (6). Furthermore, note that

    K̄_P ∈ argmax_{K_P ∈ 𝒦_P} { log det(I + H_r K_P H_r†) − log det(I + H_e K_P H_e†) },    (8)

where the set 𝒦_P is defined in (3). Unlike the minimax problem (2), the maximization problem (8) is not a convex optimization problem, since the objective function is not a concave function of K_P. Even if one verifies that K̄_P satisfies the optimality conditions associated with (8), this will only establish that K̄_P is a locally optimal solution. The capacity expression (2) provides a convex reformulation of (8) and establishes that K̄_P is a globally optimal solution in (8). (The high SNR case of this problem, i.e., max_{K_P ∈ 𝒦_P} log [det(H_r K_P H_r†) / det(H_e K_P H_e†)], is known as the multiple discriminant function in multivariate statistics and is well studied; see, e.g., [24].)

2) Structure of the optimal solution: The saddle point solution (K̄_P, K̄_Φ) satisfies a certain necessary condition that admits an intuitive interpretation. In particular, in the proof of Theorem 1, we show the following. Let S be any full column rank matrix that satisfies K̄_P = S S†, and let Φ̄ be the cross-covariance matrix between the noise random variables in (2) (c.f. (4)); then

    H_e S = Φ̄† H_r S.    (9)

Note that Φ̄ is a contraction matrix, i.e., all its singular values are less than or equal to unity. The column space of S is the subspace in which the sender transmits information. So (9) states that no information is transmitted along any direction where the eavesdropper observes a stronger signal than the intended receiver. The effective channel of the eavesdropper, H_e S, is a degraded version of the effective channel of the intended receiver, H_r S, even though the channel matrices may not be ordered a priori. This condition explains why the genie upper bound, which provides y_e to the legitimate receiver (c.f. Lemma 1), does not increase the capacity of the fictitious channel.

B. Capacity Analysis in the High SNR Regime

1) Capacity achieving scheme: While the capacity expression in Theorem 1 can be computed numerically, it does not admit a closed form solution. In this section, we develop a closed form expression for the capacity in the high signal-to-noise-ratio (SNR) regime, in terms of the generalized singular values of the channel matrices H_r and H_e. The main message here is that in the high SNR regime, an optimal scheme involves simultaneously diagonalizing the channel matrices H_r and H_e using the generalized singular value decomposition (GSVD). This creates a set of parallel independent channels between the sender and the receivers, and it suffices to use independent Gaussian codebooks across these channels. This architecture for the case of 2 × 2 channel matrices is shown in Fig. 1. The reader is referred to Appendix VIII for the definition and properties of the GSVD transform.

Theorem 2: Let σ_1 ≤ σ_2 ≤ ... ≤ σ_s be the generalized singular values of the channel matrices H_r and H_e. The high SNR secrecy capacity is given as follows.
If

    Null(H_e) ∩ Null(H_r)^⊥ = {0},    (10)

then

    lim_{P→∞} C(P) = Σ_{j: σ_j > 1} log σ_j²;    (11)

otherwise,

    C(P) = Σ_{j: σ_j > 1} log σ_j² + log det( I + (P/p) H_r H_e^⊥ H_r† ) + o_P(1),    (12)

where p is a constant that depends on H_r and H_e, defined via (138), o_P(1) → 0 as P → ∞, and H_e^⊥ ∈ C^{n_t × n_t} is the projection matrix (see (147)) onto the null space of H_e.

2) Synthetic noise transmission strategy: In addition to the capacity achieving scheme, we consider a suboptimal strategy in which the choice of transmit vectors depends only on the knowledge of H_r. The strategy is only semi-blind: the rate allocated does depend on both H_r and H_e. The performance of a similar strategy was first studied via Monte Carlo simulations for the MISOME case by Negi and Goel [10], [11]. Subsequently, several analytical properties and conditions of optimality were reported in [15]. Here we extend that framework to the MIMOME channel. For simplicity, we limit the discussion to the case when rank(H_r) = n_r and rank(H_e) = n_t. This case normally arises when n_r ≤ n_t and n_e ≥ n_t and the channel matrices have a full row or column rank, respectively.

The transmission scheme can be described as imposing a particular choice of (x, u) in the binning scheme (6). Let b_1, ..., b_{n_t} be independent Gaussian random variables sampled according to CN(0, P_t), where P_t = P/n_t. Let H_r = U Λ V† be the compact SVD of H_r. Since rank(H_r) = n_r, note that U ∈ C^{n_r × n_r} is a unitary matrix and Λ ∈ C^{n_r × n_r} is a diagonal matrix. Let V = [v_1, ..., v_{n_r}] ∈ C^{n_t × n_r}, and let {v_j}_{j=1}^{n_t} constitute an orthogonal basis in C^{n_t}. Our choice of parameters is

    x = Σ_{j=1}^{n_t} b_j v_j,    u = (b_1, ..., b_{n_r}).    (13)

Here the symbols in u are the information bearing symbols from a corresponding codeword, while the symbols b_{n_r+1}, ..., b_{n_t} are synthetic noise symbols transmitted in

the null space of the legitimate receiver's channel in order to confuse a potential eavesdropper.

[Fig. 1. Simultaneous diagonalization via the GSVD transform. The left figure shows the original channel model with 2 × 2 channel matrices H_r and H_e. The right figure shows the GSVD transform applied to the channel matrices, i.e., H_r = Ψ_r Σ_r Ω^{-1} and H_e = Ψ_e Σ_e Ω^{-1}, where Ψ_r and Ψ_e are unitary matrices and Σ_r and Σ_e are diagonal matrices.]

In the high SNR regime, the rate expression (83) can be expressed in terms of the generalized singular values of (H_r, H_e). In particular,

    lim_{P→∞} R_SN(P) = Σ_{j=1}^{s} log σ_j².    (14)

It is interesting to compare the expression (14) with the high SNR capacity expression (11). While the capacity expression involves a summation over only those generalized singular values that exceed unity, the synthetic noise transmission scheme involves a summation over all the singular values and hence is sub-optimal. Rather surprisingly, both the capacity achieving scheme and the synthetic noise scheme can be characterized using just the generalized singular values of (H_r, H_e) in the high SNR regime.

C. Zero-Capacity Condition and Scaling Laws

Under what conditions is the secrecy capacity zero? We develop some sharp insights into these conditions in the limit of many antennas.

Corollary 1: Suppose that H_r and H_e have i.i.d. CN(0, 1) entries, and suppose that n_r, n_e, n_t → ∞ while keeping n_r/n_e = γ and n_t/n_e = β fixed. The secrecy capacity C(H_r, H_e) converges almost surely to zero if and only if

    0 ≤ β ≤ 1/2,  0 ≤ γ ≤ 1,  and  √γ ≤ 1 − √(2β).    (15)

(We assume that the channels are sampled once, then stay fixed for the entire period of transmission, and are revealed to all the terminals.)

Figs. 2 and 3 provide further insight into the asymptotic analysis for the capacity achieving scheme. In Fig. 2, we show the values of (γ, β) where the secrecy rate is zero. If the eavesdropper increases its antennas at a sufficiently high rate so that the point (γ, β) lies below the solid curve, then the secrecy capacity is zero. The MISOME case corresponds to the vertical intercept of this plot: the secrecy capacity is zero if β ≤ 1/2, i.e., the eavesdropper has at least twice the number of antennas as the sender. The single transmit antenna (SIMOME) case corresponds to the horizontal intercept: in this case the secrecy capacity is zero if γ ≤ 1, i.e., the eavesdropper has more antennas than the receiver.

In Fig. 3, we consider the scenario where a total of T ≫ 1 antennas are divided between the sender and the receiver. The horizontal axis plots the ratio n_r/n_t, while the vertical axis plots the minimum number of antennas at the eavesdropper, normalized by T, for the secrecy capacity to be zero. We note that the optimal allocation of antennas, i.e., the one that maximizes the number of eavesdropper antennas required, happens at n_r/n_t = 1/2 (see the numerical aside below). This can be explicitly obtained from the following minimization:

    minimize  β + γ
    subject to  γ ≥ (1 − √(2β))²,  β ≥ 0,  γ ≥ 0.    (16)

The optimal solution can be easily verified to be (β, γ) = (2/9, 1/9). In this case, the eavesdropper needs 3T antennas for the secrecy capacity to be zero. We remark that the objective function in (16) is not sensitive to variations around the optimal solution. In fact, even if we allocate an equal number of antennas to the sender and the receiver, the eavesdropper needs approximately 2.91T antennas for the secrecy capacity to be zero.

IV. MIMOME SECRECY CAPACITY

In this section we provide our proof of the secrecy capacity of the MIMOME channel, i.e., Theorem 1. Our proof involves two main parts.
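Before developing the two parts of the proof, a brief numerical aside (referenced in Section III-C above): the following sketch checks the antenna-allocation claim by scanning the split of T antennas between sender and receiver. By Corollary 1, in the large-system limit the capacity is zero iff √γ ≤ 1 − √(2β), i.e., iff n_e ≥ (√n_r + √(2 n_t))²; the grid resolution below is an arbitrary choice.

# Sketch checking the antenna-allocation discussion of Section III-C.
# Corollary 1 gives zero capacity iff n_e >= (sqrt(n_r) + sqrt(2*n_t))^2 in the
# large-system limit; we scan the split x = n_r/T and record the required n_e/T.
import numpy as np

xs = np.linspace(1e-3, 1.0 - 1e-3, 100001)                  # x = n_r / T (grid is assumed)
ne_over_T = (np.sqrt(xs) + np.sqrt(2.0 * (1.0 - xs))) ** 2  # required n_e / T

i = np.argmax(ne_over_T)
print("worst-case split: n_r/T = %.3f (n_r/n_t = %.3f), required n_e = %.3f T"
      % (xs[i], xs[i] / (1.0 - xs[i]), ne_over_T[i]))

ne_equal = (np.sqrt(0.5) + np.sqrt(2.0 * 0.5)) ** 2         # equal split n_r = n_t = T/2
print("equal split:      required n_e = %.3f T" % ne_equal)

The scan recovers n_r/n_t = 1/2 with n_e = 3T at the maximizer, and approximately 2.91T for the equal split, matching the discussion around (16).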
First we note that the right hand side in (2) is an upper bound on the secrecy capacity. Then we examine the optimality conditions associated with the saddle point solution to establish (7), which completes the proof since C ≤ R_+(K̄_P, K̄_Φ) = R_−(K̄_P) ≤ C. We begin with an upper bound on the secrecy capacity of the multi-antenna wiretap channel that was established in [15].

Lemma 1 (Upper Bound [15]): An upper bound on the secrecy capacity is given by

    C(P) ≤ R_UB(P) = min_{K_Φ ∈ 𝒦_Φ} max_{K_P ∈ 𝒦_P} R_+(K_P, K_Φ),    (17)

where

    R_+(K_P, K_Φ) ≜ I(x; y_r | y_e)    (18)

is the conditional mutual information expression evaluated with x ~ CN(0, K_P) and [z_r, z_e] ~ CN(0, K_Φ), and the domain sets 𝒦_P and 𝒦_Φ are defined via (3) and (4), respectively.

[Fig. 2. Zero-capacity condition in the (γ, β) plane, with γ = n_r/n_e on the horizontal axis and β = n_t/n_e on the vertical axis; the regions C_s > 0 and C_s = 0 lie above and below the curve, respectively. The capacity is zero for any point below the curve, i.e., the eavesdropper has sufficiently many antennas to get a non-vanishing fraction of the message, even when the sender and receiver fully exploit the knowledge of H_e.]

[Fig. 3. The minimum number of eavesdropping antennas per sender-plus-receiver antenna, n_e/(n_r + n_t), for the secrecy capacity to be zero, plotted as a function of n_r/n_t.]

[Fig. 4. Key steps in the proof of Theorem 1 (flow: K̄_Φ ∈ argmin_{K_Φ} R_+(K̄_P, K_Φ); saddle point (K̄_P, K̄_Φ); K̄_P ∈ argmax_{K_P} R_+(K_P, K̄_Φ) = argmax_{K_P} h(y_r − Θ y_e); Φ̄† H_r S = H_e S; R_+(K̄_P, K̄_Φ) = R_−(K̄_P)). The existence of a saddle point (K̄_P, K̄_Φ) is first established. Thereafter the KKT conditions associated with the minimax expression are used to simplify the saddle value to show that it matches the lower bound.]

It remains to establish that this upper bound expression satisfies (7), which we do in the remainder of this section. We divide the proof into several steps, which are outlined in Fig. 4.

A. Convexity of the upper bound

We first show that the minimax upper bound is a convex-concave problem with a saddle point solution.

Lemma 2 (Existence of a saddle point solution): The function R_+(K_P, K_Φ) in (18) has the following properties:
1) For each fixed K_Φ ∈ 𝒦_Φ, the function R_+(K_P, K_Φ) is concave in the variable K_P ∈ 𝒦_P.
2) For each fixed K_P ∈ 𝒦_P, the function R_+(K_P, K_Φ) is convex in the variable K_Φ ∈ 𝒦_Φ.
3) There exists a saddle point solution to (17), i.e., K̄_P ∈ 𝒦_P and K̄_Φ ∈ 𝒦_Φ, such that

    R_+(K_P, K̄_Φ) ≤ R_+(K̄_P, K̄_Φ) ≤ R_+(K̄_P, K_Φ)    (19)

holds for each K_P ∈ 𝒦_P and each K_Φ ∈ 𝒦_Φ.

Proof: To establish 1) above, with a slight abuse of notation, let R_+(p_x, K_Φ) = I(x; y_r | y_e) denote the conditional mutual information evaluated when the noise random variables are jointly Gaussian with covariance K_Φ and the input distribution is p_x. As before, R_+(Q, K_Φ) denotes the conditional mutual information evaluated when the noise random variables are jointly Gaussian with covariance K_Φ and the input distribution is Gaussian with covariance Q. Let p_x^1 = CN(0, Q_1), p_x^2 = CN(0, Q_2), p_x^θ = θ p_x^1 + (1 − θ) p_x^2 and Q_θ = θ Q_1 + (1 − θ) Q_2 for some θ ∈ [0, 1], and let p_x^G = CN(0, Q_θ). It suffices to show that R_+(Q_θ, K_Φ) ≥ θ R_+(Q_1, K_Φ) + (1 − θ) R_+(Q_2, K_Φ), which we do below:

    R_+(Q_θ, K_Φ) = R_+(p_x^G, K_Φ)
                  ≥ R_+(p_x^θ, K_Φ)    (20)
                  ≥ θ R_+(p_x^1, K_Φ) + (1 − θ) R_+(p_x^2, K_Φ)    (21)
                  = θ R_+(Q_1, K_Φ) + (1 − θ) R_+(Q_2, K_Φ),

where (20) follows from the fact that, as shown in Appendix I, a Gaussian distribution maximizes the function R_+(p_x, K_Φ) among all distributions with a fixed covariance, and (21) from the fact that for each fixed p_{y_r, y_e | x}, the function I(x; y_r | y_e) is a concave function of the input distribution (see, e.g., [7, Appendix I]). To establish 2), we note that for each x ~ CN(0, K_P), the function I(x; y_r, y_e) is convex in the noise covariance

6 K Φ see e.g., [25, Lemma II-3, pg. 3076] for an information theoretic proof. Finally, since the constraint sets and K Φ are convex and compact the existence of a saddle point solution, K Φ as stated in 3 follows from the above properties and the basic theorem in Game theory [26]. B. Saddle Point Properties In this subsection we examine the optimality properties associated with the saddle point solution to establish certain technical conditions which will be used to simplify the saddle value. In the sequel, let, K Φ denote a saddle point solution in 17, and define Φ and Θ via, [ ] Inr Φ K Φ = Φ, 22 I ne Θ = H r KP H e + ΦI + H e KP H e Lemma 3 Properties of saddle-point: The saddle point solution, K Φ to 17 satisfies the following 1 H r ΘH e Φ H r H e = Suppose that S is a full rank square root matrix of, i.e., KP = SS and S has a full column rank. Then provided H r ΘH e 0, the matrix M = H r ΘH e S 25 has a full column rank 3. Proof: The conditions 1 and 2 are established by examining the optimality conditions satisfied by the saddlepoint in 17 i.e., and K Φ arg min K Φ K Φ R +,K Φ 26 argmax R +, K Φ. 27 We first consider the optimality condition in 26 and establish 24. The derivation is most direct when K Φ is nonsingular. The extension to the case when K Φ is singular is provided in Appendix III. The Lagrangian associated with the minimization 26 is L Φ K Φ,Υ = R +,K Φ + trυk Φ, 28 where the dual variable nr ne [ ] n r Υ Υ = 1 0 n e 0 Υ 2 29 is a block diagonal matrix corresponding to the constraint that the noise covariance K Φ must have identity matrices on 3 A matrix M has a full column rank if, for any vector a, Ma = 0 if and only if a = 0. its diagonal. The associated Kuhn-Tucker KKT conditions yield KΦ L Φ K Φ,Υ KΦ = KΦ R +,K Φ 30 KΦ + Υ = 0, where, KΦ R +,K Φ KΦ 31 ] = KΦ [log detk Φ + H t KP H t logdetk Φ KΦ = K Φ + H t KP H t 1 1 K Φ 32 and where we have used H t = [ ] Hr. 33 H e Substituting 32 in 30, and simplifying, we obtain, H t KP H t = K Φ Υ K Φ + H t KP H t, 34 and the relation in 24 follows from 34 through a straightforward computation as shown in Appendix II. To establish 2 above, we use the optimality condition associated with i.e., 27 As in establishing 1, the proof is most direct when K Φ is non-singular. Hence this case is treated first, while the case when K Φ is singular is treated in Appendix VI. arg max R +, K Φ = argmax hy r y e = argmax h y r Θ y e, 35 where Θ = H r H e + ΦH e H e + I 1 is the linear minimum mean squared estimation coefficient of y r given y e. Directly working with the Kuhn-Tucker conditions associated with 35 appears difficult. Nevertheless it turns out that we can replace the objecive function above, with a simpler objective function as described below. First, note that since is an optimum solution to 35, in general arg max h y r Θy e argmax h y r Θ y e 36 holds, since substituting = in the objective function on the left hand side, attains the maximum on the right hand side. Somewhat surprisingly, it turns out that the inequality above is in fact an equality, i.e., the left hand side also attains the maximum when =. This observation is stated formally below, and allows us to replace the objective function in 35 with a simpler objective function on the left hand side in 36. Claim 1: Suppose that K Φ 0 and define H hy r Θy e. 37 Then, arg maxh. 38 The proof involves showing that KP, satisfies the Kuhn- Tucker conditions which we do in Appendix IV.

7 Finally, to establish 2, we note that, arg max H 39 = argmax log deti+j 1 2 Hr ΘH e H r ΘH e J 1 2, where J I + Θ Θ Θ Φ Φ Θ 0 40 is an invertible matrix. We can interpret 40 as stating that is an optimal input covariance for a MIMO channel with white noise and matrix H eff J 1 2H r ΘH e. The fact that H eff S is a full rank matrix, then a consequence of the so called water-filling conditions. The proof is provided in Appendix V. which can be used to establish the second case in 41 as we now do. In particular, we show that R = R +, K Φ R equals zero. Indeed, hy e y r = Ix;y r y e {Ix;y r Ix;y e } = Ix;y e y r = hy e y r hz e z r, = log deti + H e KP H e H e KP H r + Φ H r KP H r + I 1 H r KP H e + Φ = log deti + H e KP H e Φ H r KP H r + I Φ = log deti Φ Φ = hze z r, 47 C. Simplified Saddle Value The conditions in Lemma 3 can be used in turn to establish the tightness of the upper bound in 17. Lemma 4: The saddle value in 17 can be expressed as follows, { 0, H r R UB P = ΘH e = 0, R K 41 P, otherwise, where, R log deti + H r KP H r log deti + H e H e. 42 Proof: The proof is most direct when we assume that the saddle point solution is such that KΦ 0 i.e., when Φ 2 < 1. The extension when K Φ is singular is provided in Appendix VII. First consider the case when H r ΘH e = 0. From 23, it follows that Θ = Φ, using which one can establish the first part in 41: R +, K Φ = Ix;y r y e 43 = hy r y e hz r z e = hy r Θy e hz r Φz e 44 = hz r Θz e hz r Φz e 45 = 0, where 44 follows from the fact that Θ in 23 is the linear minimum mean squared estimation LMMSE coefficient in estimation y r given y e and Φ is the LMMSE coefficient in estimating z r given z e and 45 follows via the relation H r = ΘH e, so that, y r Θy e = z r Θz e. When H r ΘH e 0, combining parts 1 and 2 in Lemma 3, it follows that, Φ H r S = H e S, 46 where we have used the relation 46 in simplifying 47. This establishes the second half of 41. D. Proof of Theorem 1 The proof of Theorem 1 is a direct consequence of Lemma 4. If R +, K Φ = 0, the capacity is zero, otherwise R +, K Φ = R, and the latter expression is an achievable rate as can be seen by setting p u = p x = CN0, in the Csiszár-Körner expression 6. V. CAPACITY ANALYSIS IN THE HIGH SNR REGIME A. High SNR capacity when H e has a column full rank We first prove Theorem 2 when H e has a full column rank. In this case, it is clear that the condition in 10 is satisfied and accordingly we establish Achievability: The achievability part follows by simultaneously diagonalizing the channel matrices H r and H e using the GSVD transform. This reduces the system into a set of parallel independent channels and independent codebooks are used across these channels. More specifically, recall that in the case of interest, the transform is given in 150. Let σ 1 σ 2... σ s be the ordered set of singular values and suppose that σ i > 1 for i ν. We select the following choices for x and u in the Csiszár and Körner expression 6 x = A [ 0nt s u ],u = [0,...,0, u ν, u ν+1,..., u s ], 48 and the random variables u i are sampled i.i.d. according 1 to CN0, αp. Here α = n tσ maxa is selected so that the average power constraint is satisfied. Substituting 48 and 150 into the channel model 1 yields, [ ] 0nt s y r = Ψ r + z D r u r, y e = Ψ e 0 n t s D e u + z e ne n t Since Ψ r and Ψ e are unitary, and D r and D e are diagonal, the system of equations 49 indeed represents a parallel channel model. See Fig. 1 for an illustration of the case.

8 The achievable rate obtained by substituting 49 and 48 into 6, is R = Iu;y r Iu;y e 50 n t = log 1 + αpr2 j 1 + αpe 2 j=ν j = log σj 2 o P 1, 51 j:σ j>1 where o P 1 0 as P. 2 Converse: For the converse we begin with a more convenient upper bound expression to the secrecy capacity 17, R UB = min Φ: Φ 2 1 Θ C nr n t max R ++,Θ,Φ R ++ = log deth eff H eff + I + ΘΘ ΘΦ ΦΘ deti ΦΦ, H eff = H r ΘH e. 52 This expression, as an upper bound, was suggested to us by Y. Eldar and A. Wiesel and was first used in establishing the secrecy capacity of the MISOME channel in [14]. To establish 52, first note that the objective function R +,K Φ in 17 can be upper bounded as follows: R +,K Φ = Ix;y r y e = hy r y e hz r z e = hy r y e log deti ΦΦ = min Θ hy r Θy e log deti ΦΦ = min Θ R ++,Θ,Φ. Thus, we have from 17 that R + P = min K Φ max R +,K Φ 53 = min max min ++,Θ,Φ K Φ Θ 54 min min R ++,Θ,Φ, K Φ Θ 55 as required. To establish the capacity, we show that the upper bound in 52 above, reduces to the capacity expression 11, for a specific choice of Θ and Φ as stated below. Our choice of parameters in the minimization of 52 is as follows Θ = H r H e, where, Φ = Ψ r = diag{δ 1, δ 2,...,δ s }, nt s s ne nt [ n r s s 0 0 ] Ψ e, 56 δ i = min σ i, 1σi, 57 and H e denotes the Moore-Penrose pesudo-inverse of H e c.f Note that with these choice of parameters, H eff = 0. So the maximization over in 52 is not effective. Simplifying 52 with these choice of parameters the upper bound expression reduces to R ++ log deti + D rd 1 e 2 2D r D 1 e deti 2 = log σ j. 2 j:σ j>1 as in 11. B. High SNR capacity when H e is not full column rank When H e is not a full column rank matrix, the capacity result in 12 will now be established. 1 Achievability: To show the achievability, we identify the subspaces S z = NullH e NullH r = span{ψ k p+1,..., ψ k } S s =NullH e NullH r =span{ψ k p s+1,..., ψ k p }. 58 We will use most of the power for transmission in the subspace S z and a small fraction of power for transmissoin in the subspace S s. More specifically, by selecting, we have, x = Ψ t 0 k p s Ω 2 u v 0 nt k, 59 0 nr p s y r = Ψ r D r u + z r, T 32 Ω 1 2 u + Ω 1 3 v y e = Ψ e 0 k p s D e u + z e. 0 ne+p k 60 In 59, we select v = [v 1, v 2,...,v p ] T to be a vector of i.i.d. Gaussian random variables with a distribution CN 0, P P p and u = [0,...,0, u ν,...,u s ] T to be a vector of independent Gausian random variables. Here ν is the smallest integer such that σ j > 1 for all j ν and σ j 1 otherwise. Each u j CN0, α 1 P, where α = n, tσ maxω 2 is chosen to meet the power constraint. An achievable rate for this choice of parameters is R = Iu,v;y r Iu,v;y e 61 = Iu;y r Iu;y e + Iv;y r u, 62 where the last step follows from the fact that v is independent of y e,u c.f. 60. Following 51, we have that Iu;y r Iu;y e = log σj 2 o P1 63 and Iv;y r u = log det = log det = log det j:σ j>1 I + P P p I + P p Ω 1 3 Ω 3 I + P p H rh e H r Ω 1 3 Ω 3 64 o P 1 65 o P 1, 66

9 where 65 follows from the fact that log1+x is a continuous function of x and log deti +X = log1 + λ i X and the last step follows from Converse: To establish the converse, we use the following choices for Θ and Φ in 52. k s p s n e+p k Θ = Ψ r and Φ = Ψ r n r s p 0 s D r D 1 e p F 31 F 32 0 k s p s n e+p k n r s p 0 s p 0 where is defined in 57, and the matrices are selected such that H r ΘH e Ψ e, 67 Ψ e 68 F 32 = T 32 Ω 2 D 1 e F 31 = T 31 F 32 D e T 21 Ω 1 69 = Ψ r [Σ r Ω 1,0 nr n t k] Ψ r ΘΨ e[σ e Ω 1,0 ne n t k]ψ t 70 k p s s p n t k n r s p 0 = Ψ r s 0 Ψ p Ω 1 t The upper bound expression 52 can now be simplified as follows. H eff H eff = H r ΘH e H r ΘH e n r p s s p n r p s 0 = Ψ r s 0 Ψ r, p Ω 1 3 QΩ 3 where Q is related to by, Ψ t Ψ t = k p s s p n t k k p s s p Q n t k and satisfies trq P. From 72, 68 and 67, we have that the numerator in the upper bound expression 52 simplifies as in 74. Using 74 and the Hardamard inequality, we have log deti + H eff ΘH eff + ΘΘ ΘΦ ΦΘ log deti + D r D 1 e 2 2D r D 1 e + log deti + F 31 F 31 + F 32F 32 + Ω 1 3 QΩ 3 75 Substituting this relation in 52, the upper bound reduces to, R + P log deti + D rd 1 e 2 2D r D 1 e deti 2 + max log deti + F 31 F 31 + F 32F 32 + Q 0: Ω 1 3 QΩ 3 trq P 76 Substituting for D r and D e from 139 and for from 57, we have that log deti + D rd 1 e 2 2D r D 1 e deti 2 = log σj 2. j:σ j>1 77 It remains to establish that max log deti + F 31 F 31 + F 32F 32 + Q 0: Ω 1 3 QΩ 3 trq P log det I + P p H rh e H r + o P 1, 78 which we now do. Let γ = σ max F 31 F 31 + F 32F 32, 79 denote the largest singular value of the matrix F 31 F 31 + F 32 F 32. Since log-det is increasing on the cone of positive semidefinite matrices, we have that, max log deti + F 31 F 31 + F 32F 32 + Q 0: Ω 1 3 QΩ 3 trq P max Q 0: trq P = log det = log det = log det log det1 + γi + Ω 1 3 QΩ γi + P p Ω 1 3 Ω 3 I + P p Ω 1 3 Ω 3 I + P p H rh e H r + o P o P 1 + o P 1 82 where 80 follows from the fact that F 31 F 31 +F 32F 32 γi, and 81 follows from the fact that water-filling provides a vanishingly small gain over flat power allocation when the channel matrix has a full rank see e.g., [27] and 82 follows via 149. C. Analysis of synthetic noise transmission scheme We first show, via straightforward computation, that this choice of parameters, results in a rate of R SN P = log det I + ε t Λ 2 + log deth r ε t I + H e H e 1 H r. 83

10 I + H eff ΘH eff + ΘΘ ΘΦ ΦΘ n r s p I = Ψ r s p n r s p s p I + D r D 1 e 2 2D r D 1 e D r D 1 e F 32 F 32 D r D 1 e I + F 31 F 31 + F 32F 32 + Ω 1 3 QΩ 3 Ψ r 74 where ε t = 1 P t. First note that Iu;y e = log deti + P t H r H r = log deti + P t Λ 2 84 In the following, let V n = [v nr+1,...,v nt ] denote the vectors in the null space of H r. Iu;y e = hy e hy e u = log deti + P t H e H e log deti + P t H e V n V nh e = log deti + P t H e H e log deti + P th e I VV H e = log deti + P t H rh r log deti + P t I VV H eh e = logdeti P t I + P t H e H e 1 VV H e H e = logdeti P t V H e H ei + P t H e H e 1 V = logdetv I + P t H eh e 1 V Where we have repeatedly used the fact that deti+ab = deti + BA for any two matrices A and B of compatible dimensions. R SN P = log deti + P t Λ 2 + log detv I + P t H e H e 1 V. Since U and Λ are square and invertible, R SN P = log deti + ε t Λ 2 + log detuλv ε t I + H eh e 1 VΛU = log deti + ε t Λ 2 + log deth r ε t I + H e H e 1 H r, as required. To establish 14, we use the following facts Fact 1 Taylor Series Expansion [28]: Let M be an invertible matrix. Then εi + M 1 = M 1 + Oε, 85 where Oε represents a function that goes to zero as ε 0. Fact 2: Suppose that H r and H e be the channel matrices as in 1, and suppose that rankh r = n r and rankh e = n t and n r n t n e. Let σ 1, σ 2..., σ s denote the generalized singular values of H r,h e c.f Then det H r H eh e 1 H s r = σj 2 86 j=1 Finally, to establish 14, we take the limit ε t 0 in 83 R SN P = log det I + ε t Λ 2 + log deth r ε t I + H eh e 1 H r = log deth r H e H e 1 + Oε t H r + Oε t 87 = log deth r H eh e 1 H r = + log deti + H e H e 1/2 OεH e H e /2 s log σj 2 + Oε t. 88 j=1 where we use Facts 1 and 2 above in 87 and 88 above and the fact that log deti + X = j log1 + λ jx is continuous in the entries of X. VI. ZERO-CAPACITY CONDITION AND SCALING LAWS First we develop a general condition under which the secrecy capacity is zero. Lemma 5: The secrecy capacity of the MIMOME channel is zero if and only if H r v σ max H r,h e sup v C n t H e v Proof: When NullH r NullH e {}, clearly, σ max H r,h e =. Otherwise, it is known see e.g., [29] that σ max is the largest generalized singular value of H r,h e as defined in 140. To establish that the capacity is zero, whenever σ max H r,h e 1, it suffices to consider the high SNR secrecy capacity in 11 in Theorem 2, which is clearly zero whenever σ max 1. If σ max > 1, let v there exists a vector v such that H r v > H e v. Select x = u CN0, Pvv in 6. Clearly CP R P > 0 for all P > 0. By combining Lemma 5 and Fact 3 below which is established in [30, Pg. 642], one can deduce the following condition for the zero-capacity condition in Corollary 1. Fact 3 [30] [31]: Suppose that H r and H e have i.i.d. CN0, 1 entries. Let n r, n e, n t, while keeping n r /n e = γ and n t /n e = β fixed. If β < 1, then the largest generalized singular value of H r,h e converges almost surely to 2 σ max H r,h e a.s β 1 β γ γ 1 β. The proof follows by direct substitution of the GSVD expansion 136 and will be omitted. 90
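As a numerical companion to Lemma 5 and Corollary 1, the sketch below estimates the largest generalized singular value σ_max(H_r, H_e) for i.i.d. CN(0, 1) channels and compares the empirical zero-capacity condition σ_max ≤ 1 with the asymptotic region of Corollary 1. It is a minimal sketch that computes σ_max as the square root of the largest generalized eigenvalue of (H_r†H_r, H_e†H_e), which assumes n_e ≥ n_t so that H_e†H_e is invertible (otherwise σ_max = ∞ and the capacity is positive); the dimensions and test points are arbitrary choices.

# Sketch: empirical check of the zero-capacity condition (Lemma 5 / Corollary 1).
# sigma_max(H_r, H_e) = sup_v ||H_r v|| / ||H_e v||; the capacity is zero iff
# sigma_max <= 1.  Here sigma_max^2 is the largest generalized eigenvalue of
# (H_r^H H_r, H_e^H H_e), which presumes n_e >= n_t.  Sizes are assumptions.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)

def cn(m, n):
    return (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)

def sigma_max(H_r, H_e):
    A = H_r.conj().T @ H_r
    B = H_e.conj().T @ H_e                       # invertible a.s. when n_e >= n_t
    return np.sqrt(eigh(A, B, eigvals_only=True)[-1])

n_e = 400
for beta, gamma in [(0.20, 0.10), (0.20, 0.30), (0.40, 0.05), (0.45, 0.20)]:
    n_t, n_r = int(beta * n_e), int(gamma * n_e)
    s = sigma_max(cn(n_r, n_t), cn(n_e, n_t))
    zero_region = np.sqrt(gamma) <= 1.0 - np.sqrt(2.0 * beta)   # Corollary 1
    print("beta=%.2f gamma=%.2f: sigma_max=%.3f, Corollary 1 zero-capacity region: %s"
          % (beta, gamma, s, zero_region))

For points inside the region of Corollary 1 the empirical σ_max falls below one (so the capacity is zero by Lemma 5), and outside the region it exceeds one, up to finite-size fluctuations.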

11 VII. CONCLUSION We establish the secrecy capacity of the MIMOME channel as a saddle point solution to a minimax problem. Our capacity result establishes that a Gaussian input maximizes the secrecy capacity expression by Csiszár and Körner for the MIMOME channel. Our proof uses upper bounding ideas from the MIMO broadcast channel literature and the analysis of optimality conditions provides insight into the structure of the optimal solution. Next, we develop an explicit expression for the secrecy capacity in the high SNR regime in terms of the generalized singular value decomposition GSVD and show that in this case, an optimal scheme involves simultaneous diagonalization of the channel matrices to create a set of independent parallel channel and using independent codebooks across these channels. We also study a synthetic noise transmission scheme that is semi-blind as it selects the transmit directions based on the legitimate receiver s channel only and compare its performance with the capacity achieving scheme. Finally, we study the conditions under which the secrecy capacity is zero and study its scaling laws in the limit of many antennas. ACKNOWLEDGEMENT We thank Ami Wiesel for interesting discussions and help with numerical optimization of the saddle point expression in Theorem 1. APPENDIX I OPTIMALITY OF GAUSSIAN INPUTS We show that a Gaussian input maximizes the conditional mutual information term Ix;y r y e when the noise distribution [z r,z e] CN0,K Φ. Recall that K Φ has the form, [ ] Inr Φ K Φ = 91 Φ and K Φ 0 if and only if Φ 2 < 1. In this case we show that among all distributions p x with a covariance of, a Gaussian distribution maximizes Ix;y r y e. Note that I ne Ix;y r y e = hy r y e hz r z e 92 where = hy r y e log2πe nr deti nr ΦΦ log detλ log deti nr ΦΦ, 93 Λ I + H r H r Φ + H r H e I + H e H e 1 Φ + H e H r 94 is the linear minimum mean squared error in estimating y r given y e and the last inequality is satisfied with equality if p x = CN0,. When K Φ is singular, the expansion 92 is not well defined. Nevertheless, we can circumvent this step by defining an appropriately reduced channel. In particular, let Φ = [ U 1 U 2 ] [ I 0 0 ] [ V 1 V 2 ] 95 be the singular value decomposition of Φ, where σ max < 1 then we have the following Claim 2: Suppose that the singular value decomposition of Φ is given as in 95 and that for the input distribution p x, we have that Ix;y r y e <, then, U 1 z r a.s. = V 1 z e 96a Ix;y r y e = Ix;U 2 y r y e 96b The optimality of Gaussian inputs now follows since the term Ix;U 2 y r y e can be expanded in the same manner as The proof of Claim 2 is provided below. Proof: To establish 96a, we simply note that E[U 1 z rz ev 1 ] = U 1 ΦV 1 = I, i.e., the Gaussian random variables U 1 z r and V 1 z e are perfectly correlated. Next note that R +, K Φ = Ix;y r y e = Ix;U 1 y r,u 2 y r y e 97 = Ix;U 2 y r,u 1 y r V 1 y e y e = Ix;U 2 y r,u 1 H rx V 1 H ex y e. 98 Since by hypothesis, Ix;y r y e <, we have that U 1 H r V 1 H ex = 0, and Ix;y r y e = Ix;U 2 y r y e, establishing 96b. Finally if p x is such that Ix;y r y e =, then from 98, U 1 H r V 1 H e U 1 H r V 1 H e 0 and hence the choice of a Gaussian p x = CN0, also results in Ix;y r y e =. 
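As a numerical check of the expressions above, note that for a Gaussian input x ~ CN(0, K_P) and jointly Gaussian noise with covariance K_Φ, the quantity R_+(K_P, K_Φ) = I(x; y_r | y_e) evaluates through the LMMSE error covariance Λ of (94) as log det Λ − log det(I − ΦΦ†), c.f. (93). The following minimal sketch evaluates R_+ this way and verifies the genie inequality R_+(K_P, K_Φ) ≥ R_−(K_P) used in the main text; the matrices, power, and noise correlation are arbitrary test values, with Φ scaled so that K_Φ ≻ 0.

# Sketch: evaluate R_+(K_P, K_Phi) = I(x; y_r | y_e) via (93)-(94) and check
# the genie inequality R_+(K_P, K_Phi) >= R_-(K_P).  All numbers below are
# arbitrary test values; Phi is rescaled so that sigma_max(Phi) < 1.
import numpy as np

rng = np.random.default_rng(3)
n_t, n_r, n_e, P = 3, 2, 2, 5.0

def cn(m, n):
    return (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)

def logdet2(A):
    return np.linalg.slogdet(A)[1] / np.log(2.0)

H_r, H_e = cn(n_r, n_t), cn(n_e, n_t)
K_P = (P / n_t) * np.eye(n_t)                     # a feasible input covariance

Phi = cn(n_r, n_e)
Phi = 0.5 * Phi / np.linalg.norm(Phi, 2)          # force sigma_max(Phi) = 0.5 < 1

S_rr = np.eye(n_r) + H_r @ K_P @ H_r.conj().T     # cov(y_r)
S_ee = np.eye(n_e) + H_e @ K_P @ H_e.conj().T     # cov(y_e)
S_re = Phi + H_r @ K_P @ H_e.conj().T             # cross-covariance of y_r and y_e
Lam = S_rr - S_re @ np.linalg.solve(S_ee, S_re.conj().T)   # LMMSE error covariance, (94)

R_plus = logdet2(Lam) - logdet2(np.eye(n_r) - Phi @ Phi.conj().T)   # (93)
R_minus = logdet2(S_rr) - logdet2(S_ee)
print("R_+ = %.4f bits, R_- = %.4f bits, R_+ >= R_-: %s"
      % (R_plus, R_minus, bool(R_plus >= R_minus)))

The inequality holds for every feasible Φ because I(x; y_r | y_e) = I(x; y_r, y_e) − I(x; y_e) ≥ I(x; y_r) − I(x; y_e); equality is attained at the saddle point noise covariance, which is how the upper and lower bounds meet in Lemma 4.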
APPENDIX II MATRIX SIMPLIFICATIONS FOR ESTABLISHING 24 FROM 34 Substituting for K Φ and H t in 34 and carrying out the block matrix multiplication gives H r KP H r = Υ 1I + H r KP H r + ΦΥ 2 Φ + H e KP H r H r KP H e = Υ 1 Φ + H r KP H e + ΦΥ 2 I + H e KP H e H e KP H r = Φ Υ 1 I + H r KP H r + Υ 2 Φ + H e KP H r H e KP H e = Φ Υ 1 Φ + H r KP H e + Υ 2I + H e KP H e. 99 Eliminating Υ 1 from the first and third equation above, we have Φ H r H e H r = Φ Φ IΥ2 Φ + H e KP H r. 100 Similarly eliminating Υ 1 from the second and fourth equations in 99 we have Φ H r H e H e = Φ Φ IΥ2 I+H e KP H e. 101 Finally, eliminating Υ 2 from 100 and 101 we obtain Φ H r H e H r = Φ H r H e H e I + H e H e 1 Φ + H e KP H r = Φ H r H e H e Θ 102 which reduces to 24.

12 APPENDIX III DERIVATION OF 24 WHEN THE NOISE COVARIANCE IS SINGULAR Consider the compact singular value decomposition of K Φ : K Φ = W ΩW, 103 where W is a matrix with orthogonal columns, i.e., W W = I and Ω is a non-singular matrix. We first note that it must also be the case that H t = WG, 104 i.e., the column space of H t is a subspace of the column space of W. If this were not the case then clearly Ix;y r,y e = whenever the covariance matrix has a component in the null space of W which implies that, max R +, K Φ =. 105 Since, K Φ is a saddle point, we must have that R +, K Φ R +,I <, and hence 104 must hold. Also note that since R +, K Φ = log deti + H e KP H e + log detg G + Ω detω 106 it follows that Ω in 103 is a solution to the following minimization problem, min Ω K Ω R Ω Ω, R Ω Ω = log detg G + Ω, detω { [ ] } Inr Φ K Ω = Ω WΩW = Φ 0. I ne The Kuhn-Tucker conditions for 107 yield, Ω 1 G G + Ω 1 = W ΥW, G G = ΩW ΥW Ω + G G where Υ has the block diagonal form in 29. Multiplying the left and right and side of 108 with W and W respectively and using 103 and 104 we have that H t KP H t = K Φ Υ K Φ + H t KP H t, 109 establishing 34. Finally note that the derivation in Appendix II does not require the non-singularity assumption on K Φ. APPENDIX IV PROOF OF CLAIM 1 To establish 38 note that since H is a concave function in and differentiable over, the optimality conditions associated with the Lagrangian L Θ, λ,ψ = H + trψ λtr P, 110 are both necessary and sufficient. Thus is an optimal solution to 38 if and only if there exists a λ 0 and Ψ 0 such that H r ΘH e [Γ ] 1 H r ΘH e + Ψ = λi, trψ = 0, λtr P = 0, where Γ is defined via Γ I + Θ Θ Θ Φ Φ Θ H r ΘH e H r ΘH e. 112 To obtain these parameters note that since, K Φ constitutes a saddle point solution, argmax R +, K Φ. 113 Since R +, K Φ is differentiable at each whenever K Φ 0, KP satisfies the associated KKT conditions there exists a λ 0 0 and Ψ 0 0 such that KP R, K Φ +Ψ 0 = λ 0 I KP 114 λ 0 tr P = 0, trψ 0 KP = 0. As we show below, KP R, K Φ =H r ΘH e [Λ ] 1 H r ΘH e, KP where Λ I + H r H r 115 Φ + H r H ei + H e H e 1 Φ + H e H r 116 Λ, satisfies 4 Λ = Γ. Hence the first condition in 114 reduces to H r ΘH e [Γ ] 1 H r ΘH e + Ψ 0 = λ 0 I. 117 Comparing 114 and 117 with 111, we note that, λ 0,Ψ 0 satisfy the conditions in 111, thus establishing 38. It thus remains to establish 115, which we do below. KP R +, K Φ = H th t H t + K Φ 1 H t H ei + H e H e 1 H e. 118 Substituting for H t and K Φ from 33 and 22, K Φ + H t KP H t 1 [ I + Hr KP H r Φ + H r KP H ] 1 e = Φ + H r KP H e I + H e KP H e [ ΛKP 1 Λ 1 ] Θ = Θ Λ 1 I+H e KP H e 1 + Θ Λ 1, Θ 4 To verify this relation, note that Γ is the variance of y r Θy e. When =, note that Θye is the MMSE estimate of y r given y e and Γ is the associated MMSE estimation error.

13 where we have used the matrix inversion lemma e.g., [28], and Λ is defined in 94, and Θ is as defined in 23. Substituting into 118 and simplifying gives KP R +, K Φ KP where the F is of the form [ ν nt ν ν F 0 F 1 F = n t ν F 1 F 2 ]. 128 =H t K Φ + H t KP H t 1 H t H ei + H e KP H e 1 H e as required. = H r ΘH e [Λ ] 1 H r ΘH e APPENDIX V FULL RANK CONDITION FOR OPTIMAL SOLUTION Claim 3: Suppose that KΦ 0 and ˆ be any optimal solution to ˆ arg maxlog deti+j 1 2 Hr ΘH e H r ΘH e J for some J 0 and Θ is defined in 23. Suppose that S P is a matrix with a full column rank such that ˆ = S P S P 120 then H r ΘH e S P has a full column rank. Define H eff J 1 2 Hr ΘH e. It suffices to prove that H eff S P has a full column rank, which we now do. Let rankh eff = ν and let H eff = AΣB 121 be the singular value decomposition of H eff where A and B are unitary matrices, and We now note that ˆF 1 = 0 and ˆF 2 = 0. Indeed if ˆF 2 0, then trˆf 2 > 0. This contradicts the optimality claim in 127, since the objective function only depends on ˆF 0 and one can strictly increase the objective function by increasing the trace of ˆF 0. Finally since ˆF 0 and ˆF 2 = 0, it follows that ˆF 1 = 0. APPENDIX VI FULL RANK CONDITION WHEN K Φ IS SINGULAR In this section we establish 2 in Lemma 3 when K Φ is singular. We map this case to another channel when the saddle point noise covariance is non-singular and apply the results for non-singular noise covariance. When K Φ is singular, we have that Φ has d 1 singular values equal to unity and hence we express its SVD in 95, where σ max < 1. Following Claim 2 in Appendix I we have that U 1 z r a.s. = V 1 z e 129a U 1 H r = V 1 H e, 129b R +, K Φ = Ix;U 2 y r y e,. 129c Thus with Ĥr = U 2 H r, and ẑ r = U 2 z r and Σ = [ ν nt ν ν Σ 0 0 n r ν 0 0 Note that it suffices to show that the matrix has the form Since, ˆF = ]. 122 ˆF B ˆKP B 123 [ ν nt ν ν F 0 0 n t ν 0 0 ˆ arg max log deti + H eff H eff ]. 124 = argmax log deti + AΣB BΣ A = argmax log deti + ΣB BΣ, 125 and if and only if B B, observe that, ˆF arg maxlog deti + ΣFΣ 126 = arg maxlog deti + Σ 0 F 0 Σ 0, 127 we have from 129c, that ŷ r = U 2 y r = Ĥrx + ẑ r, 130 argmaxix;ŷ r y e. 131 Since the associated cross-covariance matrix ˆΦ = E[ẑ r z e ] has all its singular values strictly less than unity, it follows from Claim 1 that where argmax Ĥ 132 Ĥ = hŷ r ˆΘy e, ˆΘ = U 2 H r H e + ΦI + H e KP H e 1. Following the proof of Claim 3 in Appendix V we then have that Ĥr ˆΘH e S = U 2 H r ΘH e S has a full column rank. This in turn implies that H r ΘH e S has a full column rank.

14 APPENDIX VII PROOF OF LEMMA 4 WHEN K Φ IS SINGULAR When K Φ is singular, we assume that the singular value decomposition of Φ is given in 95. First let us consider the case that H r = ΘH e and show that R +, K Φ = 0. Indeed following claim 2 in Appendix I we have that R +, K Φ = Ix;U 2 y r y e and expanding this expression in the same manner as 43-45, we establish the desired result. When H r ΘH e 0, we show that the difference between the upper and lower bounds is zero. R = R +, K Φ R = Ix;y e y r = Ix;V 2 y e y r, 133 where the last step follows from the fact that U 1 z a.s. r = V 1 z e and U 1 H r = V 1 H e c.f. 129a, 129b. Next, note that, hv 2 y e y r = log deti + V 2 H e H e V 2 V 2 H e H r + U 2 I + H r KP H r 1 H r KP H ev 2 + U 2 = log deti + U 2 H r H r U 2 U 2 I + H r H ru 2 = log deti 134 = hv 2 z e U 2 z r = hv 2 z e z r, 135 where we have used c.f. 46 that V 2 Φ H r S = V 2 H es U 2 H rs = V 2 H es, in simplifying 134 and the equality in 135 follows from the fact that U 1 z r is independent of U 2 z r,v 2 z e. APPENDIX VIII GSVD TRANSFORM We begin with a definition of the generalized singular value decomposition [32], [33]. Definition 1 GSVD Transform: Given two matrices H r C nr nt and H e C ne nt, there exist unitary matrices Ψ r C nr nr, Ψ e C ne ne and Ψ t C nt nt, a non-singular, lower triangular matrix Ω C k k, and two matrices Σ r R nr k and Σ e R ne k, such that Ψ r H rψ t = Σ r [ Ω 1,0 k nt k], 136a Ψ eh e Ψ t = Σ e [ Ω 1,0 k nt k], 136b where the matrices Σ r and Σ e have the following structure, Σ r = Σ e = k p s s p n r p s 0 s D r p I k p s s p k p s s D e n e+p k 0 I, 137a, 137b and the constants k = rank [ Hr H e ], p = dim NullH e NullH r, 138 and s depend on the matrices H r and H e. The matrices D r = diag{r 1,...,r s }, D e = diag{e 1,...,e s }, 139 are diagonal matrices with strictly positive entries, and the generalized singular values are given by σ i = r i e i, i = 1, 2,...,s. 140 We provide a few properties of the GSVD-transform that are used in the sequel. 1 The GSVD transform provides a characterization of the null space of H e. Let Ψ t = [ψ 1,..., ψ nt ], 141 where Ψ t is defined via 136. Then S n = NullH e NullH r = span{ψ k+1,...,ψ nt } 142a S z = NullH e NullH r = span{ψ k p+1,...,ψ k } 142b Indeed, it can be readily verified from 136 that H r ψ j = H e ψ j = 0, j = k + 1,...,n t, 143 which establishes 142a. To establish 142b, we will show that for each j such that k p + 1 j k, H e ψ j = 0 and {H r ψ j } are linearly independent. It suffices to show that the last p columns of Σ r Ω 1 are linearly independent and the last p columns of Σ e Ω 1 are zero. Note that since Ω 1 in 136 is a lower triangular matrix, we can express it as Ω 1 = k p s k p s s p Ω 1 1 s T 21 Ω 1 2 p T 31 T 32 Ω By direct block multiplication with 137a and 137b, we have, Σ r Ω 1 = Σ e Ω 1 = k s p s p n r s p 0 s D r T 21 D r Ω 1 2 p T 31 T 32 Ω 1 k p s k s p s p Ω 1 s D e T 21 D e Ω 1 2 n e+p k a 145b Since Ω 3 is invertible, the last p columns of Σ r Ω 1 are linearly independent and clearly the last p columns of Σ e Ω 1 are zero establishing 142b. Furthermore, NullH e = span{ψ k p+1,..., ψ nt }. 146
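When H_e has full column rank, the generalized singular values defined above can be computed without an explicit GSVD routine: σ_i² are then the generalized eigenvalues of (H_r†H_r, H_e†H_e). The following sketch uses this route (an assumption that excludes the rank-deficient case p > 0 of Definition 1) to evaluate the high-SNR capacity limit (11) of Theorem 2 and, for comparison, the high-SNR synthetic-noise rate (14); the channel sizes are illustrative.

# Sketch: generalized singular values of (H_r, H_e) and the high-SNR limits in
# (11) (capacity) and (14) (synthetic-noise rate).  Assumes n_e >= n_t so that
# H_e has full column rank; sigma_i^2 are then the generalized eigenvalues of
# (H_r^H H_r, H_e^H H_e).  The sizes below are illustrative assumptions.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
n_t, n_r, n_e = 3, 3, 4

def cn(m, n):
    return (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)

H_r, H_e = cn(n_r, n_t), cn(n_e, n_t)

sigma_sq = eigh(H_r.conj().T @ H_r, H_e.conj().T @ H_e, eigvals_only=True)
sigma_sq = np.maximum(sigma_sq, 0.0)               # guard tiny negative round-off

cap_high_snr = np.sum(np.log2(sigma_sq[sigma_sq > 1.0]))          # eq. (11)
r_sn_high_snr = np.sum(np.log2(sigma_sq[sigma_sq > 0.0]))         # eq. (14)

print("generalized singular values:", np.round(np.sqrt(sigma_sq), 3))
print("high-SNR capacity, eq. (11):        %.3f bits" % cap_high_snr)
print("high-SNR synthetic-noise rate (14): %.3f bits" % r_sn_high_snr)

Since (14) sums over all the generalized singular values while (11) keeps only those exceeding unity, the second printed value never exceeds the first, illustrating the sub-optimality of the synthetic noise scheme at high SNR.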


Tight Lower Bounds on the Ergodic Capacity of Rayleigh Fading MIMO Channels

Tight Lower Bounds on the Ergodic Capacity of Rayleigh Fading MIMO Channels Tight Lower Bounds on the Ergodic Capacity of Rayleigh Fading MIMO Channels Özgür Oyman ), Rohit U. Nabar ), Helmut Bölcskei 2), and Arogyaswami J. Paulraj ) ) Information Systems Laboratory, Stanford

More information

Transmitter optimization for distributed Gaussian MIMO channels

Transmitter optimization for distributed Gaussian MIMO channels Transmitter optimization for distributed Gaussian MIMO channels Hon-Fah Chong Electrical & Computer Eng Dept National University of Singapore Email: chonghonfah@ieeeorg Mehul Motani Electrical & Computer

More information

Dirty Paper Coding vs. TDMA for MIMO Broadcast Channels

Dirty Paper Coding vs. TDMA for MIMO Broadcast Channels TO APPEAR IEEE INTERNATIONAL CONFERENCE ON COUNICATIONS, JUNE 004 1 Dirty Paper Coding vs. TDA for IO Broadcast Channels Nihar Jindal & Andrea Goldsmith Dept. of Electrical Engineering, Stanford University

More information

Optimal Power Control in Decentralized Gaussian Multiple Access Channels

Optimal Power Control in Decentralized Gaussian Multiple Access Channels 1 Optimal Power Control in Decentralized Gaussian Multiple Access Channels Kamal Singh Department of Electrical Engineering Indian Institute of Technology Bombay. arxiv:1711.08272v1 [eess.sp] 21 Nov 2017

More information

Signaling Design of Two-Way MIMO Full-Duplex Channel: Optimality Under Imperfect Transmit Front-End Chain

Signaling Design of Two-Way MIMO Full-Duplex Channel: Optimality Under Imperfect Transmit Front-End Chain DRAFT 1 Signaling Design of Two-Way MIMO Full-Duplex Channel: Optimality Under Imperfect Transmit Front-End Chain Shuqiao Jia and Behnaam Aazhang, arxiv:1506.00330v1 [cs.it] 1 Jun 2015 Abstract We derive

More information

Under sum power constraint, the capacity of MIMO channels

Under sum power constraint, the capacity of MIMO channels IEEE TRANSACTIONS ON COMMUNICATIONS, VOL 6, NO 9, SEPTEMBER 22 242 Iterative Mode-Dropping for the Sum Capacity of MIMO-MAC with Per-Antenna Power Constraint Yang Zhu and Mai Vu Abstract We propose an

More information

Parallel Additive Gaussian Channels

Parallel Additive Gaussian Channels Parallel Additive Gaussian Channels Let us assume that we have N parallel one-dimensional channels disturbed by noise sources with variances σ 2,,σ 2 N. N 0,σ 2 x x N N 0,σ 2 N y y N Energy Constraint:

More information

Fading Wiretap Channel with No CSI Anywhere

Fading Wiretap Channel with No CSI Anywhere Fading Wiretap Channel with No CSI Anywhere Pritam Mukherjee Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 7 pritamm@umd.edu ulukus@umd.edu Abstract

More information

Nearest Neighbor Decoding in MIMO Block-Fading Channels With Imperfect CSIR

Nearest Neighbor Decoding in MIMO Block-Fading Channels With Imperfect CSIR IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 58, NO. 3, MARCH 2012 1483 Nearest Neighbor Decoding in MIMO Block-Fading Channels With Imperfect CSIR A. Taufiq Asyhari, Student Member, IEEE, Albert Guillén

More information

Wideband Fading Channel Capacity with Training and Partial Feedback

Wideband Fading Channel Capacity with Training and Partial Feedback Wideband Fading Channel Capacity with Training and Partial Feedback Manish Agarwal, Michael L. Honig ECE Department, Northwestern University 145 Sheridan Road, Evanston, IL 6008 USA {m-agarwal,mh}@northwestern.edu

More information

Simultaneous SDR Optimality via a Joint Matrix Decomp.

Simultaneous SDR Optimality via a Joint Matrix Decomp. Simultaneous SDR Optimality via a Joint Matrix Decomposition Joint work with: Yuval Kochman, MIT Uri Erez, Tel Aviv Uni. May 26, 2011 Model: Source Multicasting over MIMO Channels z 1 H 1 y 1 Rx1 ŝ 1 s

More information

IN this paper, we show that the scalar Gaussian multiple-access

IN this paper, we show that the scalar Gaussian multiple-access 768 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 50, NO. 5, MAY 2004 On the Duality of Gaussian Multiple-Access and Broadcast Channels Nihar Jindal, Student Member, IEEE, Sriram Vishwanath, and Andrea

More information

Optimal Transmit Strategies in MIMO Ricean Channels with MMSE Receiver

Optimal Transmit Strategies in MIMO Ricean Channels with MMSE Receiver Optimal Transmit Strategies in MIMO Ricean Channels with MMSE Receiver E. A. Jorswieck 1, A. Sezgin 1, H. Boche 1 and E. Costa 2 1 Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institut 2

More information

Lecture 2. Capacity of the Gaussian channel

Lecture 2. Capacity of the Gaussian channel Spring, 207 5237S, Wireless Communications II 2. Lecture 2 Capacity of the Gaussian channel Review on basic concepts in inf. theory ( Cover&Thomas: Elements of Inf. Theory, Tse&Viswanath: Appendix B) AWGN

More information

MIMO Wiretap Channel with ISI Heterogeneity Achieving Secure DoF with no CSI

MIMO Wiretap Channel with ISI Heterogeneity Achieving Secure DoF with no CSI MIMO Wiretap Channel with ISI Heterogeneity Achieving Secure DoF with no CSI Jean de Dieu Mutangana Deepak Kumar Ravi Tandon Department of Electrical and Computer Engineering University of Arizona, Tucson,

More information

Lecture 4 Noisy Channel Coding

Lecture 4 Noisy Channel Coding Lecture 4 Noisy Channel Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 9, 2015 1 / 56 I-Hsiang Wang IT Lecture 4 The Channel Coding Problem

More information

Capacity of Block Rayleigh Fading Channels Without CSI

Capacity of Block Rayleigh Fading Channels Without CSI Capacity of Block Rayleigh Fading Channels Without CSI Mainak Chowdhury and Andrea Goldsmith, Fellow, IEEE Department of Electrical Engineering, Stanford University, USA Email: mainakch@stanford.edu, andrea@wsl.stanford.edu

More information

On Gaussian MIMO Broadcast Channels with Common and Private Messages

On Gaussian MIMO Broadcast Channels with Common and Private Messages On Gaussian MIMO Broadcast Channels with Common and Private Messages Ersen Ekrem Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 20742 ersen@umd.edu

More information

Energy State Amplification in an Energy Harvesting Communication System

Energy State Amplification in an Energy Harvesting Communication System Energy State Amplification in an Energy Harvesting Communication System Omur Ozel Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland College Park, MD 20742 omur@umd.edu

More information

Capacity Region of the Two-Way Multi-Antenna Relay Channel with Analog Tx-Rx Beamforming

Capacity Region of the Two-Way Multi-Antenna Relay Channel with Analog Tx-Rx Beamforming Capacity Region of the Two-Way Multi-Antenna Relay Channel with Analog Tx-Rx Beamforming Authors: Christian Lameiro, Alfredo Nazábal, Fouad Gholam, Javier Vía and Ignacio Santamaría University of Cantabria,

More information

Shannon meets Wiener II: On MMSE estimation in successive decoding schemes

Shannon meets Wiener II: On MMSE estimation in successive decoding schemes Shannon meets Wiener II: On MMSE estimation in successive decoding schemes G. David Forney, Jr. MIT Cambridge, MA 0239 USA forneyd@comcast.net Abstract We continue to discuss why MMSE estimation arises

More information

Optimal Power Allocation for Parallel Gaussian Broadcast Channels with Independent and Common Information

Optimal Power Allocation for Parallel Gaussian Broadcast Channels with Independent and Common Information SUBMIED O IEEE INERNAIONAL SYMPOSIUM ON INFORMAION HEORY, DE. 23 1 Optimal Power Allocation for Parallel Gaussian Broadcast hannels with Independent and ommon Information Nihar Jindal and Andrea Goldsmith

More information

Feasibility Conditions for Interference Alignment

Feasibility Conditions for Interference Alignment Feasibility Conditions for Interference Alignment Cenk M. Yetis Istanbul Technical University Informatics Inst. Maslak, Istanbul, TURKEY Email: cenkmyetis@yahoo.com Tiangao Gou, Syed A. Jafar University

More information

ELEC546 Review of Information Theory

ELEC546 Review of Information Theory ELEC546 Review of Information Theory Vincent Lau 1/1/004 1 Review of Information Theory Entropy: Measure of uncertainty of a random variable X. The entropy of X, H(X), is given by: If X is a discrete random

More information

PERFECTLY secure key agreement has been studied recently

PERFECTLY secure key agreement has been studied recently IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 45, NO. 2, MARCH 1999 499 Unconditionally Secure Key Agreement the Intrinsic Conditional Information Ueli M. Maurer, Senior Member, IEEE, Stefan Wolf Abstract

More information

2318 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 54, NO. 6, JUNE Mai Vu, Student Member, IEEE, and Arogyaswami Paulraj, Fellow, IEEE

2318 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 54, NO. 6, JUNE Mai Vu, Student Member, IEEE, and Arogyaswami Paulraj, Fellow, IEEE 2318 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 54, NO. 6, JUNE 2006 Optimal Linear Precoders for MIMO Wireless Correlated Channels With Nonzero Mean in Space Time Coded Systems Mai Vu, Student Member,

More information

Sum-Power Iterative Watefilling Algorithm

Sum-Power Iterative Watefilling Algorithm Sum-Power Iterative Watefilling Algorithm Daniel P. Palomar Hong Kong University of Science and Technolgy (HKUST) ELEC547 - Convex Optimization Fall 2009-10, HKUST, Hong Kong November 11, 2009 Outline

More information

MULTI-INPUT multi-output (MIMO) channels, usually

MULTI-INPUT multi-output (MIMO) channels, usually 3086 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 57, NO. 8, AUGUST 2009 Worst-Case Robust MIMO Transmission With Imperfect Channel Knowledge Jiaheng Wang, Student Member, IEEE, and Daniel P. Palomar,

More information

ELEC546 MIMO Channel Capacity

ELEC546 MIMO Channel Capacity ELEC546 MIMO Channel Capacity Vincent Lau Simplified Version.0 //2004 MIMO System Model Transmitter with t antennas & receiver with r antennas. X Transmitted Symbol, received symbol Channel Matrix (Flat

More information

ELEC E7210: Communication Theory. Lecture 10: MIMO systems

ELEC E7210: Communication Theory. Lecture 10: MIMO systems ELEC E7210: Communication Theory Lecture 10: MIMO systems Matrix Definitions, Operations, and Properties (1) NxM matrix a rectangular array of elements a A. an 11 1....... a a 1M. NM B D C E ermitian transpose

More information

SPARSE signal representations have gained popularity in recent

SPARSE signal representations have gained popularity in recent 6958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 10, OCTOBER 2011 Blind Compressed Sensing Sivan Gleichman and Yonina C. Eldar, Senior Member, IEEE Abstract The fundamental principle underlying

More information

Lecture 5: Antenna Diversity and MIMO Capacity Theoretical Foundations of Wireless Communications 1. Overview. CommTh/EES/KTH

Lecture 5: Antenna Diversity and MIMO Capacity Theoretical Foundations of Wireless Communications 1. Overview. CommTh/EES/KTH : Antenna Diversity and Theoretical Foundations of Wireless Communications Wednesday, May 4, 206 9:00-2:00, Conference Room SIP Textbook: D. Tse and P. Viswanath, Fundamentals of Wireless Communication

More information

Achieving the Full MIMO Diversity-Multiplexing Frontier with Rotation-Based Space-Time Codes

Achieving the Full MIMO Diversity-Multiplexing Frontier with Rotation-Based Space-Time Codes Achieving the Full MIMO Diversity-Multiplexing Frontier with Rotation-Based Space-Time Codes Huan Yao Lincoln Laboratory Massachusetts Institute of Technology Lexington, MA 02420 yaohuan@ll.mit.edu Gregory

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Approximately achieving the feedback interference channel capacity with point-to-point codes

Approximately achieving the feedback interference channel capacity with point-to-point codes Approximately achieving the feedback interference channel capacity with point-to-point codes Joyson Sebastian*, Can Karakus*, Suhas Diggavi* Abstract Superposition codes with rate-splitting have been used

More information

Linear Regression and Its Applications

Linear Regression and Its Applications Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start

More information

LECTURE 18. Lecture outline Gaussian channels: parallel colored noise inter-symbol interference general case: multiple inputs and outputs

LECTURE 18. Lecture outline Gaussian channels: parallel colored noise inter-symbol interference general case: multiple inputs and outputs LECTURE 18 Last time: White Gaussian noise Bandlimited WGN Additive White Gaussian Noise (AWGN) channel Capacity of AWGN channel Application: DS-CDMA systems Spreading Coding theorem Lecture outline Gaussian

More information

Optimal Sequences, Power Control and User Capacity of Synchronous CDMA Systems with Linear MMSE Multiuser Receivers

Optimal Sequences, Power Control and User Capacity of Synchronous CDMA Systems with Linear MMSE Multiuser Receivers Optimal Sequences, Power Control and User Capacity of Synchronous CDMA Systems with Linear MMSE Multiuser Receivers Pramod Viswanath, Venkat Anantharam and David.C. Tse {pvi, ananth, dtse}@eecs.berkeley.edu

More information

Schur-convexity of the Symbol Error Rate in Correlated MIMO Systems with Precoding and Space-time Coding

Schur-convexity of the Symbol Error Rate in Correlated MIMO Systems with Precoding and Space-time Coding Schur-convexity of the Symbol Error Rate in Correlated MIMO Systems with Precoding and Space-time Coding RadioVetenskap och Kommunikation (RVK 08) Proceedings of the twentieth Nordic Conference on Radio

More information

Sum Capacity of Gaussian Vector Broadcast Channels

Sum Capacity of Gaussian Vector Broadcast Channels Sum Capacity of Gaussian Vector Broadcast Channels Wei Yu, Member IEEE and John M. Cioffi, Fellow IEEE Abstract This paper characterizes the sum capacity of a class of potentially non-degraded Gaussian

More information

On Comparability of Multiple Antenna Channels

On Comparability of Multiple Antenna Channels On Comparability of Multiple Antenna Channels Majid Fozunbal, Steven W. McLaughlin, and Ronald W. Schafer School of Electrical and Computer Engineering Georgia Institute of Technology Atlanta, GA 30332-0250

More information

Physical-Layer MIMO Relaying

Physical-Layer MIMO Relaying Model Gaussian SISO MIMO Gauss.-BC General. Physical-Layer MIMO Relaying Anatoly Khina, Tel Aviv University Joint work with: Yuval Kochman, MIT Uri Erez, Tel Aviv University August 5, 2011 Model Gaussian

More information

Morning Session Capacity-based Power Control. Department of Electrical and Computer Engineering University of Maryland

Morning Session Capacity-based Power Control. Department of Electrical and Computer Engineering University of Maryland Morning Session Capacity-based Power Control Şennur Ulukuş Department of Electrical and Computer Engineering University of Maryland So Far, We Learned... Power control with SIR-based QoS guarantees Suitable

More information

PCA with random noise. Van Ha Vu. Department of Mathematics Yale University

PCA with random noise. Van Ha Vu. Department of Mathematics Yale University PCA with random noise Van Ha Vu Department of Mathematics Yale University An important problem that appears in various areas of applied mathematics (in particular statistics, computer science and numerical

More information

Exploiting Partial Channel Knowledge at the Transmitter in MISO and MIMO Wireless

Exploiting Partial Channel Knowledge at the Transmitter in MISO and MIMO Wireless Exploiting Partial Channel Knowledge at the Transmitter in MISO and MIMO Wireless SPAWC 2003 Rome, Italy June 18, 2003 E. Yoon, M. Vu and Arogyaswami Paulraj Stanford University Page 1 Outline Introduction

More information

Group Secret Key Agreement over State-Dependent Wireless Broadcast Channels

Group Secret Key Agreement over State-Dependent Wireless Broadcast Channels Group Secret Key Agreement over State-Dependent Wireless Broadcast Channels Mahdi Jafari Siavoshani Sharif University of Technology, Iran Shaunak Mishra, Suhas Diggavi, Christina Fragouli Institute of

More information

Capacity optimization for Rician correlated MIMO wireless channels

Capacity optimization for Rician correlated MIMO wireless channels Capacity optimization for Rician correlated MIMO wireless channels Mai Vu, and Arogyaswami Paulraj Information Systems Laboratory, Department of Electrical Engineering Stanford University, Stanford, CA

More information

Group secret key agreement over state-dependent wireless broadcast channels

Group secret key agreement over state-dependent wireless broadcast channels 1 Group secret ey agreement over state-dependent wireless broadcast channels Mahdi Jafari Siavoshani, Shauna Mishra, Christina Fragouli, Suhas N. Diggavi Sharif University of Technology, Tehran, Iran University

More information

WITH PERFECT channel information at the receiver,

WITH PERFECT channel information at the receiver, IEEE JOURNA ON SEECTED AREAS IN COMMUNICATIONS, VO. 25, NO. 7, SEPTEMBER 2007 1269 On the Capacity of MIMO Wireless Channels with Dynamic CSIT Mai Vu, Member, IEEE, and Arogyaswami Paulraj, Fellow, IEEE

More information

MIMO Capacities : Eigenvalue Computation through Representation Theory

MIMO Capacities : Eigenvalue Computation through Representation Theory MIMO Capacities : Eigenvalue Computation through Representation Theory Jayanta Kumar Pal, Donald Richards SAMSI Multivariate distributions working group Outline 1 Introduction 2 MIMO working model 3 Eigenvalue

More information

Optimal Sequences and Sum Capacity of Synchronous CDMA Systems

Optimal Sequences and Sum Capacity of Synchronous CDMA Systems Optimal Sequences and Sum Capacity of Synchronous CDMA Systems Pramod Viswanath and Venkat Anantharam {pvi, ananth}@eecs.berkeley.edu EECS Department, U C Berkeley CA 9470 Abstract The sum capacity of

More information

Clean relaying aided cognitive radio under the coexistence constraint

Clean relaying aided cognitive radio under the coexistence constraint Clean relaying aided cognitive radio under the coexistence constraint Pin-Hsun Lin, Shih-Chun Lin, Hsuan-Jung Su and Y.-W. Peter Hong Abstract arxiv:04.3497v [cs.it] 8 Apr 0 We consider the interference-mitigation

More information

The properties of L p -GMM estimators

The properties of L p -GMM estimators The properties of L p -GMM estimators Robert de Jong and Chirok Han Michigan State University February 2000 Abstract This paper considers Generalized Method of Moment-type estimators for which a criterion

More information

Stat 159/259: Linear Algebra Notes

Stat 159/259: Linear Algebra Notes Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the

More information

Rate Region of the Quadratic Gaussian Two-Encoder Source-Coding Problem

Rate Region of the Quadratic Gaussian Two-Encoder Source-Coding Problem Rate Region of the Quadratic Gaussian Two-Encoder Source-Coding Problem Aaron B Wagner, Saurabha Tavildar, and Pramod Viswanath June 9, 2007 Abstract We determine the rate region of the quadratic Gaussian

More information

Interactive Interference Alignment

Interactive Interference Alignment Interactive Interference Alignment Quan Geng, Sreeram annan, and Pramod Viswanath Coordinated Science Laboratory and Dept. of ECE University of Illinois, Urbana-Champaign, IL 61801 Email: {geng5, kannan1,

More information

Optimal Data and Training Symbol Ratio for Communication over Uncertain Channels

Optimal Data and Training Symbol Ratio for Communication over Uncertain Channels Optimal Data and Training Symbol Ratio for Communication over Uncertain Channels Ather Gattami Ericsson Research Stockholm, Sweden Email: athergattami@ericssoncom arxiv:50502997v [csit] 2 May 205 Abstract

More information

ECE 275A Homework #3 Solutions

ECE 275A Homework #3 Solutions ECE 75A Homework #3 Solutions. Proof of (a). Obviously Ax = 0 y, Ax = 0 for all y. To show sufficiency, note that if y, Ax = 0 for all y, then it must certainly be true for the particular value of y =

More information

On Capacity Under Received-Signal Constraints

On Capacity Under Received-Signal Constraints On Capacity Under Received-Signal Constraints Michael Gastpar Dept. of EECS, University of California, Berkeley, CA 9470-770 gastpar@berkeley.edu Abstract In a world where different systems have to share

More information

CDMA Systems in Fading Channels: Admissibility, Network Capacity, and Power Control

CDMA Systems in Fading Channels: Admissibility, Network Capacity, and Power Control 962 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 46, NO. 3, MAY 2000 CDMA Systems in Fading Channels: Admissibility, Network Capacity, and Power Control Junshan Zhang, Student Member, IEEE, and Edwin

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Ergodic and Outage Capacity of Narrowband MIMO Gaussian Channels

Ergodic and Outage Capacity of Narrowband MIMO Gaussian Channels Ergodic and Outage Capacity of Narrowband MIMO Gaussian Channels Yang Wen Liang Department of Electrical and Computer Engineering The University of British Columbia April 19th, 005 Outline of Presentation

More information

Upper Bounds on MIMO Channel Capacity with Channel Frobenius Norm Constraints

Upper Bounds on MIMO Channel Capacity with Channel Frobenius Norm Constraints Upper Bounds on IO Channel Capacity with Channel Frobenius Norm Constraints Zukang Shen, Jeffrey G. Andrews, Brian L. Evans Wireless Networking Communications Group Department of Electrical Computer Engineering

More information

ROBUST SECRET KEY CAPACITY FOR THE MIMO INDUCED SOURCE MODEL. Javier Vía

ROBUST SECRET KEY CAPACITY FOR THE MIMO INDUCED SOURCE MODEL. Javier Vía ROBUST SECRET KEY CAPACITY FOR THE MIMO INDUCED SOURCE MODEL Javier Vía University of Cantabria, Spain e-mail: jvia@gtas.dicom.unican.es web: gtas.unican.es ABSTRACT This paper considers the problem of

More information

Detecting Parametric Signals in Noise Having Exactly Known Pdf/Pmf

Detecting Parametric Signals in Noise Having Exactly Known Pdf/Pmf Detecting Parametric Signals in Noise Having Exactly Known Pdf/Pmf Reading: Ch. 5 in Kay-II. (Part of) Ch. III.B in Poor. EE 527, Detection and Estimation Theory, # 5c Detecting Parametric Signals in Noise

More information

A Randomized Algorithm for the Approximation of Matrices

A Randomized Algorithm for the Approximation of Matrices A Randomized Algorithm for the Approximation of Matrices Per-Gunnar Martinsson, Vladimir Rokhlin, and Mark Tygert Technical Report YALEU/DCS/TR-36 June 29, 2006 Abstract Given an m n matrix A and a positive

More information

Capacity Theorems for Relay Channels

Capacity Theorems for Relay Channels Capacity Theorems for Relay Channels Abbas El Gamal Department of Electrical Engineering Stanford University April, 2006 MSRI-06 Relay Channel Discrete-memoryless relay channel [vm 7] Relay Encoder Y n

More information

Capacity Pre-Log of SIMO Correlated Block-Fading Channels

Capacity Pre-Log of SIMO Correlated Block-Fading Channels Capacity Pre-Log of SIMO Correlated Block-Fading Channels Wei Yang, Giuseppe Durisi, Veniamin I. Morgenshtern, Erwin Riegler 3 Chalmers University of Technology, 496 Gothenburg, Sweden ETH Zurich, 809

More information

A Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources

A Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources A Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources Wei Kang Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College

More information

Bounds on Capacity and Minimum Energy-Per-Bit for AWGN Relay Channels

Bounds on Capacity and Minimum Energy-Per-Bit for AWGN Relay Channels Bounds on Capacity and Minimum Energy-Per-Bit for AWG Relay Channels Abbas El Gamal, Mehdi Mohseni and Sina Zahedi Information Systems Lab Department of Electrical Engineering Stanford University, Stanford,

More information

Lecture Notes 1: Vector spaces

Lecture Notes 1: Vector spaces Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector

More information

High SNR Analysis for MIMO Broadcast Channels: Dirty Paper Coding vs. Linear Precoding

High SNR Analysis for MIMO Broadcast Channels: Dirty Paper Coding vs. Linear Precoding High SNR Analysis for MIMO Broadcast Channels: Dirty Paper Coding vs. Linear Precoding arxiv:cs/062007v2 [cs.it] 9 Dec 2006 Juyul Lee and Nihar Jindal Department of Electrical and Computer Engineering

More information

Error Exponent Region for Gaussian Broadcast Channels

Error Exponent Region for Gaussian Broadcast Channels Error Exponent Region for Gaussian Broadcast Channels Lihua Weng, S. Sandeep Pradhan, and Achilleas Anastasopoulos Electrical Engineering and Computer Science Dept. University of Michigan, Ann Arbor, MI

More information

Diversity Performance of a Practical Non-Coherent Detect-and-Forward Receiver

Diversity Performance of a Practical Non-Coherent Detect-and-Forward Receiver Diversity Performance of a Practical Non-Coherent Detect-and-Forward Receiver Michael R. Souryal and Huiqing You National Institute of Standards and Technology Advanced Network Technologies Division Gaithersburg,

More information

Optimal Power Allocation for Achieving Perfect Secrecy Capacity in MIMO Wire-Tap Channels

Optimal Power Allocation for Achieving Perfect Secrecy Capacity in MIMO Wire-Tap Channels Optimal Power Allocation for Achieving Perfect Secrecy Capacity in MIMO Wire-Tap Channels Jia Liu, Y. Thomas Hou, and Hanif D. Sherali Bradley Department of Electrical and Computer Engineering Grado Department

More information