On the Gaussian Z-Interference channel
|
|
- Stephany Webster
- 5 years ago
- Views:
Transcription
1 On the Gaussian Z-Interference channel Max Costa, Chandra Nair, and David Ng University of Campinas Unicamp, The Chinese University of Hong Kong CUHK Abstract The optimality of the Han and Kobayashi achievable region with Gaussian signaling remains an open problem for Gaussian interference channels. In this paper we focus on the Gaussian Z-interference channel. We first show that using correlated over time Gaussian signals does not improve on the Han and Kobayashi achievable rate region. Secondly we compute the slope of the Han and Kobayashi achievable region with Gaussian signaling around the Sato s corner point. I. INTRODUCTION Gaussian interference channel is one of the most basic multiuser settings whose capacity region is as yet undetermined. The best known-achievable region for a two-receiver interference channel is due to Han and Kobayashi []. Recently it has been shown that there are two-receiver interference channels with discrete alphabets where the Han and Kobayashi region is strictly inside the capacity region []. However for the Gaussian setting, the optimality of the Han and Kobayashi region with Gaussian auxiliaries remains an open challenge. In the discrete memoryless setting, it was shown that a -letter extension coding in blocks of two symbols of the Han and Kobayashi scheme strictly outperforms the single-letter traditional scheme. There has been some attempts, for instance, see [3], at using correlated Gaussians to improve on the Han and Kobayashi scheme for the interference channel. Remark. It is worth mentioning that the authors in [3] claim that the rates they achieve outperform state-of-the-art coding schemes. This claim fails to hold if a comparison is made with the rates of the Han and Kobayashi scheme with Gaussian signals and the use of a time-sharing variable, Q, to do power control. A scheme employing the concept of noisebergs has been shown by one of the authors [4] to strictly improve the rate region with power control. The sub-optimality of the region without power control can also be shown by using perturbations along Hermite polynomials [5]. In this paper, the first result is a proof that correlated Gaussian signaling does not improve on the traditional single-letter scheme for the Gaussian interference channel. The second result concerns evaluation of the slope of the Han and Kobayashi region with Gaussian signaling around the corner point known as Sato s corner point. A. Preliminaries An interference channel is a model for communication where two point-to-point communications occur over a shared medium causing interference. The particular channel model that we study in this paper is called the Gaussian Z-interference setting, and the channel is described by: Y = X + Z Y = X + ax + Z. Here Z and Z are Gaussian variables, each distributed as N 0, and independent of X and X. We further assume power constraints P, P on X, X and that a 0,. Note that if a = 0 or a, then the capacity region is fully determined; hence this regime is the only open case. Z N 0, X a Y X Y Z N 0, Fig. : Gaussian Z interference channel The capacity region for this setting is defined in the usual sense see [6] for details and background work.
2 The Han and Kobayashi achievable region [] for the interference channel can be found in Section 6.5 of [6]. However when the interference is one-sided as in the Z-channel, the achievable region simplifies to the following. Theorem Han and Kobayashi Region. The union of rate pairs R, R satisfying the constraints R IX ; Y Q R IX ; Y U, Q R + R IU, X ; Y Q + IX ; Y U, Q over distributions p Q qp U,X Qu, x qp X Qx q is achievable for a discrete memoryless Z-interference channel. To achieve this region, it suffices to consider Q 3. The above region will also yield an achievable region in the Gaussian Z-interference channel defined by 7. Further, for every Q = q, if X = U + V, where U and V are zero-mean independent Gaussian random variables, and X is also an independent Gaussian random variable, then we call such a region as Han and Kobayashi region with Gaussian signaling. Theorem Han and Kobayashi region with Gaussian signaling. The union of rate pairs R, R satisfying the constraints R E Q log + P Q R E Q log P Q + + a α Q P Q R + R E Q log + α QP Q + log + P Q + a α Q P Q + a α Q P Q over α Q [0, ], P Q, P Q 0 satisfying E Q P Q P and E Q P Q P is achievable. By Bunt s extension of Caratheodory s theorem, it suffices to consider Q 5. The region described by Theorem will be referred to as R HK. By computing the Han and Kobayashi region with Gaussian signaling of the multi-letter extension of the Gaussian interference channel, the following region is achievable. Theorem 3 k-letter Han and Kobayashi region with Gaussian signaling. The union of rate pairs R, R satisfying the constraints R k E Q log I + K Q R k E Q log I + a K vq + K Q I + a K vq R + R k E Q log I + K vq + log I + a K Q + K Q I + a K vq over K q, K q 0, K vq K q satisfying k E QtrK Q P and k E QtrK Q P is achievable. As before it suffices to consider Q 5. The relation denotes the positive semi-definite partiarder among real symmetric matrices; while A and tra denotes the determinant and trace of matrix A, respectively. The region described by Theorem 3 will be referred to as R k HK. Clearly R HK R k HK C, the capacity region. Known results about the capacity region: In this section we summarize the previously known results about the capacity region of the Gaussian Z-interference channel. i It is known from [7], [8], [9] that the rate-pair R = log + P and R = log + P is a Pareto-optimal point on the boundary of the capacity region. Further it is also known that the above point maximizes the rate-sum R + R. Since C = log + P is the maximum achievable rate to receiver Y, the capacity region contains a +a P line-segment that starts at C, 0 and ends at C, log + P +a P. We call this extremal point corner point as Sato-point. The outer bounds in [9] and [0] shows that the Sato-point also maximizes R + R for any +a P a +P. This provides an outer bound to the slope of the capacity region around the Sato s corner point. ii It is known from [7], [], [] that the rate-pair R = log + a P +P and R = C = log + P, is another Pareto-optimal point on the boundary of the capacity region of the Gaussian Z interference channel. Hence the capacity region contains a line-segment that starts at 0, C and ends at log + a P +P, C. This recently established extremal point is the second corner point of the capacity region of the Z Gaussian interference channel. The outer bound to the capacity region of the interference channel does not yield any finite such that the maximum of R + R over
3 3 achievable rate pairs passes through this corner point. Two of the authors computed the slope of Han and Kobayashi region with Gaussian signaling around this new corner point [3]. B. Summary of our results In this article we establish the following results. Theorem 4. R k HK = R HK, k. We show that the k-letter extension of the Han and Kobayashi region with Gaussian signaling does not improve on the single-letter scheme. Theorem 5. The largest value of such that the maximum of R + R with R, R R HK occurs at the Sato point is given by a cr HK,s + P + a } P = min a, P + P where is the unique positive solution of γ = 0, where P γ := log + + a P + log a P + a P + a P + P. a P + P + a P + a P + P II. CORRELATED GAUSSIANS DO NOT IMPROVE THE REGION In this section we establish Theorem 4. Towards this end we make the following observations: both R k HK and R HK are convex regions, hence they can be characterized by the intersection of supporting hyperplanes. Further the hyperplane R + R to the capacity region passes through the Sato-point, which is present in both R k HK and R HK. Hence Theorem 4 is equivalent to showing that for every >, max R + R = max R + R. R,R R k R,R R HK HK The above condition can be re-expressed in terms of matrices as max E Q K q,k q 0,K vq K q: k E QtrK Q P, k E QtrK Q P + k log K vq + I k log a K vq + I = max E Q P q,p q 0,α q [0,]: log P Q + a P Q + + E Q trp Q P,E Q trp Q P + log α QP Q + log a α Q P Q +. The key to establishing is the following Theorem. Theorem 6. The maximum of the expression k log K Q + a K Q + I + log K Q + a K vq + I k log K + a K v + a K u + I + log K + a K v + I + log K v + I log a K v + I log P Q + a α Q P Q + where the k k Hermitian matrices satisfy the constraints: K, K u, K v 0, trk u + K v P, trk P can be attained by restricting K, K u, K v to be diagonal matrices. Remark. Identify K uq := K q K vq. It is immediate that equation will follow from Theorem 6 by the following reasoning: for every Q = q, Theorem 6 allows us to replace the matrices inside the expression by diagonal matrices, which then is just sum of terms of the form appearing on the right-hand-side. Note the extra factor k on the left hand-side changes this sum into a convex combination and the equality is immediate. The following notations will be used in the proof: i Given a k k matrix A, let λa denote the unordered set of eigenvalues of A, and let λ A λ A denote the k-tuple of eigenvalues of A arranged in decreasing increasing order respectively.
4 4 ii Given two vectors v, w, we say v w if v majorizes w, i.e. if v [] v [] v [k] is a non-increasing arrangement of v and w [] w [] w [k] is a non-increasing arrangement of w; then m m v [l] w [l], m k with equality at m = k. l= l= Proof of Theorem 6. Suppose we fix the matrices K and K v satisfying the trace constraint; then we must choose K u so as to maximize K + a K v + a K u + I subject to trk u P trk v. If one further fixes the eigenvalues λk u then Fiedler s bound [4] says that K + a K v + a K u + I k i= λ i K + a K v + I + a λ i K u, and clearly equality is achieved if the matrices K + a K v + I and K u share the same eigenvectors with eigenvalues λ i K + a K v + I and λ i K u, respectively. Hence we seek to maximize k i= λ i K + a K v + I + a λ i K u, subject to k i= λ i K u P trk v and λ i K u 0. The optimal choice of this problem is the water-filling solution. Denote the optimal choice as K w u ; then it is immediate that λ i K + a K v + I + a K u = λ i K + a K v + I + a λ i Kw u, i =,.., k. Let K be a diagonal matrix with entries ordered as λ i K, and K v be a diagonal matrix with entries ordered as λ i K v. Applying Fiedler s bound [4] we see that K + a K v + I k i= λ i K + λ i K v + = K + a Kv + I. 3 We now invoke the celebrated Lidskii-Wielandt inequality see.6 in survey article [5] that establishes the majorization, Let K,w u Lemma yields λ K + a K v + I = λ K + λ a K v + I λ K + a K v + I. denotes the diagonal water filling matrix corresponding to K + a K v + I. Lemma implies that λ K + a K v + I + K,w u λ K + a K v + I + K w u. K + a K v + I + K,w u K + a K v + I + K w u. 4 Since K v + I = k i= + λ ik v and a K v + I = k i= + a λ i K v, we see that for a fixed choice of λk v and λk, the diagonal matrices K, Kv, and Ku,w maximize the expression term-by-term in Theorem 6. Varying over the choices of λk v and λk that satisfies the trace constraint establishes the theorem. A. Lemmas regarding majorization and its applications The following Lemma must be well-known but we cannot find an immediate reference, so we establish it below. Lemma. [Waterfilling preserves majorization] Let v, u be vectors such that v u. Let v and u denote the vectors obtained after water-filling operation with a quantity of water W > 0. Then v u. Proof. W.l.o.g. let v and u be arranged in decreasing order. After waterfilling operation note that v and u is also in decreasing order; and further they satisfy v i v i i m =, c m + i k u u i i n i =. n + i k Further, v m c v m+, u n c u n+, and W = k i=m+ c v i = k i=n+ c u i. c
5 5 We divide the proof into two cases: m < n and m n. Case : m < n. Note that v i + W = u i + W = i= i= m v i + k mc = i= n m u i + k nc u i + k mc i= i= m v i + k mc. i= The first inequality is due to u i c, m+ i n, and the second inequality is from v u. Hence c c. Thus to establish that v u, it suffices to show that l l v i u i, m + l n, as the rest of the choices of l are immediate from v u, c c, and that k i= v i = k We establish it by contradiction. Suppose is the first index in [m + : n] such that i= m v i + i= i=m+ i= c = i= v i > i= u i = i= u i. i= u i. Hence it must be that c > u lo = u, and since u i is decreasing, c u i, i. This would imply that v i = i= i= v i + a contradiction. This completes the proof in this case. Case : m n. Note that W = i=m+ c v i = i=n+ i=+ c > i= c u i u i + i=m+ i=+ u i = c u i u i, i= i=m+ c v i, where the first inequality is due to c u i 0, n + i m, and the second inequality is from v u, taif v has larger partial sum than taif u. Thus c c, as before. Similar to previous case, to establish that v u, it suffices to show that l v i i= l u i, n + l m, as the rest of the choices of l are immediate from v u, c c, and that k i= v i = k contradiction. Suppose is the first index in [n + : m] such that i= v i = i= i= v i > l 0 i= u i = n u i + i= l 0 i=n+ c. i= u i Hence it must be that v l0 > c, and since v i is decreasing, v i c, i l 0. This would imply that v i = i= i= v i + i=+ a contradiction. This completes the proof of the lemma. v i > i= u i + i=+ c = u i, i=. The proof follows again by Lemma. see A..d, page 66 in [6] Let A, B 0 and λa λb. Then k i= λ ia = A B = k i= λ ib. It basically follows from the concavity of log and the Hardy-Littlewood-Polya majorization inequality.
6 6 A. Han and Kobayashi region with Gaussian signaling III. SLOPE AT SATO S CORNER POINT For, the maximum of the weighted sum-rate R + R of the Han and Kobayashi region with Gaussian signaling for a Z-interference channel can be computed as max E Q P q,p q 0,α q [0,]: log P Q + a P Q + + log P Q + a α Q P Q + E Q trp Q P,E Q trp Q P + log α QP Q + log a α Q P Q +. 5 We know that when =, the maximum sum-rate for the Han and Kobayashi region as well as the capacity region is given by log + P + log P + a P + and is achieved at the rate pair R = log + P, and R = log + P a P +, which is referred to as Sato s corner point. We define } = sup : +, HK,s cr max R,R R HK R + R = log + P + log P a P + the largest such that the line R + R passing through Sato s corner point is a supporting hyperplane to R HK. Using the concave envelope interpretation see [7] for E Q we see that for any the value of max R,R R HK R +R is the upper concave envelope of f Q, Q evaluated at P, P, where f is defined by f Q, Q := log + a Q + Q + max α [0,] log + a αq + Q + a αq + log + αq + a αq + Q By taking derivative with respect to α, the optimal α = α satisfies: = a + Q a a + a Q + α Q We define the regions R = Q, Q : } a + Q a Q R = Q, Q : a + Q + a } Q a 6 Q + Q R 3 = Q, Q : a + Q + a Q a < < } a + Q Q + Q a Q where R, R, R 3 correspond to the cases α = 0, α = and 0 < α < respectively. This gives an explicit expression for f, log + a Q + Q + log + Q Q, Q R Q f Q, Q = log + +a Q + log + Q Q, Q R log +a Q +Q a Q + log log + log a +Q a + log a Q, Q R 3 Note that the hyperplane R + R passes through Sato s corner point if and only if P, P R and Cf P, P = f P, P. From Lemma in Appendix, this is equivalent to requiring that P, P R and g Q, Q attains global maximum at P, P, where g is defined by g Q, Q := f Q, Q a + a a P + P + a + P + P Q + a P + P } Q. Thus to establish Theorem 5, we need to show that g Q, Q attains a global maximum at P, P if and only if cr HK,s where HK,s cr where is the solution to h = 0, with h := log + P + a P a + P + a } P = min a, P + P a P + a P + a P + P } + log a } P + P + a P + a P + P. The proof of Theorem 5 is completed by analysing the local behaviour of g and isolating the local maxima.
7 7 Interior analysis: Lemma 3. HK,s cr min a + P + a P + a } P a, P + P a + P Proof. This is the condition that says P, P R and it is a local maximum. HK,s a P +P since P, P R ; else the optimizing α is not see 6. The second condition, cr HK,s +a P a +P follows from one of the second order conditions for P, P being a local maximum, namely det Hf P, P 0 see Hessian calculation in Appendix. +a Lemma 4. There is no local maximum of g in the interior of R for any P. a +P cr a +P +a P Proof. Since g is concave in R, there is at most one local maximum in the interior of R. The first order condition yields a + a = a Q + Q + a a P + P + a + P + P + a + = Q + Q + Q + a P + P Solving for Q gives But in R we have a +Q a Q. Substituting for Q yields Q = + a P + P + P + a + a P a 4 + P. +a But we also have P, a +P implying a. This gives a contradiction. Lemma 5. There are at most local maxima of g in the interior of R. The value of g at both points is bounded from +a above by g P, P, when P. a +P Proof. The first order condition for local-maximum yields The solutions are a + a Q + Q a + a Q + = + Q + a = Q + Q a + a a P + P + a + P + a P + P a Q = P or k Q = P + a P Q + P where k := +a P a +P. If k, there is only one solution in R + at P, P, so assume that k <. If Q, Q is any solution to above, then where ϕ x := log x + x. g Q, Q = f Q, Q a Q + Q + a + a Q Q + Q + a Q Q + Q = f Q, Q + + a Q + Q + a + Q + Q = ϕ + a Q + Q ϕ + a Q + ϕ + Q = ϕ + a P + P ϕ + a Q + ϕ + Q
8 8 Now let Q, Q to be the solution other than P, P. Then, g P, P g Q, Q = ϕ + a Q ϕ + a P ϕ + Q ϕ + P = ϕ a ϕ a k k k a k a ϕ a ϕ k a k Differentiating with respect to and simplifying gives g P, P g Q, Q = [ ] log + k k 0 k k k k +a since P a +P = k and x 0, log + x x. So g P, P g Q, Q g =k P, P g =k Q, Q = 0 and hence g P, P g Q, Q. +a Lemma 6. There is no local maximum of g in the interior of R 3 when < P. a +P Proof. The first order condition for g having a local maximum yields + a Q + Q Q + a + a Q + Q = a + Q = a + a a P + P + a + P + a P + P. By substituting the first equation into the second one, and then writing = a +Q a Q θ, where θ Q = a + P From the second order condition see Appendix det Hf Q, Q 0, or equivalently, a + Q +a which contradicts with that < P. a +P = Q + a P a + P + P +a Q +Q,, we get Thus the interior analysis yields that the value of g Q, Q at any interior local maximum is upper bounded by that of g P, P and P, P R if and only if a + P + a P + a } P min a, P + P a. + P The necessity comes from Lemmas 3, while the sufficiency comes from Lemmas 4, 5, and 6. Boundary analysis: The remaining cases are the boundaries Q = 0 and Q = 0. In this part, we first establish that g P, P g Q, Q for Q, Q on the boundaries if and only if is smaller than or equal to the upper bound in Lemma 3 and in Theorem 5. Then in Lemma 0 we reduce the minimum of three terms to that of two of them. This gives the critical in Theorem 5. Lemma 7. On the boundary Q = 0 we have that if and only if log+p+ +P log+a P + +a P. Proof. When Q = 0, we have g P, P max Q 0 g 0, Q f Q, Q = log + Q, g Q, Q = log + Q + a P + P Q.
9 9 Note that g 0, Q is concave in Q, and maximized at Q = a P + P. Since P, P R, min g P, P g 0, Q Q 0 = g P, P g 0, a P + P = log + a P + + a P 0 if and only if log+p+ +P log+a P + +a P. Lemma 8. + log + P + + P log + P + +P + a log + a P + +a P P a + P and hence, by Lemma 3 and Lemma 7, g P, P g Q, Q on the boundary Q = 0. Proof. This is equivalent to where ϕ x = + x log + x + xx. Let Note that Hence a 4 ϕ P ϕ a P 0 ψx = a 4 ϕx ϕa x. ϕ x = + x log + x x ϕ x = log + x + x ψx = a 4 ϕ x a ϕ a x x ψx = a 4 log + x + a x 0. So ψx is convex when x 0 and ψ 0 = 0, implying ψ is convex and increasing on x 0. Thus ψp ψ0 = 0. Lemma 9. On the boundary Q = 0 we have that if and only if, where as in Theorem 5. Proof. When Q = 0, we have f Q, Q = log + Q g Q, Q = log + Q g Q, 0 is concave in Q, maximized when g P, P max Q 0 g Q, 0 a + a a P + P + a + Q P + P a = + Q + a a P + P + a + P + P [0, ] since a + P + a P a P + P That is, there is always a maximizing Q 0. Note that, after some manipulations, we can express min g P, P g Q, 0 Q 0 [ log + = = h P + a P a } P + a P + a + log P + P a }] P + P + a P + a P + P which is concave in, equals 0 when = 0, the derivative with respect to is non-negative at = 0. Here h as in Theorem 5. Hence g P, P max Q 0 g Q, 0 if and only if. Thus combining the interior and boundary analysis we see that cr HK,s is the minimum of three quantities, two of them given by Lemma 3 and one given by Lemma 9. The proof is completed by showing that one of the three quantities is redundant.
10 0 Lemma 0. The following holds: a + P + a } P min a, a + P + a P + a } P = min P + P a, P + P a, + P where as in Theorem 5. Proof. It suffices to show that, if a +P +a P +a a P +P P, a +P or equivalently P a +a +P, then P a +P. +a That is, γ P a +P 0, where γ is defined in Theorem 5. Write P = + a P θ and k := +a P a +P. Then θ k and k. We would like to show hk 0, that is, k θ log + θ kk θ + + log kθ + θ 0 k log + θ + kθ + θ kθ + + log + θ + θ 0 The derivative of left hand side with respect to θ is equal to So k log + θ + +θ + kθ +θ This completes the proof of Theorem 5. B. Outerbound to the slope at Sato s point θ + θ + k kθ + θ kθ + θ = k θ + θ + kθ + θ = k θ k + θ + θ kθ 0 k + log kθ +θ is decreasing in θ, and = 0 when θ = 0. We are done. The outer bound the the capacity region by Sato [8] and Kramer [0] see also [] from which this form is taken states that any achievable rate-pair R, R must satisfy + a P e R + P e R a +. a A simple calculus argument shows that the largest such that the maximum of R + R occurs at Sato s corner point is given by Remark 3. Note that the inner bound for the slope HK,s cr OB,s cr := + a P a + P. coincides with the outer bound OB,s cr in the limit P. CONCLUSION In this paper we show that correlated Gaussians over time do not improve on the single-letter Han and Kobayashi achievable region with Gaussian signaling. We also computed the slope of the Han and Kobayashi region around the Sato s corner point. ACKNOWLEDGEMENT M. Costa would like to acknowledge the partial support by FAPESP. Chandra Nair wishes to acknowledge the support from the various GRF grants from the RGC, Hong Kong.
11 REFERENCES [] T. Han and K. Kobayashi, A new achievable rate region for the interference channel, Information Theory, IEEE Transactions on, vol. 7, no., pp , jan 98. [] C. Nair, L. Xia, and M. Yazdanpanah, Sub-optimality of the Han Kobayashi Achievable Region for Interference Channels, ArXiv e-prints, Feb. 05. [3] W. Huleihel and N. Merhav, Codewords With memory improve achievable rate regions of the memorylessgaussian interference channel, CoRR, vol. abs/ , 05. [Online]. Available: [4] M. H. M. Costa, Noisebergs in Z Gaussian interference channels, Information Theory and Applications Workshop ITA, 0. [5] E. Abbe and L. Zheng, A coordinate system for Gaussian networks, Information Theory, IEEE Transactions on, vol. 58, no., pp , Feb 0. [6] A. El Gamal and Y.-H. Kim, Network Information Theory. Cambridge University Press, 0. [7] M. H. M. Costa, On the Gaussian interference channel, Information Theory, IEEE Transactions on, vol. 3, no. 5, pp , Sept 985. [8] H. Sato, The capacity of the Gaussian interference channel under strong interference corresp., IEEE Transactions on Information Theory, vol. 7, no. 6, pp , Nov 98. [9], On degraded Gaussian two-user channels corresp., Information Theory, IEEE Transactions on, vol. 4, no. 5, pp , Sep 978. [0] G. Kramer, Outer bounds on the capacity of Gaussian interference channels, Information Theory, IEEE Transactions on, vol. 50, no. 3, pp , March 004. [] Y. Polyanskiy and Y. Wu, Wasserstein continuity of entropy and outer bounds for interference channels, CoRR, vol. abs/ , 05. [Online]. Available: [], Converse bounds for interference channels via coupling and proof of Costa s conjecture, in Proc. of IEEE ISIT, Barcelona, Spain, July 06. [3] M. H. M. Costa and C. Nair, Gaussian Z-interference channel: around the corner, Information Theory and Applications Workshop ITA, 06. [4] M. Fiedler, Bounds for the determinant of the sum of Hermitian matrices, Proceedings of the American Mathematical Society, vol. 30, no., pp. 7 3, 97. [Online]. Available: [5] T. Ando, Special issue honoring Ingram Olkin majorizations and inequalities in matrix theory, Linear Algebra and its Applications, vol. 99, pp. 7 67, 994. [Online]. Available: [6] A. Marshall, I. Olkin, and B. Arnold, Inequalities: Theory of Majorization and Its Applications, ser. Springer Series in Statistics. Springer New York, 00. [Online]. Available: [7] C. Nair, Upper concave envelopes and auxiliary random variables, International Journaf Advances in Engineering Sciences and Applied Mathematics, vol. 5, no., pp. 0, 03. [Online]. Available: [8] H. Sato, An outer bound to the capacity region of broadcast channels, IEEE Trans. Info. Theory, vol. IT-4, pp , May, 978. [9] M. H. M. Costa, A third critical point in the achievable region of the Z-Gaussian interference channel, Information Theory and Applications Workshop ITA, 04. A. Calculation via the Noiseberg region APPENDIX One of the authors proposed [4] that the Han and Kobayashi region with Gaussian signaling for the Z-interference channel is equivalent to the noiseberg region. This is based on the equivalence between the Gaussian Z-interference channel and the Gaussian degraded interference channel, demonstrated in [7]. The Gaussian degraded interference channel is described by Y = X + Z 7 Y = X + Y + Z where Z and Z are Gaussian variables distributed as N 0, and N 0, N, respectively, and independent of X and X. In addition we assume power constraints P, P on X, X. The equivalence between the Z and the degraded Gaussian interference channels holds if their three parameters are related by P = P, P = P a and N = a a. Applying the notation in Theorem, the noiseberg scheme a particular choice of parameters corresponds to Q = with: PQ = = λ, PQ = = λ, α =, P = 0. This leads to the constraints R λ R λ R + R λ log + P + λ log + P P log + + a α P log + α P + log + P + a α P + a + λ α P log + P. We have the constraints P = P λ, P = P λp λ. Therefore there are three free variables in the above expression: P, α, λ. Therefore the maximum weighted sum-rate of the noiseberg region is given by R + R = max λ α,p,λ log + α P + log + a α P + λ + a α P + log + P λp P λ λ P λ λ + log + + a. α P
12 Fig. : Slope of normal at Sato s point for P = P =. The noiseberg scheme applied to the degraded interference channel uses two parameters, namely the multiplex band λ and the noiseberg height h. The rates R and R associated with this scheme are given by R = λ log + P A λ + log + max0, h N + λ + N + P A log + P B λ R = λ log + P λ + N + P A λ where the partial powers P A and P A are given by λ + P λ P A = λ [P λ minh, N max0, h N ] λp P A = λ [P + P + λ minh, N ]. To simplify notation we restricted the primed power to P. The parameters λ and h are defined in a certain admissible region see details in [4]. The corner point that corresponds to Sato s point is R = log + P, R = log + P +P +N. To find the contour of the Gaussian Han and Kobayashi region at this point we evaluate the gradient of R h, λ + R h, λ, for λ close to zero and 0 h N. Equating the gradient to zero for the outermost rate contour we find Note that is the slope of the normal to the rate contour of the achievable region. As calculated in [9], the first of this derivative ratios can be expressed as P +h +P log + P +h +P dr /dλ dr /dλ = dr /dλ dr /dλ = dr /dh =. 8 dr /dh P +P +N log + P +P +N + 9 hp +P +P +P+P +N The second ratio of derivatives is dr /dh dr /dh = P + h + P + N + P + P + N P + P + P + P + h 0
13 3 Example plots of these derivative ratios are given in [9] for P =, P = 4 and N = 3, i. e., P =, P = and a = 0.5. To find the point of intersection we can equate these two expressions and do some algebraic manipulation. Equivalently, we can take the derivative of Eq. 9 with respect to h and equate it to zero. We then get the following equation. + P + P P + N G + P + N N = + P + P + hg P + h + P, where the function Gx is defined as log + x/x. Solving this equation numerically or graphically for h we obtain a side product of this approach which is the optimal initial height of the noiseberg, as the rate point departs from Sato s point, if h N. Let this optimal value be denoted by h. Then we can use Eqs. 9 or 0 with h = h, and then Eq. 8 to get the normal slope at Sato s point. For example, if we take the case P =, P = 4 and N = 3, equivalent to P = P = and a = 0.5, we get h =.545, and = If the h solution to Eq. turns out to be greater than N, then there is no need for multiplexing with a noiseberg band, and the optimal strategy to exit the Sato s point is pure superposition, that is moving along the path with λ = 0 and h varying from N to N + P in the admissible λ, h parameter region. In this case the normal slope at Sato s point is easily found to be = P + N + P + N P + P. In Fig. we plot the slope of the normal to the achievable region at Sato s point for a in the interval [0, ] with P = P =. We show two curves which correspond to the optimal multiplexing strategy bottom curve and to the pure superposition scheme top curve. Note that for values of a below approximately 0.4, the two curves coincide, indicating that for these values of the interference gain, there is no advantage in multiplexing See To mux and not to mux in [9]. Remark 4. To illustrate the need for multiplexing for certain values of the channel parameters, we observe that the linear combination of rates R and R given by R + R = log + P + log + P +P +N is not concave in certain regions of the P, P plane and certain values of. In these cases the best rate combinations happen above the surface of the function, in its concave envelope, and require the multiplexing of two superposition schemes. As an example, in Fig.3 we show a surface plot of this function in the plane P, P with = and N = 3, for 0 P, P 8. The shading of the surface indicate the non-concavity. B. Gradient and Hessian of the function f In R, In R, In R 3, Q f = a + a Q + Q Q f = + a + Q + Q + Q Hf = [ a 4 +a Q +Q a a +a Q +Q +a Q +Q +a Q +Q Q f = a + a a Q + Q + a + Q + Q Q f = + a Q + Q Hf = a 4 a +a Q +Q + 4 +a Q +Q a +a Q +Q +Q ] a +a Q +Q +a Q +Q Q f = a + a Q + Q Q f = + a + Q + Q Q a + Q Hf = a 4 a +a Q +Q a +a Q +Q +a Q +Q +a Q +Q + Q a +Q
14 4 Fig. 3: Surface plot of R + R as a function of P and P with = and N = 3. By checking the values f and f at the boundaries, one can see that f is continuously differentiable on R >0. Lemma. Let f be a real-valued function differentiable at x. Then Cfx = fx if and only if f fx, attains global maximum at x. Here Cf and f denotes the concave envelope and gradient of f, respectively. Proof. It suffices to show that Cfx fx if and only if for all h we have fx fx + h fx, h. The if part is immediate, by taking concave envelope with respect to h and then putting h = 0. For the only if part, suppose on the contrary that there is ɛ > 0 and h 0 such that By differentiability of f at x, for ζ small enough. Now, for any δ 0,, fx + ɛ fx + h fx, h fx + ζ fx fx, ζ ɛ h ζ fx Cfx δ Cfx + h + δ Cfx δ δ h δfx + h + δfx δ δ h δɛ + δfx + fx, δh + δfx δ δ h
15 5 Rearranging gives for δ small enough. This gives a contradiction. fx δ δ ɛ + fx δ δ h fx, δ δ h δ δ ɛ + fx ɛ h δ δ h = fx + ɛ δ δ
A Comparison of Two Achievable Rate Regions for the Interference Channel
A Comparison of Two Achievable Rate Regions for the Interference Channel Hon-Fah Chong, Mehul Motani, and Hari Krishna Garg Electrical & Computer Engineering National University of Singapore Email: {g030596,motani,eleghk}@nus.edu.sg
More informationResearch Summary. Chandra Nair. Overview
Research Summary Chandra Nair Overview In my early research career I considered random versions of combinatorial optimization problems; my doctoral thesis [17] resolved a long-standing conjecture about
More informationA Proof of the Converse for the Capacity of Gaussian MIMO Broadcast Channels
A Proof of the Converse for the Capacity of Gaussian MIMO Broadcast Channels Mehdi Mohseni Department of Electrical Engineering Stanford University Stanford, CA 94305, USA Email: mmohseni@stanford.edu
More informationOptimal Natural Encoding Scheme for Discrete Multiplicative Degraded Broadcast Channels
Optimal Natural Encoding Scheme for Discrete Multiplicative Degraded Broadcast Channels Bike ie, Student Member, IEEE and Richard D. Wesel, Senior Member, IEEE Abstract Certain degraded broadcast channels
More informationPrimary Rate-Splitting Achieves Capacity for the Gaussian Cognitive Interference Channel
Primary Rate-Splitting Achieves Capacity for the Gaussian Cognitive Interference Channel Stefano Rini, Ernest Kurniawan and Andrea Goldsmith Technische Universität München, Munich, Germany, Stanford University,
More informationMismatched Multi-letter Successive Decoding for the Multiple-Access Channel
Mismatched Multi-letter Successive Decoding for the Multiple-Access Channel Jonathan Scarlett University of Cambridge jms265@cam.ac.uk Alfonso Martinez Universitat Pompeu Fabra alfonso.martinez@ieee.org
More informationA Comparison of Superposition Coding Schemes
A Comparison of Superposition Coding Schemes Lele Wang, Eren Şaşoğlu, Bernd Bandemer, and Young-Han Kim Department of Electrical and Computer Engineering University of California, San Diego La Jolla, CA
More informationSuperposition Encoding and Partial Decoding Is Optimal for a Class of Z-interference Channels
Superposition Encoding and Partial Decoding Is Optimal for a Class of Z-interference Channels Nan Liu and Andrea Goldsmith Department of Electrical Engineering Stanford University, Stanford CA 94305 Email:
More informationOn the Capacity Region of the Gaussian Z-channel
On the Capacity Region of the Gaussian Z-channel Nan Liu Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 74 nkancy@eng.umd.edu ulukus@eng.umd.edu
More informationOn the Optimality of Treating Interference as Noise in Competitive Scenarios
On the Optimality of Treating Interference as Noise in Competitive Scenarios A. DYTSO, D. TUNINETTI, N. DEVROYE WORK PARTIALLY FUNDED BY NSF UNDER AWARD 1017436 OUTLINE CHANNEL MODEL AND PAST WORK ADVANTAGES
More informationOn Gaussian MIMO Broadcast Channels with Common and Private Messages
On Gaussian MIMO Broadcast Channels with Common and Private Messages Ersen Ekrem Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 20742 ersen@umd.edu
More informationThe Capacity of the Semi-Deterministic Cognitive Interference Channel and its Application to Constant Gap Results for the Gaussian Channel
The Capacity of the Semi-Deterministic Cognitive Interference Channel and its Application to Constant Gap Results for the Gaussian Channel Stefano Rini, Daniela Tuninetti, and Natasha Devroye Department
More informationThe Capacity Region of the Gaussian Cognitive Radio Channels at High SNR
The Capacity Region of the Gaussian Cognitive Radio Channels at High SNR 1 Stefano Rini, Daniela Tuninetti and Natasha Devroye srini2, danielat, devroye @ece.uic.edu University of Illinois at Chicago Abstract
More informationOn the Secrecy Capacity of the Z-Interference Channel
On the Secrecy Capacity of the Z-Interference Channel Ronit Bustin Tel Aviv University Email: ronitbustin@post.tau.ac.il Mojtaba Vaezi Princeton University Email: mvaezi@princeton.edu Rafael F. Schaefer
More informationAPPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.
APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product
More informationInformation Theory Meets Game Theory on The Interference Channel
Information Theory Meets Game Theory on The Interference Channel Randall A. Berry Dept. of EECS Northwestern University e-mail: rberry@eecs.northwestern.edu David N. C. Tse Wireless Foundations University
More informationSimultaneous Nonunique Decoding Is Rate-Optimal
Fiftieth Annual Allerton Conference Allerton House, UIUC, Illinois, USA October 1-5, 2012 Simultaneous Nonunique Decoding Is Rate-Optimal Bernd Bandemer University of California, San Diego La Jolla, CA
More informationMultiplicativity of Maximal p Norms in Werner Holevo Channels for 1 < p 2
Multiplicativity of Maximal p Norms in Werner Holevo Channels for 1 < p 2 arxiv:quant-ph/0410063v1 8 Oct 2004 Nilanjana Datta Statistical Laboratory Centre for Mathematical Sciences University of Cambridge
More informationCapacity Bounds for. the Gaussian Interference Channel
Capacity Bounds for the Gaussian Interference Channel Abolfazl S. Motahari, Student Member, IEEE, and Amir K. Khandani, Member, IEEE Coding & Signal Transmission Laboratory www.cst.uwaterloo.ca {abolfazl,khandani}@cst.uwaterloo.ca
More informationAn Achievable Rate Region for the 3-User-Pair Deterministic Interference Channel
Forty-Ninth Annual Allerton Conference Allerton House, UIUC, Illinois, USA September 8-3, An Achievable Rate Region for the 3-User-Pair Deterministic Interference Channel Invited Paper Bernd Bandemer and
More informationLecture 1: The Multiple Access Channel. Copyright G. Caire 12
Lecture 1: The Multiple Access Channel Copyright G. Caire 12 Outline Two-user MAC. The Gaussian case. The K-user case. Polymatroid structure and resource allocation problems. Copyright G. Caire 13 Two-user
More informationConvex Functions and Optimization
Chapter 5 Convex Functions and Optimization 5.1 Convex Functions Our next topic is that of convex functions. Again, we will concentrate on the context of a map f : R n R although the situation can be generalized
More informationDegrees of Freedom Region of the Gaussian MIMO Broadcast Channel with Common and Private Messages
Degrees of Freedom Region of the Gaussian MIMO Broadcast hannel with ommon and Private Messages Ersen Ekrem Sennur Ulukus Department of Electrical and omputer Engineering University of Maryland, ollege
More informationThe Gallager Converse
The Gallager Converse Abbas El Gamal Director, Information Systems Laboratory Department of Electrical Engineering Stanford University Gallager s 75th Birthday 1 Information Theoretic Limits Establishing
More informationDraft. On Limiting Expressions for the Capacity Region of Gaussian Interference Channels. Mojtaba Vaezi and H. Vincent Poor
Preliminaries Counterexample Better Use On Limiting Expressions for the Capacity Region of Gaussian Interference Channels Mojtaba Vaezi and H. Vincent Poor Department of Electrical Engineering Princeton
More informationAmobile satellite communication system, like Motorola s
I TRANSACTIONS ON INFORMATION THORY, VOL. 45, NO. 4, MAY 1999 1111 Distributed Source Coding for Satellite Communications Raymond W. Yeung, Senior Member, I, Zhen Zhang, Senior Member, I Abstract Inspired
More informationJoint Source-Channel Coding for the Multiple-Access Relay Channel
Joint Source-Channel Coding for the Multiple-Access Relay Channel Yonathan Murin, Ron Dabora Department of Electrical and Computer Engineering Ben-Gurion University, Israel Email: moriny@bgu.ac.il, ron@ee.bgu.ac.il
More information820 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 58, NO. 2, FEBRUARY Stefano Rini, Daniela Tuninetti, and Natasha Devroye
820 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 58, NO. 2, FEBRUARY 2012 Inner and Outer Bounds for the Gaussian Cognitive Interference Channel and New Capacity Results Stefano Rini, Daniela Tuninetti,
More informationOn Network Interference Management
On Network Interference Management Aleksandar Jovičić, Hua Wang and Pramod Viswanath March 3, 2008 Abstract We study two building-block models of interference-limited wireless networks, motivated by the
More informationOn Multiple User Channels with State Information at the Transmitters
On Multiple User Channels with State Information at the Transmitters Styrmir Sigurjónsson and Young-Han Kim* Information Systems Laboratory Stanford University Stanford, CA 94305, USA Email: {styrmir,yhk}@stanford.edu
More informationA New Achievable Region for Gaussian Multiple Descriptions Based on Subset Typicality
0 IEEE Information Theory Workshop A New Achievable Region for Gaussian Multiple Descriptions Based on Subset Typicality Kumar Viswanatha, Emrah Akyol and Kenneth Rose ECE Department, University of California
More informationDuality, Achievable Rates, and Sum-Rate Capacity of Gaussian MIMO Broadcast Channels
2658 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 49, NO 10, OCTOBER 2003 Duality, Achievable Rates, and Sum-Rate Capacity of Gaussian MIMO Broadcast Channels Sriram Vishwanath, Student Member, IEEE, Nihar
More informationA Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources
A Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources Wei Kang Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College
More informationLecture Notes: Eigenvalues and Eigenvectors. 1 Definitions. 2 Finding All Eigenvalues
Lecture Notes: Eigenvalues and Eigenvectors Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong taoyf@cse.cuhk.edu.hk 1 Definitions Let A be an n n matrix. If there
More informationOn the Capacity of the Interference Channel with a Relay
On the Capacity of the Interference Channel with a Relay Ivana Marić, Ron Dabora and Andrea Goldsmith Stanford University, Stanford, CA {ivanam,ron,andrea}@wsl.stanford.edu Abstract Capacity gains due
More informationAn Alternative Proof for the Capacity Region of the Degraded Gaussian MIMO Broadcast Channel
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 58, NO. 4, APRIL 2012 2427 An Alternative Proof for the Capacity Region of the Degraded Gaussian MIMO Broadcast Channel Ersen Ekrem, Student Member, IEEE,
More informationA New Metaconverse and Outer Region for Finite-Blocklength MACs
A New Metaconverse Outer Region for Finite-Blocklength MACs Pierre Moulin Dept of Electrical Computer Engineering University of Illinois at Urbana-Champaign Urbana, IL 680 Abstract An outer rate region
More informationError Exponent Region for Gaussian Broadcast Channels
Error Exponent Region for Gaussian Broadcast Channels Lihua Weng, S. Sandeep Pradhan, and Achilleas Anastasopoulos Electrical Engineering and Computer Science Dept. University of Michigan, Ann Arbor, MI
More informationVISCOSITY SOLUTIONS. We follow Han and Lin, Elliptic Partial Differential Equations, 5.
VISCOSITY SOLUTIONS PETER HINTZ We follow Han and Lin, Elliptic Partial Differential Equations, 5. 1. Motivation Throughout, we will assume that Ω R n is a bounded and connected domain and that a ij C(Ω)
More informationb jσ(j), Keywords: Decomposable numerical range, principal character AMS Subject Classification: 15A60
On the Hu-Hurley-Tam Conjecture Concerning The Generalized Numerical Range Che-Man Cheng Faculty of Science and Technology, University of Macau, Macau. E-mail: fstcmc@umac.mo and Chi-Kwong Li Department
More informationConvexity/Concavity of Renyi Entropy and α-mutual Information
Convexity/Concavity of Renyi Entropy and -Mutual Information Siu-Wai Ho Institute for Telecommunications Research University of South Australia Adelaide, SA 5095, Australia Email: siuwai.ho@unisa.edu.au
More informationTrace Inequalities for a Block Hadamard Product
Filomat 32:1 2018), 285 292 https://doiorg/102298/fil1801285p Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://wwwpmfniacrs/filomat Trace Inequalities for
More informationMultiaccess Channels with State Known to One Encoder: A Case of Degraded Message Sets
Multiaccess Channels with State Known to One Encoder: A Case of Degraded Message Sets Shivaprasad Kotagiri and J. Nicholas Laneman Department of Electrical Engineering University of Notre Dame Notre Dame,
More informationMajorization-preserving quantum channels
Majorization-preserving quantum channels arxiv:1209.5233v2 [quant-ph] 15 Dec 2012 Lin Zhang Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018, PR China Abstract In this report, we give
More informationOn the Duality between Multiple-Access Codes and Computation Codes
On the Duality between Multiple-Access Codes and Computation Codes Jingge Zhu University of California, Berkeley jingge.zhu@berkeley.edu Sung Hoon Lim KIOST shlim@kiost.ac.kr Michael Gastpar EPFL michael.gastpar@epfl.ch
More informationGradient Estimate of Mean Curvature Equations and Hessian Equations with Neumann Boundary Condition
of Mean Curvature Equations and Hessian Equations with Neumann Boundary Condition Xinan Ma NUS, Dec. 11, 2014 Four Kinds of Equations Laplace s equation: u = f(x); mean curvature equation: div( Du ) =
More informationUSING multiple antennas has been shown to increase the
IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 55, NO. 1, JANUARY 2007 11 A Comparison of Time-Sharing, DPC, and Beamforming for MIMO Broadcast Channels With Many Users Masoud Sharif, Member, IEEE, and Babak
More informationAssignment 1: From the Definition of Convexity to Helley Theorem
Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x
More informationInterference Channel aided by an Infrastructure Relay
Interference Channel aided by an Infrastructure Relay Onur Sahin, Osvaldo Simeone, and Elza Erkip *Department of Electrical and Computer Engineering, Polytechnic Institute of New York University, Department
More informationLecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016
Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 1 Entropy Since this course is about entropy maximization,
More informationLinear and non-linear programming
Linear and non-linear programming Benjamin Recht March 11, 2005 The Gameplan Constrained Optimization Convexity Duality Applications/Taxonomy 1 Constrained Optimization minimize f(x) subject to g j (x)
More informationThe Role of Directed Information in Network Capacity
The Role of Directed Information in Network Capacity Sudeep Kamath 1 and Young-Han Kim 2 1 Department of Electrical Engineering Princeton University 2 Department of Electrical and Computer Engineering
More informationOptimal Sequences and Sum Capacity of Synchronous CDMA Systems
Optimal Sequences and Sum Capacity of Synchronous CDMA Systems Pramod Viswanath and Venkat Anantharam {pvi, ananth}@eecs.berkeley.edu EECS Department, U C Berkeley CA 9470 Abstract The sum capacity of
More informationPreliminary draft only: please check for final version
ARE211, Fall2012 CALCULUS4: THU, OCT 11, 2012 PRINTED: AUGUST 22, 2012 (LEC# 15) Contents 3. Univariate and Multivariate Differentiation (cont) 1 3.6. Taylor s Theorem (cont) 2 3.7. Applying Taylor theory:
More informationOptimization. A first course on mathematics for economists
Optimization. A first course on mathematics for economists Xavier Martinez-Giralt Universitat Autònoma de Barcelona xavier.martinez.giralt@uab.eu II.3 Static optimization - Non-Linear programming OPT p.1/45
More informationAn Achievable Error Exponent for the Mismatched Multiple-Access Channel
An Achievable Error Exponent for the Mismatched Multiple-Access Channel Jonathan Scarlett University of Cambridge jms265@camacuk Albert Guillén i Fàbregas ICREA & Universitat Pompeu Fabra University of
More informationUniqueness of Generalized Equilibrium for Box Constrained Problems and Applications
Uniqueness of Generalized Equilibrium for Box Constrained Problems and Applications Alp Simsek Department of Electrical Engineering and Computer Science Massachusetts Institute of Technology Asuman E.
More informationOn Scalable Source Coding for Multiple Decoders with Side Information
On Scalable Source Coding for Multiple Decoders with Side Information Chao Tian School of Computer and Communication Sciences Laboratory for Information and Communication Systems (LICOS), EPFL, Lausanne,
More informationCapacity Region of Reversely Degraded Gaussian MIMO Broadcast Channel
Capacity Region of Reversely Degraded Gaussian MIMO Broadcast Channel Jun Chen Dept. of Electrical and Computer Engr. McMaster University Hamilton, Ontario, Canada Chao Tian AT&T Labs-Research 80 Park
More informationThe Capacity Region of a Class of Discrete Degraded Interference Channels
The Capacity Region of a Class of Discrete Degraded Interference Channels Nan Liu Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 74 nkancy@umd.edu
More informationEE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17
EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 17 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory May 29, 2012 Andre Tkacenko
More informationInequalities For Singular Values And Traces Of Quaternion Hermitian Matrices
Inequalities For Singular Values And Traces Of Quaternion Hermitian Matrices K. Gunasekaran M. Rahamathunisha Ramanujan Research Centre, PG and Research Department of Mathematics, Government Arts College
More informationOptimal Power Control in Decentralized Gaussian Multiple Access Channels
1 Optimal Power Control in Decentralized Gaussian Multiple Access Channels Kamal Singh Department of Electrical Engineering Indian Institute of Technology Bombay. arxiv:1711.08272v1 [eess.sp] 21 Nov 2017
More informationOn the Capacity of the Multiple Antenna Broadcast Channel
DIMACS Series in Discrete Mathematics and Theoretical Computer Science On the Capacity of the Multiple Antenna Broadcast Channel David Tse and Pramod Viswanath Abstract. The capacity region of the multiple
More informationBounds and Capacity Results for the Cognitive Z-interference Channel
Bounds and Capacity Results for the Cognitive Z-interference Channel Nan Liu nanliu@stanford.edu Ivana Marić ivanam@wsl.stanford.edu Andrea J. Goldsmith andrea@wsl.stanford.edu Shlomo Shamai (Shitz) Technion
More informationSum-Power Iterative Watefilling Algorithm
Sum-Power Iterative Watefilling Algorithm Daniel P. Palomar Hong Kong University of Science and Technolgy (HKUST) ELEC547 - Convex Optimization Fall 2009-10, HKUST, Hong Kong November 11, 2009 Outline
More informationGeneralized Writing on Dirty Paper
Generalized Writing on Dirty Paper Aaron S. Cohen acohen@mit.edu MIT, 36-689 77 Massachusetts Ave. Cambridge, MA 02139-4307 Amos Lapidoth lapidoth@isi.ee.ethz.ch ETF E107 ETH-Zentrum CH-8092 Zürich, Switzerland
More informationFRAMES IN QUANTUM AND CLASSICAL INFORMATION THEORY
FRAMES IN QUANTUM AND CLASSICAL INFORMATION THEORY Emina Soljanin Mathematical Sciences Research Center, Bell Labs April 16, 23 A FRAME 1 A sequence {x i } of vectors in a Hilbert space with the property
More informationInterference Decoding for Deterministic Channels Bernd Bandemer, Student Member, IEEE, and Abbas El Gamal, Fellow, IEEE
2966 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 5, MAY 2011 Interference Decoding for Deterministic Channels Bernd Bemer, Student Member, IEEE, Abbas El Gamal, Fellow, IEEE Abstract An inner
More informationA LOCALIZATION PROPERTY AT THE BOUNDARY FOR MONGE-AMPERE EQUATION
A LOCALIZATION PROPERTY AT THE BOUNDARY FOR MONGE-AMPERE EQUATION O. SAVIN. Introduction In this paper we study the geometry of the sections for solutions to the Monge- Ampere equation det D 2 u = f, u
More informationRESEARCH ARTICLE. An extension of the polytope of doubly stochastic matrices
Linear and Multilinear Algebra Vol. 00, No. 00, Month 200x, 1 15 RESEARCH ARTICLE An extension of the polytope of doubly stochastic matrices Richard A. Brualdi a and Geir Dahl b a Department of Mathematics,
More informationMath 291-2: Final Exam Solutions Northwestern University, Winter 2016
Math 29-2: Final Exam Solutions Northwestern University, Winter 206 Determine whether each of the following statements is true or false f it is true, explain why; if it is false, give a counterexample
More informationMorning Session Capacity-based Power Control. Department of Electrical and Computer Engineering University of Maryland
Morning Session Capacity-based Power Control Şennur Ulukuş Department of Electrical and Computer Engineering University of Maryland So Far, We Learned... Power control with SIR-based QoS guarantees Suitable
More informationFeedback Capacity of the Gaussian Interference Channel to Within Bits: the Symmetric Case
1 arxiv:0901.3580v1 [cs.it] 23 Jan 2009 Feedback Capacity of the Gaussian Interference Channel to Within 1.7075 Bits: the Symmetric Case Changho Suh and David Tse Wireless Foundations in the Department
More informationOptimal Encoding Schemes for Several Classes of Discrete Degraded Broadcast Channels
Optimal Encoding Schemes for Several Classes of Discrete Degraded Broadcast Channels Bike Xie, Student Member, IEEE and Richard D. Wesel, Senior Member, IEEE Abstract arxiv:0811.4162v4 [cs.it] 8 May 2009
More informationELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications
ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications Professor M. Chiang Electrical Engineering Department, Princeton University March
More informationQuantum Computing Lecture 2. Review of Linear Algebra
Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces
More informationLECTURE 18. Lecture outline Gaussian channels: parallel colored noise inter-symbol interference general case: multiple inputs and outputs
LECTURE 18 Last time: White Gaussian noise Bandlimited WGN Additive White Gaussian Noise (AWGN) channel Capacity of AWGN channel Application: DS-CDMA systems Spreading Coding theorem Lecture outline Gaussian
More informationMaximum Likelihood Estimation
Maximum Likelihood Estimation Assume X P θ, θ Θ, with joint pdf (or pmf) f(x θ). Suppose we observe X = x. The Likelihood function is L(θ x) = f(x θ) as a function of θ (with the data x held fixed). The
More informationDAVIS WIELANDT SHELLS OF NORMAL OPERATORS
DAVIS WIELANDT SHELLS OF NORMAL OPERATORS CHI-KWONG LI AND YIU-TUNG POON Dedicated to Professor Hans Schneider for his 80th birthday. Abstract. For a finite-dimensional operator A with spectrum σ(a), the
More informationOn John type ellipsoids
On John type ellipsoids B. Klartag Tel Aviv University Abstract Given an arbitrary convex symmetric body K R n, we construct a natural and non-trivial continuous map u K which associates ellipsoids to
More informationVariable Length Codes for Degraded Broadcast Channels
Variable Length Codes for Degraded Broadcast Channels Stéphane Musy School of Computer and Communication Sciences, EPFL CH-1015 Lausanne, Switzerland Email: stephane.musy@ep.ch Abstract This paper investigates
More informationChapter 7. Extremal Problems. 7.1 Extrema and Local Extrema
Chapter 7 Extremal Problems No matter in theoretical context or in applications many problems can be formulated as problems of finding the maximum or minimum of a function. Whenever this is the case, advanced
More informationOptimum Power Allocation in Fading MIMO Multiple Access Channels with Partial CSI at the Transmitters
Optimum Power Allocation in Fading MIMO Multiple Access Channels with Partial CSI at the Transmitters Alkan Soysal Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland,
More informationSum-Rate Capacity of Poisson MIMO Multiple-Access Channels
1 Sum-Rate Capacity of Poisson MIMO Multiple-Access Channels Ain-ul-Aisha, Lifeng Lai, Yingbin Liang and Shlomo Shamai (Shitz Abstract In this paper, we analyze the sum-rate capacity of two-user Poisson
More informationTangent spaces, normals and extrema
Chapter 3 Tangent spaces, normals and extrema If S is a surface in 3-space, with a point a S where S looks smooth, i.e., without any fold or cusp or self-crossing, we can intuitively define the tangent
More informationSecond-Order Asymptotics for the Gaussian MAC with Degraded Message Sets
Second-Order Asymptotics for the Gaussian MAC with Degraded Message Sets Jonathan Scarlett and Vincent Y. F. Tan Department of Engineering, University of Cambridge Electrical and Computer Engineering,
More informationOptimal Power Allocation over Parallel Gaussian Broadcast Channels
Optimal Power Allocation over Parallel Gaussian Broadcast Channels David N.C. Tse Dept. of Electrical Engineering and Computer Sciences University of California at Berkeley email: dtse@eecs.berkeley.edu
More informationThe Capacity Region of the Gaussian MIMO Broadcast Channel
0-0 The Capacity Region of the Gaussian MIMO Broadcast Channel Hanan Weingarten, Yossef Steinberg and Shlomo Shamai (Shitz) Outline Problem statement Background and preliminaries Capacity region of the
More informationCut-Set Bound and Dependence Balance Bound
Cut-Set Bound and Dependence Balance Bound Lei Xiao lxiao@nd.edu 1 Date: 4 October, 2006 Reading: Elements of information theory by Cover and Thomas [1, Section 14.10], and the paper by Hekstra and Willems
More informationCHAPTER 2: QUADRATIC PROGRAMMING
CHAPTER 2: QUADRATIC PROGRAMMING Overview Quadratic programming (QP) problems are characterized by objective functions that are quadratic in the design variables, and linear constraints. In this sense,
More informationDistributed Data Storage with Minimum Storage Regenerating Codes - Exact and Functional Repair are Asymptotically Equally Efficient
Distributed Data Storage with Minimum Storage Regenerating Codes - Exact and Functional Repair are Asymptotically Equally Efficient Viveck R Cadambe, Syed A Jafar, Hamed Maleki Electrical Engineering and
More informationTwo Applications of the Gaussian Poincaré Inequality in the Shannon Theory
Two Applications of the Gaussian Poincaré Inequality in the Shannon Theory Vincent Y. F. Tan (Joint work with Silas L. Fong) National University of Singapore (NUS) 2016 International Zurich Seminar on
More information10725/36725 Optimization Homework 2 Solutions
10725/36725 Optimization Homework 2 Solutions 1 Convexity (Kevin) 1.1 Sets Let A R n be a closed set with non-empty interior that has a supporting hyperplane at every point on its boundary. (a) Show that
More informationOutline. Roadmap for the NPP segment: 1 Preliminaries: role of convexity. 2 Existence of a solution
Outline Roadmap for the NPP segment: 1 Preliminaries: role of convexity 2 Existence of a solution 3 Necessary conditions for a solution: inequality constraints 4 The constraint qualification 5 The Lagrangian
More informationLecture 7: Convex Optimizations
Lecture 7: Convex Optimizations Radu Balan, David Levermore March 29, 2018 Convex Sets. Convex Functions A set S R n is called a convex set if for any points x, y S the line segment [x, y] := {tx + (1
More informationCapacity Theorems for Relay Channels
Capacity Theorems for Relay Channels Abbas El Gamal Department of Electrical Engineering Stanford University April, 2006 MSRI-06 Relay Channel Discrete-memoryless relay channel [vm 7] Relay Encoder Y n
More informationLecture 8 : Eigenvalues and Eigenvectors
CPS290: Algorithmic Foundations of Data Science February 24, 2017 Lecture 8 : Eigenvalues and Eigenvectors Lecturer: Kamesh Munagala Scribe: Kamesh Munagala Hermitian Matrices It is simpler to begin with
More informationEE 4TM4: Digital Communications II. Channel Capacity
EE 4TM4: Digital Communications II 1 Channel Capacity I. CHANNEL CODING THEOREM Definition 1: A rater is said to be achievable if there exists a sequence of(2 nr,n) codes such thatlim n P (n) e (C) = 0.
More informationOn cardinality of Pareto spectra
Electronic Journal of Linear Algebra Volume 22 Volume 22 (2011) Article 50 2011 On cardinality of Pareto spectra Alberto Seeger Jose Vicente-Perez Follow this and additional works at: http://repository.uwyo.edu/ela
More information