On the Energy-Distortion Tradeoff of Gaussian Broadcast Channels with Feedback

Size: px
Start display at page:

Download "On the Energy-Distortion Tradeoff of Gaussian Broadcast Channels with Feedback"

Transcription

1 On the Energy-Distortion Tradeoff of Gaussian Broadcast Channels with Feedback Yonathan Murin, Yonatan Kaspi, Ron Dabora 3, and Deniz Gündüz 4 Stanford University, USA, University of California, San Diego, USA, 3 Ben-Gurion University, Israel, 4 Imperial College London, UK Abstract This work focuses on the transmission energy required for communicating a pair of correlated Gaussian sources over a two-user Gaussian broadcast channel with noiseless feedback from the receivers (GBCF. Our goal is to characterize the minimum transmission energy required for broadcasting a pair of source samples, such that each source can be reconstructed at its respective receiver to within a target distortion, when the source-channel bandwidth ratio is not restricted. This minimum transmission energy is defined as the energy-distortion tradeoff (EDT. We derive a lower bound and three upper bounds on the optimal EDT. For the upper bounds we analyze three transmission schemes. Two schemes are based on separate source-channel coding, which code over multiple samples of source pairs. The third scheme is based on joint source-channel coding obtained by extending the Ozarow-Leung (OL transmission scheme, which applies uncoded linear transmission. Numerical simulations show that despite its simplicity, the EDT of the OL-based scheme is close to that of the better separation-based scheme, which indicates that the OL scheme is attractive for energy-efficient source transmission over GBCFs. Index Terms Gaussian broadcast channel with feedback, correlated sources, joint source-channel coding, energy efficiency, energy-distortion tradeoff. I. INTRODUCTION This work studies the energy-distortion tradeoff (EDT for the transmission of a pair of correlated Gaussian sources over a two-user Gaussian broadcast channel (GBC with noiseless, causal feedback (FB, referred to as the GBCF. The EDT was originally proposed in [3] to This work was supported by Israel Science Foundation under grant 396/. arts of this work were presented at IEEE Information Theory Workshop (ITW, April 05, Jerusalem, Israel, [], and accepted for presentation at IEEE International Symposium on Information Theory (ISIT, July 06, Barcelona, Spain, [].

2 characterize the minimum energy-per-source sample required to achieve a target distortion pair at the receivers, without constraining the source-channel bandwidth ratio. In many practical scenarios, e.g., satellite broadcasting [4], sensor networks measuring physical processes [5], [6], and wireless body-area sensor networks [7] [9], correlated observations need to be transmitted over the channel. Moreover, in many emerging applications, particularly in the Internet of things context, the sampling rates are low, and hence, the transmitter has abundant channel bandwidth per source sample, whereas the main limitation is on the available energy per source sample. For example, in wireless body-area sensor networks, wireless computing devices located on, or inside the human body measure physiologic parameters, which typically exhibit correlations as they originate from the same source. Moreover, these devices are commonly limited in energy, due to size as well as health-related transmission power constraints, while bandwidth can be relatively abundant due to short distance of communications [0] [3]. It is well known that for lossy source transmission over Gaussian memoryless point-to-point channels, either with or without feedback, when the bandwidth ratio is fixed and the average power is finite, separate source and channel coding (SSCC achieves the minimum possible average mean square error (MSE distortion [4, Thm. 3]. In [3, Cor. ] it is further shown that SSCC is optimal also in the sense of EDT: For any target MSE distortion level, the minimal transmission energy is achieved by optimal lossy compression [5, Ch. 3] followed by the most energy efficient channel code [6]. While [3, Cor. ] considered unbounded number of source samples, more recent works [7, Thm. 9] and [8] showed that similar observations hold also for the point-to-point channel with finite number of source samples. Except for a few special scenarios, e.g., [9] [] and references therein, the optimality of SSCC does not generalize to multiuser scenarios. In such cases a joint design of the source and channel codes can improve the performance. The impact of feedback on lossy joint source-channel coding (JSCC over multiuser channels was considered by relatively few works. Several achievability schemes and a set of necessary conditions for losslessly transmitting a pair of discrete and memoryless correlated sources over a multiple-access channel (MAC with feedback were presented []. Lossy transmission of correlated Gaussian sources over a two-user Gaussian MAC with feedback was studied in [3], in which sufficient conditions as well as necessary conditions for the achievability of an MSE distortion pair were derived when the source and channel bandwidths match. The work [3] also showed that for the symmetric setting, if the channel signal-to-noise ratio (SNR is low enough,

3 then uncoded transmission is optimal. While [3] considered only source-channel coding with a unit bandwidth ratio, [3] studied the EDT for the transmission of correlated Gaussian sources over a two-user Gaussian MAC with and without feedback, when the bandwidth ratio is not restricted. Recently, [4] improved the lower bound derived in [3] for the two-user Gaussian MAC without feedback, and extended the results to more than two users. While EDT has received attention in recent years, the EDT of BCs has not been considered. revious works on GBCFs mainly focused on channel coding aspects, considering independent and uniformly distributed messages. A key work in this context is the work of Ozarow and Leung (OL [5], which derived inner and outer bounds on the capacity region of the two-user GBCF, by extending the point-to-point transmission strategy of Schalkwijk-Kailath (SK [6]. In contrast to the point-to-point case [6], for GBCFs, the scheme of [5] is generally suboptimal. Alternative to the estimation theoretic analysis of [5], channel coding schemes inspired from control theory are proposed for GBCFs in [7] and [8]. Specifically, [8] used linear quadratic Gaussian (LQG control theory to develop a scheme, which achieves rate pairs outside the achievable rate region of the SK-oriented code developed in [5]. Recently, it was shown in [9] and [30] that, for the two-user GBCF with independent noise components with equal variance, the LQG scheme of [8] achieves the maximal sum-rate among all possible linear-feedback schemes. Finally, it was shown in [3] that the capacity of GBCFs with independent noise components and only a common message cannot be achieved using a coding scheme that employs linear feedback. Instead, a capacity-achieving non-linear feedback scheme was presented in [3]. JSCC for the transmission of correlated Gaussian sources over GBCFs in the finite horizon regime was previously considered in [3], in which the minimal number of channel uses required to achieve a target MSE distortion pair was studied. Three linear encoding schemes based on uncoded transmission were considered: The first scheme was a JSCC scheme based on the coding scheme of [5], to which we shall refer as the OL scheme; The second scheme was a JSCC scheme based on the scheme of [8], to which we shall refer as the LQG scheme; and the third scheme was a JSCC scheme whose parameters are obtained using dynamic programming (D. We note that the advantages of linear and uncoded transmission, as implemented in the OL and in the LQG schemes, include low computational complexity, low coding delays, and low We note that in the present work we discuss only the former two schemes since the scheme based on D becomes analytically and computationally infeasible as the number of channel uses goes to infinity. 3

4 storage requirements. We further note that although the LQG channel coding scheme of [8] for the two-user GBCF (with two messages achieves the largest rate region out of all known channel coding schemes, [3] shows that when the time horizon is finite, the JSCC OL scheme can achieve lower MSE distortion pairs than the JSCC LQG scheme. In the present work we analyze lossy source coding over GBCFs using SSCC and JSCC schemes based on a different performance metric the EDT. Main Contributions: This is the first work towards characterizing the EDT in GBCFs. We derive lower and upper bounds on the minimum energy per source pair required to achieve a target MSE distortion, for the transmission of a pair of Gaussian sources over a two-user GBCF, without constraining the number of channel uses per source sample. The new lower bound is based on cut-set arguments, and the upper bounds are obtained using three transmission schemes: Two SSCC schemes and an uncoded JSCC scheme. The first SSCC scheme jointly compresses the two source sequences into a single bit stream, and transmits this stream to both receivers as a common message. The second SSCC scheme separately encodes each source sequence into two distinct bit streams, and broadcasts them via the LQG channel code of [8]. It is shown that in terms of the minimum energy-per-bit, the LQG code provides no gain compared to orthogonal transmission, from which we conclude that the first SSCC scheme, which jointly compresses the sequences into a single stream, is more energy efficient. As both SSCC schemes apply coding over multiple samples of the source pairs, they require high computational complexity, long delays, and large storage. Alternatively, we consider the uncoded JSCC OL scheme presented in [3]. For this scheme we first consider the case of fixed SNR and derive an upper bound on the number of channel uses required to achieve a target distortion pair. When the SNR approaches zero, the required number of channel uses grows, and the derived bound becomes tight. At the limiting scenario of SNR 0 it provides an upper bound on the EDT. While our primary focus in this work is on the analysis of the three schemes mentioned above, such an analysis is a first step towards identifying schemes that would achieve improved EDT performance in GBCFs. Numerical results indicate that the SSCC scheme based on joint compression achieves better EDT compared to the JSCC OL scheme; yet, in many scenarios the gap is quite small. Moreover, in many applications there is a constraint on the maximal allowed latency. In such scenarios, coding over large blocks of independent and identically distributed (i.i.d. pairs of source samples introduces unacceptable delays, and instantaneous transmission of each observed pair of source samples via the JSCC-OL scheme may be preferable in order to satisfy the latency requirement, 4

5 Fig. : Gaussian broadcast channel with correlated sources and feedback links. Ŝ m, and Ŝm, are the reconstructions of S m, and S m,, respectively. while maintaining high energy efficiency. The rest of this paper is organized as follows: The problem formulation is detailed in Section II. Lower bounds on the minimum energy are derived in Section III. Upper bounds on the minimum energy are derived in Sections IV and V. Numerical results are given in Section VI, and concluding remarks are provided in Section VII. A. Notation II. ROBLEM DEFINITION We use capital letters to denote random variables, e.g., X, and boldface letters to denote column random vectors, e.g., X; the k th element of a vector X is denoted by X k, k, and we use X j k, with j k, to denote (X k, X k+,..., X j. We use sans-serif fonts to denote matrices, e.g., Q. We use h( to denote differential entropy, I( ; to denote mutual information, and X Y Z to denote a Markov chain, as defined in [5, Ch. 9 and Ch. ]. We use E { }, ( T, log(, R, and N to denote stochastic expectation, transpose, natural basis logarithm, the set of real numbers, and the set of integers, respectively. We let O(g ( denote the set of functions g ( such that lim sup 0 g ( /g ( <. Finally, we define sgn(x as the sign of x R, with sgn(0, see [5]. B. roblem Setup The two-user GBCF is depicted in Fig., with all the signals being real. The encoder observes m i.i.d. realizations of a correlated and jointly Gaussian pair of sources (S,j, S,j N (0, Q s, j =,..., m, where Q s σs [ ρ s ] ρ s, ρs <. The task of the encoder is to send the observations of the i th source S m i,, i =,, to the i th decoder (receiver denoted by Rx i. The received signal at time k at Rx i is given by: 5

6 Y i,k = X k + Z i,k, i =,, ( for k =,..., n, where the noise sequences {Z,k, Z,k } n k=, are i.i.d. over k =,,..., n, with (Z,k, Z,k N (0, Q z, where Q z σz [ ρ z ] ρ z, ρz <. Let Y k (Y,k, Y,k. Rx i, i=,, uses its channel output sequence Y n i, to estimate S m i, via Ŝm i, = g i (Y n i,, g i : R n R m. The encoder maps the observed pair of source sequences and the noiseless causal channel outputs obtained through the feedback links into a channel input via: X k = f k (S m,, S m,, Y, Y,..., Y k, f k : R (m+k R. We study the symmetric GBCF with parameters (σ s, ρ s, σ z, ρ z, and define a (D, E, m, n code to be a collection of n encoding functions {f k } n k= and two decoding functions g, g, such that the MSE distortion satisfies: m E{(S i,j Ŝi,j } md, 0<D σs, i =,, ( j= and the energy of the transmitted signals satisfies: n E { Xk} me. (3 k= Our objective is to characterize the minimal E, for a given target MSE D at each user, such that for all ɛ > 0 there exist m, n and a (D + ɛ, E + ɛ, m, n code. We call this minimal value the EDT, and denote it by E(D. Remark (Energy constraint vs. power constraint. The constraint (3 reflects the energy per source sample rather than per channel use. Note that by defining m E, the constraint (3 can n be equivalently stated as n n k= E {X k } which is the well known average power constraint. Yet, since there is no constraint on the ratio between m and n, when we let the number of channel uses per source sample go to infinity, the classical average power constraint goes to zero. In the next section we present a lower bound on E(D. III. LOWER BOUND ON E(D Our first result is a lower bound on E(D. First, we define R S (D as the rate-distortion function for the source variable S, and R S,S (D as the rate distortion function for jointly compressing the pair of sources {S, S }, see [33, Sec. III.B]: Note that [33, Sec. III.B] uses the function R S,S (D, D as it considers different distortion constraint for each source. For the present case, in which the same distortion constraint is applied to both sources, R S,S (D can be obtained by setting D = D = D in [33, Eq. (0] and thus we use the simplified notation R S,S (D. 6

7 R S (D ( σ log s (4a D ( R S,S (D log σ s (+ ρ s D σs( ρ s, D>σs( ρ s (. (4b log σ 4 s ( ρ s, D σ D s( ρ s The lower bound on the EDT is stated in the following theorem: Theorem. The EDT E(D satisfies E(D E lb (D, where: E lb (D=σ z log e max { R S (D, (+ρ z R S,S (D }. (5 roof: As we consider a symmetric setting, in the following we focus on the distortion at Rx, and derive two different lower bounds. The first lower bound is obtained by identifying the minimal energy required in order to achieve an MSE distortion of D at Rx, while ignoring Rx. The second lower bound is obtained by considering the transmission of both sources over a point-to-point channel with two outputs Y and Y. We begin with the following lemma: Lemma. If for any ɛ > 0, a (D + ɛ, E + ɛ, m, n code exists, then the rate-distortion functions in (4 are upper bounded by: R S (D m R S,S (D m n I(X k ; Y,k k= n I(X k ; Y,k, Y,k. k= roof: The proof is provided in Appendix A. Now, for the right-hand-side of (6a we write: n I(X k ; Y,k (a m m k= (b m (c n k= ( log + var{x k} σz n var{x k } σz log e k= (6a (6b (E + ɛ σ z log e, (7 where (a follows from the capacity of an additive white Gaussian noise channel subject to an input variance constraint; (b follows from changing the logarithm base and from the inequality log e ( + x x, x 0, and (c follows by noting that (3 implies n k= var{x k} m(e+ɛ. Combining with (6a we obtain R S (D (E+ɛ σ z log e which implies that σ z log e R S (D E + ɛ. Since this holds for every ɛ > 0, we arrive at the first term on the right-hand-side of (5. Next, the right-hand-side of (6b can be expressed as: 7

8 m n I(X k ; Y,k, Y,k m k= n k= log ( QYk, (8 Q Zk where (8 follows from [5, Thm ], from [5, Thm. 9.4.] for jointly Gaussian random variables, and by defining Z k = (Z,k, Z,k and the covariance matrices Q Yk E { } Y k Yk T and Q Zk E { Z k Z T k }. To explicitly write QYk we note that E{Y i,k } = E {(X k + Z i,k } = E {X k } + σ z for i =,, and similarly E {Y,k Y,k } = E {X k } + ρ zσ z. We also have E{Z i,k } = σ z and E {Z,k Z,k } = ρ z σ z. Thus, we obtain Q Yk = E{X k }σ z( ρ z + σ 4 z( ρ z and Q Zk =σz( ρ 4 z. lugging these expressions into (8 results in: n ( m log QYk n E {Xk } Q Zk m σz( + ρ z log e k= k= (E + ɛ σ z( + ρ z log e, (9 where the inequalities follow the same arguments as those leading to (7. Combining with (6b we obtain R S,S (D (E+ɛ σ z (+ρz log e which implies that σ z( + ρ z log e R S,S (D E + ɛ. Since this holds for every ɛ > 0, we have the second term on the right-hand-side of (5. This concludes the proof. In the next sections we study three achievability schemes which lead to upper bounds on E(D. While these schemes have simple construction, analyzing their achievable EDT is novel and challenging. IV. UER BOUNDS ON E(D VIA SSCC SSCC in multi-user scenarios carries the advantages of modularity and ease of integration with the layering approach which is common in many practical communications systems. In this section we analyze the EDT of two SSCC schemes. The first scheme takes advantage of the correlation between the sources and ignores the correlation between the noise components; The second scheme ignores the correlation between the sources and aims at utilizing the correlation between the noise components. A. The SSCC-ρ s Scheme: Utilizing ρ s This scheme utilizes the correlation between the sources by first jointly encoding both source sequences into a single bit stream via the source coding scheme proposed in [34, Thm. 6], see also [33, Thm. III.]. This step gives rise to the rate-distortion function stated in (4b. The resulting bit stream is then encoded via channel code for sending a common message over the GBC (without feedback, and is transmitted to both receivers. Note that the optimal code for 8

9 transmitting a common message over GBCFs with ρ z 0 is not known, but, when ρ z = 0, the capacity for sending a common message over the GBCF is achievable using an optimal pointto-point channel code which ignores the feedback. Thus, SSCC-ρ s uses the correlation between the sources, but ignores the correlation among the noise components. The following theorem provides the minimum energy achieved by this scheme. Theorem. The SSCC-ρ s scheme achieves the following EDT: ( σ E sep (ρs z log σ s (+ ρ s e D σs( ρ (D= s, D >σs( ρ s ( σz log σ 4 s ( ρ s e, D σs( ρ s D roof: The optimal rate for jointly encoding the source sequences into a single bit stream is R S,S (D, given in (4b [33, Sec. III.B]. Note that from this stream both source sequences can be recovered to within a distortion D. The encoded bit stream is then transmitted to both receivers via a capacity-achieving point-to-point channel code [5, Thm. 0..] (note that this code does not exploit the causal feedback [5, Thm. 8..]. Let E common b min (0 denote the minimum energy-perbit required for reliable transmission over the Gaussian point-to-point channel [6]. From [6, pg. 05] we have Eb common min = σz log e. As the considered scheme is based on source-channel separation, the achievable EDT is given by E(D = E common b min stated in (4b. This results in the EDT in (0. R S,S (D, where R S,S (D is Remark (EDT without feedback. A basic question that may arise is about the EDT for transmitting a pair of correlated Gaussian sources over the GBC without feedback. While this problem has not been addressed previously, the transmission of correlated Gaussian source over the Gaussian broadcast channel (GBC has been studied in [35]. Applying the results of [35, Footnote ] leads to the EDT of the SSCC-ρ s scheme, which indeed does not use feedback. Hence, the EDT of the SSCC-ρ s scheme gives an indication of the achievable EDT for sending a pair of correlated Gaussian sources over GBCs without feedback. B. The SSCC-ρ z Scheme: Utilizing ρ z This scheme utilizes the correlation among the noise components, which is available through the feedback links for channel encoding, but does not exploit the correlation between the sources for compression. First, each of the source sequences is encoded using the optimal rate-distortion source code for scalar Gaussian sources [5, Thm. 3.3.]. Then, the resulting bit streams are sent over the GBCF using the LQG channel coding scheme of [8]. The following theorem characterizes the minimum energy per source sample required by this scheme. 9

10 Theorem 3. The SSCC-ρ z scheme achieves the EDT: E (ρz sep (D = σ z log e ( σ s D. ( roof: The encoder separately compresses each source sequence at rate R S (D, where R S (D is given in (4a. Thus, from each encoded stream the corresponding source sequence can be recovered to within a distortion D. Then, the two encoded bit streams are broadcast to their corresponding receivers using the LQG scheme of [8]. Let E LQG b min denote the minimum required energy per pair of encoded bits required by the LQG scheme. In Appendix B we show that for the symmetric setting: E LQG b min = σ z log e. ( Since two bit streams are transmitted, the achievable EDT is given by E(D=E LQG b min R S (D, yielding the EDT in (. Remark 3 (SSCC-ρ z vs. time-sharing. Since E (ρz sep (D is independent of ρ z, the LQG scheme cannot take advantage of the correlation among the noise components to improve the minimum energy per source sample needed in the symmetric setting. Indeed, an EDT of E (ρz sep (D can also be achieved by transmitting the two bit streams via time sharing over the GBCF without using the feedback. In this context, we recall that also [36, rop. ] stated that in Gaussian broadcast channels without feedback, time sharing is asymptotically optimal as the power tends to zero. Remark 4 (The relationship between E (ρs sep (D, E (ρz sep (D and E lb (D. We observe that E (ρs sep (D E (ρz sep (D. For D σ s( ρ s this relationship directly follows from the expressions of E (ρs sep (D and E (ρz sep (D. For D > σ s( ρ s the above relationship holds if the polynomial q(d = D (+ ρ s σ sd+σ 4 s( ρ s is positive. This is satisfied as the the discriminant of q(d is negative. We thus conclude that it is preferable to use the correlation between the sources than the correlation between the noise components. We further note that as D 0, the gap between E (ρs sep (D and E (ρz sep (D is bounded. On the other hand, as D 0, the gap between E (ρs sep (D and E lb (D is not bounded. 3 Remark 5 (Relevance to more than two users. The lower bound presented in Thm. can be extended to the case of K > sources using the results of [34, Thm. ] and [37]. The upper bound of Thm. can also be extended in a relatively simple manner to K > sources, again, 3 Note that when ρ z = 0, the right-hand-side of (5 is maximized by σ z log e R S (D. 0

11 using [34, Thm. ]. The upper bound in Thm. 3 can be extended to K > sources by using the LQG scheme for K > [8, Thm. ]. V. UER BOUND ON E(D VIA THE OL SCHEME Next, we derive a third upper bound on E(D by applying uncoded JSCC transmission based on the OL scheme [3, Sec. 3]. This scheme sequentially transmits the source pairs (S,j, S,j, j =,,..., m, without source coding. We note that the OL scheme is designed for a fixed = E/n, and from condition (3 we obtain that = E/n n n k= E {X k }. An upper bound on E(D can now be obtained by calculating the minimal number of channel uses required by the OL scheme to achieve the target distortion D, which we denote by K OL (, D, and then evaluating the required energy via K OL (,D k= E {X k }. A. JSCC Based on the OL Scheme In the OL scheme, each receiver recursively estimates its intended source samples. At each time index, the transmitter uses the feedback to compute the estimation errors at the receivers at the previous time index, and transmits a linear combination of these errors. The scheme is terminated after K OL (, D channel uses, where K OL (, D is chosen such that the target MSE D is achieved at each receiver. Setup and Initialization: Let Ŝi,k be the estimate of S i at Rx i after receiving the k th channel output Y i,k. Let ɛ i,k Ŝi,k S i be the estimation error after k transmissions, and define ˆɛ i,k Ŝ i,k Ŝi,k. It follows that ɛ i,k =ɛ i,k ˆɛ i,k. Next, define α i,k E{ɛ i,k } to be the MSE at Rx i after k transmissions, ρ k E{ɛ,kɛ,k} α k to be the correlation between the estimation errors after k transmissions, and Ψ k. For initialization, set Ŝi,0=0 (+ ρ k and ɛ i,0 = S i, thus, ρ 0 =ρ s. Encoding: At the k th channel use the transmitter sends X k = Ψ k αk (ɛ,k +ɛ,k sgn(ρ k, and the corresponding channel outputs are given by (. Decoding: Each receiver computes ˆɛ i,k, i =,, based only on Y i,k via ˆɛ i,k = E{ɛ i,k Y i,k} Y i,k, E{Yi,k} see [5, pg. 669] for the explicit expressions. Then, similarly to [39, Eq. (7], the estimate of the source S i is given by Ŝi,k = k m= ˆɛ i,m. Let Υ + σ z( ρ z and ν z σ 4 z( ρ z. The instantaneous MSE α k is given by the recursive expression [5, Eq. (5]: σz + Ψ k α i,k = α ( ρ k i,k, i =,, (3 +σz where the recursive expression for ρ k is given by [5, Eq. (7]: ρ k = (ρ zσzυ+ν z ρ k Ψ k Υ( ρ k sgn(ρ k +σz(σ z+ψ k ( ρ k. (4

12 Note that for this setup and intializations α,k = α,k α k. Remark 6 (Initialization of the OL scheme. Note that the OL scheme implements uncoded linear transmission at the sender, and linear (memoryless estimation at the receivers. Further note that in the above OL scheme we do not apply the initialization procedure described in [5, pg. 669], as it optimizes the achievable rate rather than the distortion. Instead, we set ɛ i,0 = S i and ρ 0 =ρ s, thus, taking advantage of the correlation among the sources. Let E OL-min (D denote the minimal energy per source pair required to achieve MSE D at each receiver using the OL scheme. Since in the OL scheme E {X k } =, k, we have E OL-min(D = min { K OL (, D}. From (3 one observes that the MSE value at time instant k depends on ρ k and the MSE at time k. Due to the non-linear recursive expression for ρ k in (4, it is very complicated to obtain an explicit analytical characterization for K OL (, D. For any fixed, we can upper bound E OL-min (D, and therefore E(D, via upper bounding K OL (, D. Thus, in the following we use upper bounds on K OL (, D to bound E OL-min (D. ( In [3, Thm. ] we showed that K OL (, D ( +σ z log σ s, which leads to the upper bound: D ( E OL-min (D min ( +σz log σ s D 0 E sep (ρz (D. However, when 0, the upper bound ( K OL (, D ( +σ z log σ s is not tight. 4 For this reason, in the next subsection we derive D a tighter upper bound on K OL (, D whose ratio to K OL (, D approaches as 0. This bound is then used to derive a tighter upper bound on E OL-min (D. B. A New Upper Bound on K OL (, D Following ideas from [3, Thm. 7], we assume a fixed σ z and approximate the recursive relationships for ρ k and α k given in (3 and (4 for small values of. We note that while σz [3, Thm. 7] obtained only asymptotic expressions for ρ k and α k for 0, in the following σz we derive tight bounds for these quantities and obtain an upper bound on K OL (, D which is valid for small values of > 0. Then, letting 0, the derived upper bound on K σz σz OL (, D yields an upper bound on E OL-min (D, and therefore on E(D. { First, define: ψ ρ z + 5( ρ z, ψ min{ ρz,( ρz} ρ and ψ σz 3 max z +ρ ( ρ z, z 4( ρ z }. We further define the positive quantities B ( and B ( in (4 at the top of the next page, and finally, we define the quantities: 4 This can be seen by considering a numerical example: Let σ s =, ρ s =0.9, σ z =, ρ z =0.7, D =, and consider two possible values for : = 0 4 and = 0 6. Via numerical simulations one can find that K OL(, D = 383, while the upper ( +σz bound is log = For we have K OL(, D = , while the upper bound is Thus, ( σ s D the gap between K OL(, D and the above bound increases as decreases.

13 B ( (8+ψ 3 +4σz +σzψ 4 +4σz 6 (4σzψ +8, 8σz 0 B ( + σ z (4 σz 6 ρ( (3 ρ z +B 8σz (, (5a ( ρ s F ( ψ B ( ψ (3 ρz 3 +B (, (5b F ( ρ s ψ B ( ρ s F 3 ( ψ B ( 8σ z B (, (5c ψ σz ( (3 ρz +B 8σz ( + B ( +B ( ρ z (, (5d ρ z F 4 ( ( + ρ( + σ z σz B (, (5e ρ lb (, D ρ z + σ s D (ρ z + ρ s e F 3(, (5f For small values of σ z D ub th σ s( ρ z ρ s e F 3( ρ z, (5g Dth lb σ s( ρ z ρ s e F 3(. (5h ρ z, the following theorem provides a tight upper bound on K OL (, D. Theorem 4. Let satisfy the conditions ρ( + σ z B ( < and B ( < ψ. The OL scheme achieves MSE D at each receiver within K OL (, D KOL ub (, D channel uses, where, K ub OL (, D is given by: ( σz log ( ρz ρ lb (,D(+ ρ s (3 ρ z ( ρ z ρ s (+ρ lb (,D ( ( KOL ub (, D= log D( ρz ρ( F σs ( ρz ρs 3 ( ( + σ z log ( ρz(+ ρ s (3 ρ z ρ z ρ s + σ z (F ( +F (, D >D ub th, (6a F 4 ( + σ z (F ( +F (, D <D lb th. (6b roof outline: Let ρ s 0 (otherwise replace S with S. From [5, pg. 669] it follows that ρ k decreases with k until it crosses zero. Let K th min{k N : ρ k+ < 0} be the largest time index k for which ρ k 0. In the proof of Thm. 4 we show that, for sufficiently small, σz ρ k ρ(, k K th. Hence, ρ k decreases until time K th and then has a bounded magnitude (larger than zero. This implies that the behavior of α k is different in the regions k K th and k > K th. Let Dth be the MSE after K th channel uses. We first derive upper and lower bounds 3

14 on D th, denoted by D ub th and Dth lb, respectively. Consequently, we arrive at the two cases in Thm. 4: (6a corresponds to the case of K OL (, D<K th, while (6b corresponds to the case K OL (, D>K th. The detailed proof is provided in Appendix C. Remark 7 (Bandwidth used by the OL scheme. Note that as 0, K ub OL increases to infinity. Since, as 0, K OL, it follows that as 0, K KOL ub OL. Assuming the source samples are generated at a fixed rate, this implies that the bandwidth used by the OL scheme increases to infinity as 0. Remark 8 (Thm. 4 holds for non-asymptotic values of. Note that the conditions on in Thm. 4 can be written as < th with th depending explicitly on σ z and ρ z. lugging B ( in (4 into the condition B ( < ψ, we obtain the condition: (8+ψ 4 +4σ z 3 +σ 4 zψ + 4σ 6 z (4σ zψ +8 < 8ψ σ 0 z. We note that, in this formulation the coefficients of m, m =,, 3, 4, are all positive. Therefore, the left-hand-side is monotonically increasing with, and since 8ψ σ 0 z is constant, the condition B ( < ψ is satisfied if < th,, for some threshold th,. Following similar arguments, the same conclusion holds for ρ( + σ z B ( < with some threshold th, instead of th,. Thus, by setting th =min{ th,, th, } we obtain that the conditions in Thm. 4 restrict the range of power constraint values for which the theorem holds for some < th, i.e., for low SNR values. C. An Upper Bound on E OL-min (D Next, we let 0, and use KOL ub (, D derived in Thm. 4 to obtain an upper bound on E OL-min (D, and therefore on E(D. This upper bound is stated in the following theorem. Theorem 5. Let D th σ s ( ρz ρs ρ z. Then, E OL-min (D E OL (D, where ( σz σs 3 ρ z log (+ ρs D+( ρ z(d σs+σ s ρ s, D D th, ( E OL (D= σz log( ( ρz ρs σs ( ρ zd ( + 3 ρ z log ( ρz(+ ρs ρ z ρ s, D<D th. roof: We evaluate K ub OL (, D for 0. Note that B i( O(, i =,, which implies that F j ( O(, j =,, 3, 4. To see why this holds, consider, for example, F ( : ( ρ s ψ 3 (3 ρz F ( = + B (. ψ B ( } {{ } (a 8σ z } {{ } (b (7 4

15 Since ρ s, ψ, and ψ 3 are constants, and since B ( O(, we have that (a O(/. Now, since (3 ρz is constant we have that (b O(. Combining these two asymptotics we 8σz conclude that F ( O(. Now, for D D th we bound the minimum E(D as follows: First, for D D ub th defined in (5g, we multiply both sides of (6a by. As F (, F ( O(, then, as 0, we obtain: ( KOL(, ub D = σ z ( ρz ρ lb (, D( + ρ s log + O( 3 ρ z ( ρ z ρ s ( + ρ lb (, D ( (a σz σ log s( + ρ s, 0 3 ρ z D + ( ρ z (D σs + σs ρ s where (a follows from (5f by noting that F 3 ( O(, and therefore, when 0, F 3 ( 0. This implies that as 0 we have ρ lb (, D ρ z + σ s D (ρ z + ρ s. Finally, note that for 0 we have D ub th D th. Next, for D <D th we bound the minimum E(D by first noting that since ρ( O( and σz B ( O(, then F 4 ( O(. Now, for D < Dth lb defined in (5h, multiplying both sides of (6b by we obtain: ( ( KOL(, ub D =σz D( ρz ρ( log +O( σs( ρ z ρ s +O( ( + σ z ( ρz (+ ρ s log +O( 3 ρ z ρ z ρ s ( ( (a ( ρz ρ s σ 0 σ s z log + ( ( ρz ( + ρ s log, ( ρ z D 3 ρ z ρ z ρ s where (a follows from the fact that ρ( O(, see (5a. This concludes the proof. Remark 9 (erformance for extreme correlation values. Similarly to Remark 4, as D 0, the gap between E OL (D and E lb (D is not bounded, which is in contrast to the situation for the OL-based JSCC for Gaussian MAC with feedback, cf. [3, Remark 6]. When ρ s = 0 we obtain that E OL (D = E (ρs sep (D = E (ρz sep (D, for all 0 D σs, which follows as the sources are, in this case we independent. When ρ s and ρ z then E OL (D E lb (D σ z log ( σ s D also have E (ρs sep (D E lb (D and E (ρz sep (D E OL (D. Remark 0 (Comparison of the OL scheme and the separation-based schemes. From (0 and (7 it follows that if D <σ s( ρ s then E OL (D E (ρs sep (D is given by: E OL (D E sep (ρs (D ( ( = σz ( ρz ρ s σs log ( ρ z + 3 ρ z log ( ( ρz ( + ρ s ρ z ρ s log( σs( ρ 4 s. (8 5

16 Note that E OL (D E (ρs sep (D is independent of D in this range. Similarly, from Thm. 3 and (7 it follows that if D <D th then E sep (ρz (D E OL (D is independent of D and is given by: ( ( ( E sep (ρz (D E OL (D = σz ρz log + ( ρz ρ s log. (9 ( ρ z ρ s 3 ρ z ( ρ z ( + ρ s Note that in both cases the gap decreases with ρ s since the scenario approaches the transmission of independent sources. The gap also increases as ρ z decreases. Remark (Uncoded JSCC transmission via the LQG scheme. In this work we did not analyze the EDT of JSCC using the LQG scheme, E LQG (D. The reason is two-fold: analytic tractability and practical relevance. For the analytic tractability, we note that in [3, Sec. 4] we adapted the LQG scheme of [8] to the transmission of correlated Gaussian sources over GBCFs. It follows from that work that obtaining a closed-form expression for E LQG (D seems intractable. Yet, using the results and analysis of [3] one can find good approximations for E LQG (D. As for the practical relevance, we showed in [3] that, in the context of JSCC, and in contrast to the results of [9] for the channel coding problem, when the duration of transmission is finite and the transmission power is very low, the OL scheme outperforms the LQG scheme. This conclusion is expected to hold for the EDT as well. Indeed, numerical simulations indicate that the LQG scheme of [3, Sec. 4] achieves roughly the same minimum energy as the SSCC-ρ z scheme, while in Section VI we show that the OL scheme outperforms the SSCC-ρ z scheme. VI. NUMERICAL RESULTS In the following, we numerically compare E lb (D, E (ρs sep (D, E (ρz sep (D and E OL (D. We set σ s =σ z = and consider several values of ρ z and ρ s. Fig. depicts E lb (D, E (ρs sep (D, E (ρz sep (D and E OL (D for ρ z = 0.5, and for two values of ρ s : ρ s = 0. and ρ s = 0.9. As E (ρz sep (D is not a function of ρ s, it is plotted only once. It can be observed that when ρ s = 0., then E (ρs sep (D, E (ρz sep (D and E OL (D are almost the same. This follows because when the correlation between the sources is low, the gain from utilizing this correlation is also low. Furthermore, when ρ s = 0. the gap between the lower bound and the upper bounds is evident. On the other hand, when ρ s = 0.9, both SSCC-ρ s and OL significantly improve upon SSCC-ρ z. This follows as SSCC-ρ z does not take advantage of the correlation among the sources. It can further be observed that when the distortion is low, there is a small gap between OL and SSCC-ρ s, while when the distortion is high, OL and SSCC-ρ s require roughly the same amount of energy. This is also supported by Fig 4. We conclude that as the SSCC-ρ s scheme encodes over long 6

17 Fig. : Upper and lower bounds on E(D for σs = σz =, Fig. 3: Upper and lower bounds on E(D for σs = σz = and ρz = 0.5. Solid lines correspond to ρs = 0.9, while, ρs = 0.8. Solid lines correspond to ρz = 0.9, while dashed lines correspond to ρs = 0.. dashed lines correspond to ρz = 0.9. Fig. 4: Normalized excess energy requirement of the OL Fig. 5: Normalized excess energy requirement of the scheme over the SSCC-ρs scheme, ρz = 0.5. SSCC-ρz scheme over the OL scheme, ρz = 0.5. sequences of source samples, it better exploits the correlation among the sources compared to the OL scheme. (ρ (ρ Fig. 3 depicts Elb (D, Eseps (D, Esepz (D and EOL (D vs. D, for ρs = 0.8, and for ρz (ρ (ρ { 0.9, 0.9}. As Eseps (D and Esepz (D are not functions of ρz, we plot them only once. It can (ρ be observed that when ρz = 0.9, Elb (D, Eseps (D and EOL (D are very close to each other, as was analytically concluded in Remark 9. On the other hand, for ρz = 0.9 the gap between the bounds is large. (ρ (ρ Note that analytically comparing Eseps (D, Esepz (D and EOL (D for any D is difficult. Our (ρ (ρ numerical simulations suggest the relationship Eseps (D EOL (D Esepz (D, for all values (ρ of D, ρs, ρz. For example, Fig. 4 depicts the difference EOL (D Eseps (D for ρz = 0.5, and for all values of D and ρs. It can be observed that for low correlation among the sources, or (ρ for high distortion values, Eseps (D EOL (D. On the other hand, when the correlation among the sources is high and the distortion is low, then the SSCC-ρs scheme improves upon the OL 7

18 scheme. When D < σ s( ρ s we can use (8 to analytically compute the gap between the energy requirements of the two schemes. For instance, at ρ s = 0.99, and for D < 0.0 the gap is approximately Fig. 5 depicts the difference E (ρz sep (D E OL (D for ρ z = 0.5. It can be observed that larger ρ s results in a larger gap. Again we can use (9 to analytically compute the gap between the energy requirements of the two schemes: At ρ s = 0.99,and for D < 0.34, the gap is approximately Finally, as stated in Remark, the LQG scheme achieves approximately the same minimum energy as the SSCC-ρ z scheme, hence, OL is expected to outperform LQG. This is in accordance with [3, Sec. 6], which shows that for low values of, OL outperforms LQG, but, in contrast to the channel coding problem in which the LQG scheme of [8] is known to achieve higher rates compared to the OL scheme of [5]. VII. CONCLUSIONS AND FUTURE WORK In this work we studied the EDT for sending correlated Gaussian sources over GBCFs, without constraining the source-channel bandwidth ratio. In particular, we first lower bounded the minimum energy per source pair using information theoretic tools, and then presented upper bounds on the minimum energy per source pair by analyzing three transmission schemes. The first scheme, SSCC-ρ s, jointly encodes the source sequences into a single bit stream, while the second scheme, SSCC-ρ z, separately encodes each of the sequences, thus, it does not exploit the correlation among the sources. We further showed that the LQG channel coding scheme of [8] achieves the same minimum energy-per-bit as orthogonal transmission, and therefore, in terms of the minimum energy-per-bit, it does not take advantage of the correlation among the noise components. We also concluded that SSCC-ρ s outperforms SSCC-ρ z. For the OL scheme we first derived an upper bound on the number of channel uses required to achieve a target distortion pair, which, in the limit 0, leads to an upper bound on the minimum energy per source pair. Numerical results indicate that SSCC-ρ s outperforms the OL scheme as well. On the other hand, the gap between the energy requirements of the two schemes is rather small. We note that in the SSCC-ρ s scheme coding takes place over blocks samples of source pairs which introduces high computational complexity, large delays, and requires large amount of storage. On the other hand, the OL scheme applies linear and uncoded transmission to each source sample pair separately, which requires low computational complexity, short delays, and limited storage. We conclude that the OL scheme provides an attractive alternative for energy efficient transmission over GBCFs. 8

19 Finally, we note that for the Gaussian MAC with feedback, OL-based JSCC is very close to the lower bound, cf. [3, Fig. 4], while, as indicated in Section VI, for the GBCF the gap between the OL-JSCC and the lower bound is larger. This difference is also apparent in the channel coding problem. 5 Therefore, it will be interesting to see if the duality results between the Gaussian MAC with feedback and the GBCF, presented in [9] and [30] for the channel coding problem, can be extended to JSCC, and if the approach of [9] and [30] facilitates a tractable EDT analysis. We consider this as a direction for future work. AENDIX A ROOF OF LEMMA We begin with the proof of (6a. From [5, Thm. 3..] we have: Now, for any ɛ > 0 we write: m R S (D + ɛ (a (b (c R S (D = inf Ŝ S :E{(Ŝ S } D inf I(Ŝ; S. Ŝm, S m, : m j= E{(Ŝ,j S,j } m(d+ɛ inf Ŝm, S m, : m j= E{(Ŝ,j S,j } m(d+ɛ inf Ŝm, S m, : m j= E{(Ŝ,j S,j } m(d+ɛ (d I(Ŝm,; S m,, m I(Ŝ,j; S,j j= m j= m j= I(Ŝ,j; S,j S j, I(Ŝm,; S,j S j, (A. (A. where (a follows from the convexity of the mutual information I(Ŝ; S in the conditional distribution Ŝ S ; (b follows from the assumption that the sources are memoryless; (c is due to the non-negativity of mutual information, and (d follows from the chain rule for mutual information. Next, we upper bound I(Ŝm,; S m, as follows: I(Ŝm,; S m, (a I(Y n (b (c = k=,; S m, n n h(y,k k= n I(X k ; Y,k, k= h(y,k S m,, X k, Y k, (A.3 5 Note that while the OL strategy achieves the capacity of the Gaussian MAC with feedback [8, Sec. V.A], for the GBCF the OL strategy is sub-optimal [5]. 9

20 where (a follows from the data processing inequality [5, Sec..8], by noting that S m Y n Ŝ m ; (b follows from the fact that conditioning reduces entropy, and (c follows from the fact that since the channel is memoryless, then Y,k channel input X k, see (. By combining (A. (A.3 we obtain (6a. Next, we prove (6b. From [33, Thm. III.] we have: R S,S (D= Again, for any ɛ > 0, we write: m R S,S (D + ɛ (a (b inf Ŝ,Ŝ S,S : E{(Ŝi S i } D, i=, inf Ŝm,,Ŝm, Sm,,Sm, : depend on (S m, X k, Y k, only through the I(Ŝ, Ŝ; S, S. j= m j= E{(Ŝj,i S j,i } m(d+ɛ, i=, inf Ŝm,,Ŝm, Sm,,Sm, : j= m j= E{(Ŝj,i S j,i } m(d+ɛ, i=, m I(Ŝ,j, Ŝ,j; S,j, S,j m I(Ŝm,, Ŝm,; S,, m S, m (A.4 I(Ŝm,, Ŝm,; S m,, S m,, (A.5 where (a is due to the convexity of the mutual information I(Ŝ, Ŝ; S, S in the conditional distribution Ŝ,Ŝ S,S, and (b follows from the memorylessness of the sources, the chain rule for mutual information, and from the fact that it is non-negative. Next, we upper bound I(Ŝm,, Ŝm,; S,, m S, m as follows: I(Ŝm, Ŝm ; S m, S m (a I(Y n, Y n ; S m, S m n (b (c = k= h(y,k, Y,k n k= n I(X k ; Y,k, Y,k, k= h(y,k, Y,k S m, S m, X k, Y k,, Y k, (A.6 where (a follows from the data processing inequality [5, Sec..8], by noting that we have (S m, S m (Y n, Y n (Ŝm, Ŝm ; (b follows from the fact that conditioning reduces entropy, and (c follows from the fact that the channel is memoryless, thus, Y,k and Y,k depend on (S m, S m, X k, Y k,, Y k, only through the channel input X k, see (. By combining (A.4 (A.6 we obtain (6b. This concludes the proof of the lemma. AENDIX B ROOF OF EQUATION ( - MINIMUM ENERGY-ER-BIT FOR THE LQG SCHEME We first note that by following the approach taken in the achievability part of [40, Thm. ] it can be shown that for the symmetric GBCF with symmetric rates, the minimum energy-per-bit is given by: 0

21 E LQG b min = lim 0 RLQG sum (, (B. where RLQG sum ( is the sum rate achievable by the LQG scheme. Let x 0 be the unique positive ( real root of the third order polynomial p(x=(+ρ z x 3 +( ρ z x +ρ z + x ( ρ σz z. From [8, Eq. (6], for the symmetric GBCF, the achievable per-user rate of the LQG scheme is R LQG ( = log (x 0 bits. We now follow the approach taken in [3, Appendix A.3] and bound x 0 using Budan s theorem [4]: Theorem (Budan s Theorem. Let t(x = a 0 +a x+...+a n x n be a real polynomial of degree n, and let t (j (x be its j th derivative. For α R, define the function V (α as the number of sign variations in the sequence t(α, t ( (α,..., t (n (α. Then, the number of roots of the polynomial t(x in the open interval (a, b is equal to V (a V (b e, where e is an even number (which may be zero. Explicitly writing the derivatives of p(x and evaluating the sequence p (i (, i = 0,,, 3, ( we have V ( =. Note that sgn(p ( ( = sgn 4 depends on the term, however, σz σ ( z since sgn(p (0 (=sgn = and sgn(p ( (=sgn (8+4ρ z =, in both cases we have σ z V ( =. Next, we let χ = where α > 0 is a real constant. Setting x = +χ we obtain ασz p(+χ=(+ρ z χ 3 +(4+ρ z αχ +(4 αχ, p ( (+χ=3(+ρ z χ +(8+4ρ z αχ+4, and p ( (+χ, p (3 (+χ > 0. Note that we are interested in the regime 0 which implies that χ 0. Now, for χ small enough we have p ( (+χ 4>0. Furthermore, when χ 0 we have p (0 (+χ = p (+ (0 (4 α. Clearly, for any 0 < α < 4, lim ασz ασz 0 p (0 ( + > 0, ασz and when α >4, lim 0 p (0 (+ < 0. Thus, letting 0 < δ <4, Budan s theorem implies that ασz ( when 0, the number of roots of p(x in the interval +, + is. From (4+δσz (4 δσz Descartes rule [4, Sec..6.3], we know that there is a unique positive root, thus, as this holds for any 0<δ <4, we conclude that lim 0 x 0 =+. lugging the value of x σz 0 into (B., and considering the sum-rate, we obtain: This concludes the proof. E LQG b min = lim 0 log ( + σ z AENDIX C ROOF OF THEOREM 4 = σ z log e. (B. First, note that if ρ s < 0, we can replace S with S, which changes only the sign of ρ s in the joint distribution of the sources. Note that changing the sign of ρ k in (4 only changes

22 the sign of ρ k while ρ k remains unchanged. Hence, α k in (3 is not affected by changing the sign of ρ s. Therefore, in the following we assume that 0 ρ s <. To simplify the notation we also omit the dependence of K OL (, D on and D, and write K OL. For characterizing the termination time of the OL scheme we first characterize the temporal evolution of ρ k. From [5, pg. 669], ρ k decreases (with k until it crosses zero. Let K th min{k : ρ k+ < 0}, regardless of whether the target MSE was achieved or not. We begin our analysis with the case K OL K th. A. The Case of K OL K th From (4 we write the (first order Maclaurin series expansion [4, Ch ] of ρ k+ ρ k, in the parameter : ρ k+ ρ k = (( ρ k sgn(ρ k+( ρ z (sgn(ρ k +ρ k + Res (, k, (C. σ z where Res (, k is the remainder of the first order Maclaurin series expansion. The following lemma upper bounds Res (, k : Lemma C.. For any k, we have Res (, k B (, where B ( is defined in (4. roof: Let ϕ(, k ρ k+ ρ k. From Taylor s theorem [4, Subsec ] it follows that Res (, k = ϕ(x,k x, for some 0 x. In the following we upper bound ϕ(x,k, for x 0 x : Let b =( ρ k (sgn(ρ k+ρ k, b =ρ z σ z( ρ k (sgn(ρ k+ρ k +σ z( ρ z ((sgn(ρ k + ρ k +ρ k ( ρ k, a = ( ρ k, a = σ z ((+ ρ k + ρ k, and a 0 = σ 4 z(+ ρ k. 6 Using (4, the expression ρ k+ ρ k can now be explicitly written as ϕ(, k = we obtain: ϕ(x, k x = b b a +a +a 0, from which ( (a a b a b x 3 + 3a 0 a b x + 3a 0 a b x + a 0 a b a 0b (a x + a x + a 0 3 Since a, a > 0, we lower bound the denominator of ϕ(x,k x in the range 0 x by (a x + a x + a 0 3 a 3 0 = 8σ z. Next, we upper bound each of the terms in the numerator of ϕ(x,k x. For the coefficient of x 3 we write a a b a b (4σ z + ρ z σ z + σ z( ρ z 5 = σ z (8 + ψ, where the inequality follows from the fact that 3 + ρ k ρ k coefficient of x. 4. For the we write 3a 0 a b 4σ 4 z. For the coefficient of x we write 3a 0 a b σ 6 z ( ρ z + 5( ρ z = σ 6 zψ. Finally, for the constant term we write a 0 a b a 0b 4σ 8 z (4σ zψ + 8. Collecting the above bounds on the terms of the numerator, and the bound on the denominator, we obtain Res (, k B (, concluding the proof of the lemma. 6 Note that in order to simplify the expressions we ignore the dependence of b, b, a, a, and a 0 in k

23 Note that for k K th we have ρ k > 0. Hence, (C. together with Lemma C. imply that, for k K th we have: ρ k+ ρ k ( + ρ k( ρ z ρ k + B ( σz. Next, note that the function f(x ( + x( ρ z x, 0 x < satisfies: min{ ρ z, ( ρ z } f(x (3 ρ z, 0 x<. (C. 4 The lower bound on f(x follows from the fact that f(x is concave, and the upper bound is obtained via: max x R f(x. When B ( < ψ then we have min{ ρz,( ρz} min{ ρ z,( ρ z} σ z B ( and the bound on (+ρ k( ρ z ρ k σz min{ ρ z, ( ρ z } σ z > B (, hence >0. Thus, we can combine the lower and upper bounds on Res (, k, to obtain the following lower and upper bounds on ρ k+ ρ k : B ( ρ k+ ρ k (3 ρ z + B ( σz 8σz. (C.3 Now, recalling that ρ 0 = ρ s, the fact that the bound in (C.3 does not depend on k results in the following upper bound on K th : ρ s K th min{ ρ σz z, ( ρ z } B ( = ρ s ψ B (. Next, using the fact that ρ k 0 for k < K th, we rewrite (C. as follows: ρ k+ ρ k ( + ρ k ( ρ z ρ k = σz which implies that for K OL K th we have: Observe that K OL ρ k+ ρ k ( + ρ k ( ρ z ρ k = K OL σ z + + Res (, k ( + ρ k ( ρ z ρ k, K OL (C.4 Res (, k ( + ρ k ( ρ z ρ k. (C.5 Res (,k (+ρ k ( ρ z ρ k O(, which follows from the fact that 0 < (+ρ k ( ρ z ρ k is lower and upper bounded independent of and ρ k, see (C., and from the fact that Res (, k O(. Next, we focus on the left-hand-side of (C.5 and write: K OL KOL ρ k+ ρ k ( + ρ k ( ρ z ρ k = ( + ρ k ( ρ z ρ k Since ρ z <, it follows that f(x = (+x( x ρ z ρk+ ρ k dρ. (C.6 is continuous, differentiable and bounded over 0 x <, which implies that there exists a constant c 0 such that: max x [ρ k+,ρ k ] f(x f(ρ k c 0 ρ k+ ρ k. (C.7 The constant c 0 is upper bounded in the following Lemma C.. Note that (C.7 constitutes an upper bound on the maximal magnitude of the difference between and. f(ρ k+ f(ρ k { } Lemma C.. The constant c 0, in (C.7, satisfies: c 0 max ρz +ρ ( ρ z, z 4( ρ z ψ 3. 3

Joint Source-Channel Coding for the Multiple-Access Relay Channel

Joint Source-Channel Coding for the Multiple-Access Relay Channel Joint Source-Channel Coding for the Multiple-Access Relay Channel Yonathan Murin, Ron Dabora Department of Electrical and Computer Engineering Ben-Gurion University, Israel Email: moriny@bgu.ac.il, ron@ee.bgu.ac.il

More information

THE multiple-access relay channel (MARC) is a multiuser

THE multiple-access relay channel (MARC) is a multiuser IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 60, NO. 10, OCTOBER 2014 6231 On Joint Source-Channel Coding for Correlated Sources Over Multiple-Access Relay Channels Yonathan Murin, Ron Dabora, and Deniz

More information

5958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 12, DECEMBER 2010

5958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 12, DECEMBER 2010 5958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 12, DECEMBER 2010 Capacity Theorems for Discrete, Finite-State Broadcast Channels With Feedback and Unidirectional Receiver Cooperation Ron Dabora

More information

Approximately achieving the feedback interference channel capacity with point-to-point codes

Approximately achieving the feedback interference channel capacity with point-to-point codes Approximately achieving the feedback interference channel capacity with point-to-point codes Joyson Sebastian*, Can Karakus*, Suhas Diggavi* Abstract Superposition codes with rate-splitting have been used

More information

An Outer Bound for the Gaussian. Interference channel with a relay.

An Outer Bound for the Gaussian. Interference channel with a relay. An Outer Bound for the Gaussian Interference Channel with a Relay Ivana Marić Stanford University Stanford, CA ivanam@wsl.stanford.edu Ron Dabora Ben-Gurion University Be er-sheva, Israel ron@ee.bgu.ac.il

More information

On the Capacity of the Interference Channel with a Relay

On the Capacity of the Interference Channel with a Relay On the Capacity of the Interference Channel with a Relay Ivana Marić, Ron Dabora and Andrea Goldsmith Stanford University, Stanford, CA {ivanam,ron,andrea}@wsl.stanford.edu Abstract Capacity gains due

More information

Source-Channel Coding Theorems for the Multiple-Access Relay Channel

Source-Channel Coding Theorems for the Multiple-Access Relay Channel Source-Channel Coding Theorems for the Multiple-Access Relay Channel Yonathan Murin, Ron Dabora, and Deniz Gündüz Abstract We study reliable transmission of arbitrarily correlated sources over multiple-access

More information

Cooperative Communication with Feedback via Stochastic Approximation

Cooperative Communication with Feedback via Stochastic Approximation Cooperative Communication with Feedback via Stochastic Approximation Utsaw Kumar J Nicholas Laneman and Vijay Gupta Department of Electrical Engineering University of Notre Dame Email: {ukumar jnl vgupta}@ndedu

More information

ECE Information theory Final (Fall 2008)

ECE Information theory Final (Fall 2008) ECE 776 - Information theory Final (Fall 2008) Q.1. (1 point) Consider the following bursty transmission scheme for a Gaussian channel with noise power N and average power constraint P (i.e., 1/n X n i=1

More information

Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information

Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information 204 IEEE International Symposium on Information Theory Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information Omur Ozel, Kaya Tutuncuoglu 2, Sennur Ulukus, and Aylin Yener

More information

On Source-Channel Communication in Networks

On Source-Channel Communication in Networks On Source-Channel Communication in Networks Michael Gastpar Department of EECS University of California, Berkeley gastpar@eecs.berkeley.edu DIMACS: March 17, 2003. Outline 1. Source-Channel Communication

More information

ProblemsWeCanSolveWithaHelper

ProblemsWeCanSolveWithaHelper ITW 2009, Volos, Greece, June 10-12, 2009 ProblemsWeCanSolveWitha Haim Permuter Ben-Gurion University of the Negev haimp@bgu.ac.il Yossef Steinberg Technion - IIT ysteinbe@ee.technion.ac.il Tsachy Weissman

More information

Upper Bounds on the Capacity of Binary Intermittent Communication

Upper Bounds on the Capacity of Binary Intermittent Communication Upper Bounds on the Capacity of Binary Intermittent Communication Mostafa Khoshnevisan and J. Nicholas Laneman Department of Electrical Engineering University of Notre Dame Notre Dame, Indiana 46556 Email:{mhoshne,

More information

Source and Channel Coding for Correlated Sources Over Multiuser Channels

Source and Channel Coding for Correlated Sources Over Multiuser Channels Source and Channel Coding for Correlated Sources Over Multiuser Channels Deniz Gündüz, Elza Erkip, Andrea Goldsmith, H. Vincent Poor Abstract Source and channel coding over multiuser channels in which

More information

Interference Channels with Source Cooperation

Interference Channels with Source Cooperation Interference Channels with Source Cooperation arxiv:95.319v1 [cs.it] 19 May 29 Vinod Prabhakaran and Pramod Viswanath Coordinated Science Laboratory University of Illinois, Urbana-Champaign Urbana, IL

More information

Information Theory for Wireless Communications, Part II:

Information Theory for Wireless Communications, Part II: Information Theory for Wireless Communications, Part II: Lecture 5: Multiuser Gaussian MIMO Multiple-Access Channel Instructor: Dr Saif K Mohammed Scribe: Johannes Lindblom In this lecture, we give the

More information

Lecture 4 Noisy Channel Coding

Lecture 4 Noisy Channel Coding Lecture 4 Noisy Channel Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 9, 2015 1 / 56 I-Hsiang Wang IT Lecture 4 The Channel Coding Problem

More information

6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011

6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011 6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011 On the Structure of Real-Time Encoding and Decoding Functions in a Multiterminal Communication System Ashutosh Nayyar, Student

More information

Concatenated Coding Using Linear Schemes for Gaussian Broadcast Channels with Noisy Channel Output Feedback

Concatenated Coding Using Linear Schemes for Gaussian Broadcast Channels with Noisy Channel Output Feedback IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. XX, NO. Y, MONTH 204 Concatenated Coding Using Linear Schemes for Gaussian Broadcast Channels with Noisy Channel Output Feedback Ziad Ahmad, Student Member, IEEE,

More information

Cut-Set Bound and Dependence Balance Bound

Cut-Set Bound and Dependence Balance Bound Cut-Set Bound and Dependence Balance Bound Lei Xiao lxiao@nd.edu 1 Date: 4 October, 2006 Reading: Elements of information theory by Cover and Thomas [1, Section 14.10], and the paper by Hekstra and Willems

More information

Wideband Fading Channel Capacity with Training and Partial Feedback

Wideband Fading Channel Capacity with Training and Partial Feedback Wideband Fading Channel Capacity with Training and Partial Feedback Manish Agarwal, Michael L. Honig ECE Department, Northwestern University 145 Sheridan Road, Evanston, IL 6008 USA {m-agarwal,mh}@northwestern.edu

More information

Reliable Computation over Multiple-Access Channels

Reliable Computation over Multiple-Access Channels Reliable Computation over Multiple-Access Channels Bobak Nazer and Michael Gastpar Dept. of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA, 94720-1770 {bobak,

More information

Towards control over fading channels

Towards control over fading channels Towards control over fading channels Paolo Minero, Massimo Franceschetti Advanced Network Science University of California San Diego, CA, USA mail: {minero,massimo}@ucsd.edu Invited Paper) Subhrakanti

More information

Iterative Encoder-Controller Design for Feedback Control Over Noisy Channels

Iterative Encoder-Controller Design for Feedback Control Over Noisy Channels IEEE TRANSACTIONS ON AUTOMATIC CONTROL 1 Iterative Encoder-Controller Design for Feedback Control Over Noisy Channels Lei Bao, Member, IEEE, Mikael Skoglund, Senior Member, IEEE, and Karl Henrik Johansson,

More information

820 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 58, NO. 2, FEBRUARY Stefano Rini, Daniela Tuninetti, and Natasha Devroye

820 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 58, NO. 2, FEBRUARY Stefano Rini, Daniela Tuninetti, and Natasha Devroye 820 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 58, NO. 2, FEBRUARY 2012 Inner and Outer Bounds for the Gaussian Cognitive Interference Channel and New Capacity Results Stefano Rini, Daniela Tuninetti,

More information

Lecture 5 Channel Coding over Continuous Channels

Lecture 5 Channel Coding over Continuous Channels Lecture 5 Channel Coding over Continuous Channels I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw November 14, 2014 1 / 34 I-Hsiang Wang NIT Lecture 5 From

More information

On the Secrecy Capacity of the Z-Interference Channel

On the Secrecy Capacity of the Z-Interference Channel On the Secrecy Capacity of the Z-Interference Channel Ronit Bustin Tel Aviv University Email: ronitbustin@post.tau.ac.il Mojtaba Vaezi Princeton University Email: mvaezi@princeton.edu Rafael F. Schaefer

More information

A Proof of the Converse for the Capacity of Gaussian MIMO Broadcast Channels

A Proof of the Converse for the Capacity of Gaussian MIMO Broadcast Channels A Proof of the Converse for the Capacity of Gaussian MIMO Broadcast Channels Mehdi Mohseni Department of Electrical Engineering Stanford University Stanford, CA 94305, USA Email: mmohseni@stanford.edu

More information

Lecture 22: Final Review

Lecture 22: Final Review Lecture 22: Final Review Nuts and bolts Fundamental questions and limits Tools Practical algorithms Future topics Dr Yao Xie, ECE587, Information Theory, Duke University Basics Dr Yao Xie, ECE587, Information

More information

Lecture 1: The Multiple Access Channel. Copyright G. Caire 12

Lecture 1: The Multiple Access Channel. Copyright G. Caire 12 Lecture 1: The Multiple Access Channel Copyright G. Caire 12 Outline Two-user MAC. The Gaussian case. The K-user case. Polymatroid structure and resource allocation problems. Copyright G. Caire 13 Two-user

More information

IN this paper, we show that the scalar Gaussian multiple-access

IN this paper, we show that the scalar Gaussian multiple-access 768 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 50, NO. 5, MAY 2004 On the Duality of Gaussian Multiple-Access and Broadcast Channels Nihar Jindal, Student Member, IEEE, Sriram Vishwanath, and Andrea

More information

On the Capacity and Degrees of Freedom Regions of MIMO Interference Channels with Limited Receiver Cooperation

On the Capacity and Degrees of Freedom Regions of MIMO Interference Channels with Limited Receiver Cooperation On the Capacity and Degrees of Freedom Regions of MIMO Interference Channels with Limited Receiver Cooperation Mehdi Ashraphijuo, Vaneet Aggarwal and Xiaodong Wang 1 arxiv:1308.3310v1 [cs.it] 15 Aug 2013

More information

The Capacity Region of the Gaussian Cognitive Radio Channels at High SNR

The Capacity Region of the Gaussian Cognitive Radio Channels at High SNR The Capacity Region of the Gaussian Cognitive Radio Channels at High SNR 1 Stefano Rini, Daniela Tuninetti and Natasha Devroye srini2, danielat, devroye @ece.uic.edu University of Illinois at Chicago Abstract

More information

Approximate Capacity of Fast Fading Interference Channels with no CSIT

Approximate Capacity of Fast Fading Interference Channels with no CSIT Approximate Capacity of Fast Fading Interference Channels with no CSIT Joyson Sebastian, Can Karakus, Suhas Diggavi Abstract We develop a characterization of fading models, which assigns a number called

More information

Interference Channel aided by an Infrastructure Relay

Interference Channel aided by an Infrastructure Relay Interference Channel aided by an Infrastructure Relay Onur Sahin, Osvaldo Simeone, and Elza Erkip *Department of Electrical and Computer Engineering, Polytechnic Institute of New York University, Department

More information

Feedback Capacity of the Gaussian Interference Channel to Within Bits: the Symmetric Case

Feedback Capacity of the Gaussian Interference Channel to Within Bits: the Symmetric Case 1 arxiv:0901.3580v1 [cs.it] 23 Jan 2009 Feedback Capacity of the Gaussian Interference Channel to Within 1.7075 Bits: the Symmetric Case Changho Suh and David Tse Wireless Foundations in the Department

More information

Primary Rate-Splitting Achieves Capacity for the Gaussian Cognitive Interference Channel

Primary Rate-Splitting Achieves Capacity for the Gaussian Cognitive Interference Channel Primary Rate-Splitting Achieves Capacity for the Gaussian Cognitive Interference Channel Stefano Rini, Ernest Kurniawan and Andrea Goldsmith Technische Universität München, Munich, Germany, Stanford University,

More information

A Half-Duplex Cooperative Scheme with Partial Decode-Forward Relaying

A Half-Duplex Cooperative Scheme with Partial Decode-Forward Relaying A Half-Duplex Cooperative Scheme with Partial Decode-Forward Relaying Ahmad Abu Al Haija, and Mai Vu, Department of Electrical and Computer Engineering McGill University Montreal, QC H3A A7 Emails: ahmadabualhaija@mailmcgillca,

More information

On Gaussian MIMO Broadcast Channels with Common and Private Messages

On Gaussian MIMO Broadcast Channels with Common and Private Messages On Gaussian MIMO Broadcast Channels with Common and Private Messages Ersen Ekrem Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 20742 ersen@umd.edu

More information

On Capacity Under Received-Signal Constraints

On Capacity Under Received-Signal Constraints On Capacity Under Received-Signal Constraints Michael Gastpar Dept. of EECS, University of California, Berkeley, CA 9470-770 gastpar@berkeley.edu Abstract In a world where different systems have to share

More information

Simultaneous SDR Optimality via a Joint Matrix Decomp.

Simultaneous SDR Optimality via a Joint Matrix Decomp. Simultaneous SDR Optimality via a Joint Matrix Decomposition Joint work with: Yuval Kochman, MIT Uri Erez, Tel Aviv Uni. May 26, 2011 Model: Source Multicasting over MIMO Channels z 1 H 1 y 1 Rx1 ŝ 1 s

More information

Energy-Distortion Tradeoff for the Gaussian Broadcast Channel with Feedback

Energy-Distortion Tradeoff for the Gaussian Broadcast Channel with Feedback 016 IEEE International Sympoium on Information Theory Energy-itortion Tradeoff for the Gauian Broadcat Channel with Feedback Yonathan Murin 1, Yonatan Kapi, Ron abora 3, and eniz Gündüz 4 1 Stanford Univerity,

More information

K User Interference Channel with Backhaul

K User Interference Channel with Backhaul 1 K User Interference Channel with Backhaul Cooperation: DoF vs. Backhaul Load Trade Off Borna Kananian,, Mohammad A. Maddah-Ali,, Babak H. Khalaj, Department of Electrical Engineering, Sharif University

More information

Multicoding Schemes for Interference Channels

Multicoding Schemes for Interference Channels Multicoding Schemes for Interference Channels 1 Ritesh Kolte, Ayfer Özgür, Haim Permuter Abstract arxiv:1502.04273v1 [cs.it] 15 Feb 2015 The best known inner bound for the 2-user discrete memoryless interference

More information

Multiuser Successive Refinement and Multiple Description Coding

Multiuser Successive Refinement and Multiple Description Coding Multiuser Successive Refinement and Multiple Description Coding Chao Tian Laboratory for Information and Communication Systems (LICOS) School of Computer and Communication Sciences EPFL Lausanne Switzerland

More information

Random Access: An Information-Theoretic Perspective

Random Access: An Information-Theoretic Perspective Random Access: An Information-Theoretic Perspective Paolo Minero, Massimo Franceschetti, and David N. C. Tse Abstract This paper considers a random access system where each sender can be in two modes of

More information

AN INTRODUCTION TO SECRECY CAPACITY. 1. Overview

AN INTRODUCTION TO SECRECY CAPACITY. 1. Overview AN INTRODUCTION TO SECRECY CAPACITY BRIAN DUNN. Overview This paper introduces the reader to several information theoretic aspects of covert communications. In particular, it discusses fundamental limits

More information

Error Exponent Region for Gaussian Broadcast Channels

Error Exponent Region for Gaussian Broadcast Channels Error Exponent Region for Gaussian Broadcast Channels Lihua Weng, S. Sandeep Pradhan, and Achilleas Anastasopoulos Electrical Engineering and Computer Science Dept. University of Michigan, Ann Arbor, MI

More information

AN INFORMATION THEORY APPROACH TO WIRELESS SENSOR NETWORK DESIGN

AN INFORMATION THEORY APPROACH TO WIRELESS SENSOR NETWORK DESIGN AN INFORMATION THEORY APPROACH TO WIRELESS SENSOR NETWORK DESIGN A Thesis Presented to The Academic Faculty by Bryan Larish In Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy

More information

On the Capacity of Diffusion-Based Molecular Timing Channels With Diversity

On the Capacity of Diffusion-Based Molecular Timing Channels With Diversity On the Capacity of Diffusion-Based Molecular Timing Channels With Diversity Nariman Farsad, Yonathan Murin, Milind Rao, and Andrea Goldsmith Electrical Engineering, Stanford University, USA Abstract This

More information

On the Throughput, Capacity and Stability Regions of Random Multiple Access over Standard Multi-Packet Reception Channels

On the Throughput, Capacity and Stability Regions of Random Multiple Access over Standard Multi-Packet Reception Channels On the Throughput, Capacity and Stability Regions of Random Multiple Access over Standard Multi-Packet Reception Channels Jie Luo, Anthony Ephremides ECE Dept. Univ. of Maryland College Park, MD 20742

More information

LQG Control Approach to Gaussian Broadcast Channels With Feedback

LQG Control Approach to Gaussian Broadcast Channels With Feedback IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 58, NO. 8, AUGUST 2012 5267 LQG Control Approach to Gaussian Broadcast Channels With Feedback Ehsan Ardestanizadeh, Member, IEEE, Paolo Minero, Member, IEEE,

More information

Feedback Capacity of the First-Order Moving Average Gaussian Channel

Feedback Capacity of the First-Order Moving Average Gaussian Channel Feedback Capacity of the First-Order Moving Average Gaussian Channel Young-Han Kim* Information Systems Laboratory, Stanford University, Stanford, CA 94305, USA Email: yhk@stanford.edu Abstract The feedback

More information

On the Duality between Multiple-Access Codes and Computation Codes

On the Duality between Multiple-Access Codes and Computation Codes On the Duality between Multiple-Access Codes and Computation Codes Jingge Zhu University of California, Berkeley jingge.zhu@berkeley.edu Sung Hoon Lim KIOST shlim@kiost.ac.kr Michael Gastpar EPFL michael.gastpar@epfl.ch

More information

Lecture 6 I. CHANNEL CODING. X n (m) P Y X

Lecture 6 I. CHANNEL CODING. X n (m) P Y X 6- Introduction to Information Theory Lecture 6 Lecturer: Haim Permuter Scribe: Yoav Eisenberg and Yakov Miron I. CHANNEL CODING We consider the following channel coding problem: m = {,2,..,2 nr} Encoder

More information

ELEC546 Review of Information Theory

ELEC546 Review of Information Theory ELEC546 Review of Information Theory Vincent Lau 1/1/004 1 Review of Information Theory Entropy: Measure of uncertainty of a random variable X. The entropy of X, H(X), is given by: If X is a discrete random

More information

Secure Degrees of Freedom of the MIMO Multiple Access Wiretap Channel

Secure Degrees of Freedom of the MIMO Multiple Access Wiretap Channel Secure Degrees of Freedom of the MIMO Multiple Access Wiretap Channel Pritam Mukherjee Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 074 pritamm@umd.edu

More information

Dirty Paper Coding vs. TDMA for MIMO Broadcast Channels

Dirty Paper Coding vs. TDMA for MIMO Broadcast Channels TO APPEAR IEEE INTERNATIONAL CONFERENCE ON COUNICATIONS, JUNE 004 1 Dirty Paper Coding vs. TDA for IO Broadcast Channels Nihar Jindal & Andrea Goldsmith Dept. of Electrical Engineering, Stanford University

More information

On Common Information and the Encoding of Sources that are Not Successively Refinable

On Common Information and the Encoding of Sources that are Not Successively Refinable On Common Information and the Encoding of Sources that are Not Successively Refinable Kumar Viswanatha, Emrah Akyol, Tejaswi Nanjundaswamy and Kenneth Rose ECE Department, University of California - Santa

More information

On the Capacity Region of the Gaussian Z-channel

On the Capacity Region of the Gaussian Z-channel On the Capacity Region of the Gaussian Z-channel Nan Liu Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 74 nkancy@eng.umd.edu ulukus@eng.umd.edu

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 2, FEBRUARY Uplink Downlink Duality Via Minimax Duality. Wei Yu, Member, IEEE (1) (2)

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 2, FEBRUARY Uplink Downlink Duality Via Minimax Duality. Wei Yu, Member, IEEE (1) (2) IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 2, FEBRUARY 2006 361 Uplink Downlink Duality Via Minimax Duality Wei Yu, Member, IEEE Abstract The sum capacity of a Gaussian vector broadcast channel

More information

Multiaccess Channels with State Known to One Encoder: A Case of Degraded Message Sets

Multiaccess Channels with State Known to One Encoder: A Case of Degraded Message Sets Multiaccess Channels with State Known to One Encoder: A Case of Degraded Message Sets Shivaprasad Kotagiri and J. Nicholas Laneman Department of Electrical Engineering University of Notre Dame Notre Dame,

More information

The Fading Number of a Multiple-Access Rician Fading Channel

The Fading Number of a Multiple-Access Rician Fading Channel The Fading Number of a Multiple-Access Rician Fading Channel Intermediate Report of NSC Project Capacity Analysis of Various Multiple-Antenna Multiple-Users Communication Channels with Joint Estimation

More information

Energy State Amplification in an Energy Harvesting Communication System

Energy State Amplification in an Energy Harvesting Communication System Energy State Amplification in an Energy Harvesting Communication System Omur Ozel Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland College Park, MD 20742 omur@umd.edu

More information

On Network Interference Management

On Network Interference Management On Network Interference Management Aleksandar Jovičić, Hua Wang and Pramod Viswanath March 3, 2008 Abstract We study two building-block models of interference-limited wireless networks, motivated by the

More information

The Poisson Channel with Side Information

The Poisson Channel with Side Information The Poisson Channel with Side Information Shraga Bross School of Enginerring Bar-Ilan University, Israel brosss@macs.biu.ac.il Amos Lapidoth Ligong Wang Signal and Information Processing Laboratory ETH

More information

ECE Information theory Final

ECE Information theory Final ECE 776 - Information theory Final Q1 (1 point) We would like to compress a Gaussian source with zero mean and variance 1 We consider two strategies In the first, we quantize with a step size so that the

More information

A New Achievable Region for Gaussian Multiple Descriptions Based on Subset Typicality

A New Achievable Region for Gaussian Multiple Descriptions Based on Subset Typicality 0 IEEE Information Theory Workshop A New Achievable Region for Gaussian Multiple Descriptions Based on Subset Typicality Kumar Viswanatha, Emrah Akyol and Kenneth Rose ECE Department, University of California

More information

Appendix B Information theory from first principles

Appendix B Information theory from first principles Appendix B Information theory from first principles This appendix discusses the information theory behind the capacity expressions used in the book. Section 8.3.4 is the only part of the book that supposes

More information

Communication constraints and latency in Networked Control Systems

Communication constraints and latency in Networked Control Systems Communication constraints and latency in Networked Control Systems João P. Hespanha Center for Control Engineering and Computation University of California Santa Barbara In collaboration with Antonio Ortega

More information

Rate Region of the Quadratic Gaussian Two-Encoder Source-Coding Problem

Rate Region of the Quadratic Gaussian Two-Encoder Source-Coding Problem Rate Region of the Quadratic Gaussian Two-Encoder Source-Coding Problem Aaron B Wagner, Saurabha Tavildar, and Pramod Viswanath June 9, 2007 Abstract We determine the rate region of the quadratic Gaussian

More information

Title. Author(s)Tsai, Shang-Ho. Issue Date Doc URL. Type. Note. File Information. Equal Gain Beamforming in Rayleigh Fading Channels

Title. Author(s)Tsai, Shang-Ho. Issue Date Doc URL. Type. Note. File Information. Equal Gain Beamforming in Rayleigh Fading Channels Title Equal Gain Beamforming in Rayleigh Fading Channels Author(s)Tsai, Shang-Ho Proceedings : APSIPA ASC 29 : Asia-Pacific Signal Citationand Conference: 688-691 Issue Date 29-1-4 Doc URL http://hdl.handle.net/2115/39789

More information

An Achievable Rate Region for the 3-User-Pair Deterministic Interference Channel

An Achievable Rate Region for the 3-User-Pair Deterministic Interference Channel Forty-Ninth Annual Allerton Conference Allerton House, UIUC, Illinois, USA September 8-3, An Achievable Rate Region for the 3-User-Pair Deterministic Interference Channel Invited Paper Bernd Bandemer and

More information

18.2 Continuous Alphabet (discrete-time, memoryless) Channel

18.2 Continuous Alphabet (discrete-time, memoryless) Channel 0-704: Information Processing and Learning Spring 0 Lecture 8: Gaussian channel, Parallel channels and Rate-distortion theory Lecturer: Aarti Singh Scribe: Danai Koutra Disclaimer: These notes have not

More information

Tracking and Control of Gauss Markov Processes over Packet-Drop Channels with Acknowledgments

Tracking and Control of Gauss Markov Processes over Packet-Drop Channels with Acknowledgments Tracking and Control of Gauss Markov Processes over Packet-Drop Channels with Acknowledgments Anatoly Khina, Victoria Kostina, Ashish Khisti, and Babak Hassibi arxiv:1702.01779v4 [cs.it] 23 May 2018 Abstract

More information

arxiv: v2 [cs.it] 28 May 2017

arxiv: v2 [cs.it] 28 May 2017 Feedback and Partial Message Side-Information on the Semideterministic Broadcast Channel Annina Bracher and Michèle Wigger arxiv:1508.01880v2 [cs.it] 28 May 2017 May 30, 2017 Abstract The capacity of the

More information

Midterm Exam Information Theory Fall Midterm Exam. Time: 09:10 12:10 11/23, 2016

Midterm Exam Information Theory Fall Midterm Exam. Time: 09:10 12:10 11/23, 2016 Midterm Exam Time: 09:10 12:10 11/23, 2016 Name: Student ID: Policy: (Read before You Start to Work) The exam is closed book. However, you are allowed to bring TWO A4-size cheat sheet (single-sheet, two-sided).

More information

(Classical) Information Theory III: Noisy channel coding

(Classical) Information Theory III: Noisy channel coding (Classical) Information Theory III: Noisy channel coding Sibasish Ghosh The Institute of Mathematical Sciences CIT Campus, Taramani, Chennai 600 113, India. p. 1 Abstract What is the best possible way

More information

CS6304 / Analog and Digital Communication UNIT IV - SOURCE AND ERROR CONTROL CODING PART A 1. What is the use of error control coding? The main use of error control coding is to reduce the overall probability

More information

Decentralized Stochastic Control with Partial Sharing Information Structures: A Common Information Approach

Decentralized Stochastic Control with Partial Sharing Information Structures: A Common Information Approach Decentralized Stochastic Control with Partial Sharing Information Structures: A Common Information Approach 1 Ashutosh Nayyar, Aditya Mahajan and Demosthenis Teneketzis Abstract A general model of decentralized

More information

Lattices for Distributed Source Coding: Jointly Gaussian Sources and Reconstruction of a Linear Function

Lattices for Distributed Source Coding: Jointly Gaussian Sources and Reconstruction of a Linear Function Lattices for Distributed Source Coding: Jointly Gaussian Sources and Reconstruction of a Linear Function Dinesh Krithivasan and S. Sandeep Pradhan Department of Electrical Engineering and Computer Science,

More information

Efficient Use of Joint Source-Destination Cooperation in the Gaussian Multiple Access Channel

Efficient Use of Joint Source-Destination Cooperation in the Gaussian Multiple Access Channel Efficient Use of Joint Source-Destination Cooperation in the Gaussian Multiple Access Channel Ahmad Abu Al Haija ECE Department, McGill University, Montreal, QC, Canada Email: ahmad.abualhaija@mail.mcgill.ca

More information

Capacity Region of the Permutation Channel

Capacity Region of the Permutation Channel Capacity Region of the Permutation Channel John MacLaren Walsh and Steven Weber Abstract We discuss the capacity region of a degraded broadcast channel (DBC) formed from a channel that randomly permutes

More information

Capacity Theorems for Relay Channels

Capacity Theorems for Relay Channels Capacity Theorems for Relay Channels Abbas El Gamal Department of Electrical Engineering Stanford University April, 2006 MSRI-06 Relay Channel Discrete-memoryless relay channel [vm 7] Relay Encoder Y n

More information

Distributed Hypothesis Testing Over Discrete Memoryless Channels

Distributed Hypothesis Testing Over Discrete Memoryless Channels 1 Distributed Hypothesis Testing Over Discrete Memoryless Channels Sreejith Sreekumar and Deniz Gündüz Imperial College London, UK Email: {s.sreekumar15, d.gunduz}@imperial.ac.uk Abstract A distributed

More information

On the K-user Cognitive Interference Channel with Cumulative Message Sharing Sum-Capacity

On the K-user Cognitive Interference Channel with Cumulative Message Sharing Sum-Capacity 03 EEE nternational Symposium on nformation Theory On the K-user Cognitive nterference Channel with Cumulative Message Sharing Sum-Capacity Diana Maamari, Daniela Tuninetti and Natasha Devroye Department

More information

On Scalable Source Coding for Multiple Decoders with Side Information

On Scalable Source Coding for Multiple Decoders with Side Information On Scalable Source Coding for Multiple Decoders with Side Information Chao Tian School of Computer and Communication Sciences Laboratory for Information and Communication Systems (LICOS), EPFL, Lausanne,

More information

Degrees of Freedom Region of the Gaussian MIMO Broadcast Channel with Common and Private Messages

Degrees of Freedom Region of the Gaussian MIMO Broadcast Channel with Common and Private Messages Degrees of Freedom Region of the Gaussian MIMO Broadcast hannel with ommon and Private Messages Ersen Ekrem Sennur Ulukus Department of Electrical and omputer Engineering University of Maryland, ollege

More information

Interactions of Information Theory and Estimation in Single- and Multi-user Communications

Interactions of Information Theory and Estimation in Single- and Multi-user Communications Interactions of Information Theory and Estimation in Single- and Multi-user Communications Dongning Guo Department of Electrical Engineering Princeton University March 8, 2004 p 1 Dongning Guo Communications

More information

Generalized Writing on Dirty Paper

Generalized Writing on Dirty Paper Generalized Writing on Dirty Paper Aaron S. Cohen acohen@mit.edu MIT, 36-689 77 Massachusetts Ave. Cambridge, MA 02139-4307 Amos Lapidoth lapidoth@isi.ee.ethz.ch ETF E107 ETH-Zentrum CH-8092 Zürich, Switzerland

More information

ECE 534 Information Theory - Midterm 2

ECE 534 Information Theory - Midterm 2 ECE 534 Information Theory - Midterm Nov.4, 009. 3:30-4:45 in LH03. You will be given the full class time: 75 minutes. Use it wisely! Many of the roblems have short answers; try to find shortcuts. You

More information

Interactive Interference Alignment

Interactive Interference Alignment Interactive Interference Alignment Quan Geng, Sreeram annan, and Pramod Viswanath Coordinated Science Laboratory and Dept. of ECE University of Illinois, Urbana-Champaign, IL 61801 Email: {geng5, kannan1,

More information

Secret Key Agreement Using Asymmetry in Channel State Knowledge

Secret Key Agreement Using Asymmetry in Channel State Knowledge Secret Key Agreement Using Asymmetry in Channel State Knowledge Ashish Khisti Deutsche Telekom Inc. R&D Lab USA Los Altos, CA, 94040 Email: ashish.khisti@telekom.com Suhas Diggavi LICOS, EFL Lausanne,

More information

Simultaneous Nonunique Decoding Is Rate-Optimal

Simultaneous Nonunique Decoding Is Rate-Optimal Fiftieth Annual Allerton Conference Allerton House, UIUC, Illinois, USA October 1-5, 2012 Simultaneous Nonunique Decoding Is Rate-Optimal Bernd Bandemer University of California, San Diego La Jolla, CA

More information

Feedback Capacity of a Class of Symmetric Finite-State Markov Channels

Feedback Capacity of a Class of Symmetric Finite-State Markov Channels Feedback Capacity of a Class of Symmetric Finite-State Markov Channels Nevroz Şen, Fady Alajaji and Serdar Yüksel Department of Mathematics and Statistics Queen s University Kingston, ON K7L 3N6, Canada

More information

Information Theory Meets Game Theory on The Interference Channel

Information Theory Meets Game Theory on The Interference Channel Information Theory Meets Game Theory on The Interference Channel Randall A. Berry Dept. of EECS Northwestern University e-mail: rberry@eecs.northwestern.edu David N. C. Tse Wireless Foundations University

More information

Optimality of Walrand-Varaiya Type Policies and. Approximation Results for Zero-Delay Coding of. Markov Sources. Richard G. Wood

Optimality of Walrand-Varaiya Type Policies and. Approximation Results for Zero-Delay Coding of. Markov Sources. Richard G. Wood Optimality of Walrand-Varaiya Type Policies and Approximation Results for Zero-Delay Coding of Markov Sources by Richard G. Wood A thesis submitted to the Department of Mathematics & Statistics in conformity

More information

X 1 : X Table 1: Y = X X 2

X 1 : X Table 1: Y = X X 2 ECE 534: Elements of Information Theory, Fall 200 Homework 3 Solutions (ALL DUE to Kenneth S. Palacio Baus) December, 200. Problem 5.20. Multiple access (a) Find the capacity region for the multiple-access

More information

Sum Capacity of General Deterministic Interference Channel with Channel Output Feedback

Sum Capacity of General Deterministic Interference Channel with Channel Output Feedback Sum Capacity of General Deterministic Interference Channel with Channel Output Feedback Achaleshwar Sahai Department of ECE, Rice University, Houston, TX 775. as27@rice.edu Vaneet Aggarwal Department of

More information

Information Dimension

Information Dimension Information Dimension Mina Karzand Massachusetts Institute of Technology November 16, 2011 1 / 26 2 / 26 Let X would be a real-valued random variable. For m N, the m point uniform quantized version of

More information