The Feedback Capacity of the First-Order Moving Average Gaussian Channel


Young-Han Kim
Information Systems Laboratory, Stanford University

March 3, 2008

Abstract

The feedback capacity of the stationary Gaussian additive noise channel has been open, except for the case where the noise is white. Here we obtain the closed-form feedback capacity of the first-order moving average additive Gaussian noise channel. Specifically, the channel is given by $Y_i = X_i + Z_i$, $i = 1, 2, \ldots$, where the input $\{X_i\}$ satisfies a power constraint and the noise $\{Z_i\}$ is a first-order moving average Gaussian process defined by $Z_i = \alpha U_{i-1} + U_i$, $|\alpha| < 1$, with white Gaussian innovation $\{U_i\}_{i=0}^{\infty}$. We show that the feedback capacity of this channel is $-\log x_0$, where $x_0$ is the unique positive root of the equation $\rho x^2 = (1 - x^2)(1 - |\alpha| x)^2$, and $\rho$ is the ratio of the average input power per transmission to the variance of the noise innovation $U_i$. Paralleling the simple linear signalling scheme by Schalkwijk and Kailath for the additive white Gaussian noise channel, the optimal transmitter sends a real-valued information-bearing signal at the beginning of communication and subsequently processes the noise innovation process, learned through the feedback, with a simple linear stationary first-order autoregressive filter to help the receiver decode the information. The resulting decoding error decays doubly exponentially in the duration of the communication. This feedback capacity of the first-order moving average Gaussian channel is very similar, in form, to the best known achievable rate for the first-order autoregressive Gaussian noise channel studied by Butman, Wolfowitz, and Tiernan, although the optimality of the latter is yet to be established.

Index Terms: Additive Gaussian noise channels, first-order moving average, Gaussian feedback capacity, linear signalling.

1 Introduction and Summary

Consider the additive Gaussian noise channel with feedback, as depicted in Figure 1. The channel $Y_i = X_i + Z_i$, $i = 1, 2, \ldots$, has additive Gaussian noise $Z_1, Z_2, \ldots$, where $Z^n = (Z_1, \ldots, Z_n) \sim N_n(0, K_Z)$.

[Figure 1 (omitted): the Gaussian channel with feedback. The message $W \in \{1,\ldots,2^{nR}\}$ drives the transmitter output $X_i(W, Y^{i-1})$; the channel adds the noise $Z_i$, with $Z^n \sim N_n(0, K_Z)$; the receiver forms the decoded message $\hat W = \hat W_n(Y^n)$; and the output $Y_i$ returns to the transmitter through a unit delay.]

We wish to communicate a message $W \in \{1, 2, \ldots, 2^{nR}\}$ reliably over the channel $Y^n = X^n + Z^n$. The channel output is causally fed back to the transmitter. We specify a $(2^{nR}, n)$ code with codewords $(X_1(W), X_2(W, Y_1), \ldots, X_n(W, Y^{n-1}))$ satisfying the expected power constraint

$$E\,\frac{1}{n}\sum_{i=1}^{n} X_i^2(W, Y^{i-1}) \le P, \qquad (1)$$

and a decoding function $\hat W_n : \mathbb{R}^n \to \{1, 2, \ldots, 2^{nR}\}$.¹ The probability of error $P_e^{(n)}$ is defined by $P_e^{(n)} := \Pr\{\hat W_n(Y^n) \ne W\}$, where the message $W$ is independent of $Z^n$ and is uniformly distributed over $\{1, 2, \ldots, 2^{nR}\}$.

We will call an infinite sequence of nonnegative numbers $\{C_{n,FB}\}_{n=1}^{\infty}$ the $n$-block feedback capacity if for every $\epsilon > 0$ there exists a sequence of $(2^{n(C_{n,FB} - \epsilon)}, n)$ codes with $P_e^{(n)} \to 0$ as $n \to \infty$, while for every $\epsilon > 0$ and any sequence of codes with $2^{n(C_{n,FB} + \epsilon)}$ codewords, $P_e^{(n)}$ is bounded away from zero for all $n$. We define the feedback capacity $C_{FB}$ as $C_{FB} := \lim_{n\to\infty} C_{n,FB}$, if the limit exists. Note that this definition of feedback capacity agrees with the usual operational definition of the capacity of memoryless channels without feedback as the supremum of achievable rates [1].

In [2], Cover and Pombra characterized the $n$-block feedback capacity $C_{n,FB}$ as

$$C_{n,FB} = \max_{\operatorname{tr}(K_X) \le nP}\; \frac{1}{2n} \log \frac{\det(K_Y)}{\det(K_Z)}. \qquad (2)$$

Here $K_X$, $K_Y$, and $K_Z$ respectively denote the covariance matrices of $X^n$, $Y^n$, and $Z^n$, and the maximization is over all $X^n$ of the form $X^n = BZ^n + V^n$ with strictly lower-triangular $n \times n$ matrix $B$ and multivariate Gaussian $V^n$ independent of $Z^n$. Equivalently, we can rewrite (2) as

$$C_{n,FB} = \max\; \frac{1}{2n} \log \frac{\det((B+I)K_Z(B+I)^T + K_V)}{\det(K_Z)}, \qquad (3)$$

where the maximization is over all nonnegative definite $n \times n$ matrices $K_V$ and strictly lower-triangular $n \times n$ matrices $B$ such that $\operatorname{tr}(BK_ZB^T + K_V) \le nP$.

¹More precisely, encoding functions $X_i : \{1, \ldots, 2^{nR}\} \times \mathbb{R}^{i-1} \to \mathbb{R}$, $i = 1, 2, \ldots, n$.
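Although the maximization in (3) is over a nontrivial set, any feasible pair $(B, K_V)$ plugged into the objective yields a lower bound on $C_{n,FB}$. The following Python sketch is ours, not the paper's (the helper names are hypothetical); for concreteness it uses the tri-diagonal MA(1) noise covariance that will be introduced in Section 2, with the simple feasible choice $B = 0$, $K_V = PI$, and natural logarithms (rates in nats).

```python
import numpy as np

def ma1_covariance(n, alpha):
    # Tri-diagonal covariance of Z_i = alpha*U_{i-1} + U_i with unit innovations:
    # 1 + alpha^2 on the diagonal, alpha on the first off-diagonals.
    return ((1 + alpha**2) * np.eye(n)
            + alpha * (np.eye(n, k=1) + np.eye(n, k=-1)))

def cover_pombra_objective(B, K_V, K_Z):
    # (1/2n) log det((B+I) K_Z (B+I)^T + K_V) / det(K_Z), the objective of (3).
    # Any feasible (strictly lower-triangular B, PSD K_V,
    # tr(B K_Z B^T + K_V) <= nP) gives a lower bound on C_{n,FB}.
    n = K_Z.shape[0]
    BI = B + np.eye(n)
    _, logdet_Y = np.linalg.slogdet(BI @ K_Z @ BI.T + K_V)
    _, logdet_Z = np.linalg.slogdet(K_Z)
    return (logdet_Y - logdet_Z) / (2 * n)

n, alpha, P = 8, 0.5, 1.0
K_Z = ma1_covariance(n, alpha)
# B = 0, K_V = P*I meets the trace constraint with equality (a non-feedback choice).
print(cover_pombra_objective(np.zeros((n, n)), P * np.eye(n), K_Z))
```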

When the noise process $\{Z_i\}$ is stationary, the $n$-block capacity is super-additive in the sense that

$$n\,C_{n,FB} + m\,C_{m,FB} \le (n+m)\,C_{n+m,FB}, \qquad n, m = 1, 2, \ldots.$$

Then the feedback capacity $C_{FB}$ is well-defined (see, for example, [3]) as

$$C_{FB} = \lim_{n\to\infty} C_{n,FB} = \lim_{n\to\infty}\;\max_{B, K_V}\; \frac{1}{2n}\log\frac{\det((B+I)K_Z(B+I)^T + K_V)}{\det(K_Z)}. \qquad (4)$$

To obtain a closed-form expression for the feedback capacity $C_{FB}$, however, we need to go further than (4), since the above characterization gives no hint on the sequence (in $n$) of $(B, K_V)$ maximizing $C_{n,FB}$ or on its limiting behavior.

In this paper, we study in detail the case where the additive Gaussian noise process $\{Z_i\}_{i=1}^{\infty}$ is a moving average process of order one (MA(1)). We define the Gaussian MA(1) noise process $\{Z_i\}_{i=1}^{\infty}$ with parameter $\alpha$, $|\alpha| < 1$, as

$$Z_i = \alpha U_{i-1} + U_i, \qquad (5)$$

where $\{U_i\}_{i=0}^{\infty}$ is a white Gaussian innovation process. Without loss of generality, we will assume that $U_i$, $i = 0, 1, \ldots$, has unit variance. There are alternative ways of defining Gaussian MA(1) processes, which we will review in Section 2.

Note that the condition $|\alpha| < 1$ is not restrictive. When $|\alpha| > 1$, it can be readily verified that the process $\{Z_i\}$ has the same distribution as the process $\{\tilde Z_i\}$ defined by $\tilde Z_i = \alpha(\beta U_{i-1} + U_i)$, where the moving average parameter $\beta$ is given by $\beta = 1/\alpha$, so that $|\beta| < 1$. For the degenerate case $|\alpha| = 1$, one can easily show that the non-feedback capacity is infinite, as is the feedback capacity; hence we exclude this case from our discussion.

We state the main theorem, the proof of which will be given in Section 3.

Theorem 1. For the additive Gaussian MA(1) noise channel $Y_i = X_i + Z_i$, $i = 1, 2, \ldots$, with the Gaussian MA(1) noise process $\{Z_i\}$ defined in (5), the feedback capacity $C_{FB}$ under the power constraint $\sum_{i=1}^n EX_i^2 \le nP$ is given by $C_{FB} = -\log x_0$, where $x_0$ is the unique positive root of the fourth-order polynomial equation

$$P x^2 = (1 - x^2)(1 - |\alpha| x)^2. \qquad (6)$$
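Theorem 1 is easy to evaluate numerically. The sketch below (our illustration, with natural logarithms, so the capacity comes out in nats) finds $x_0$ by bisection, using the fact that $Px^2 - (1-x^2)(1-|\alpha|x)^2$ is negative at $x = 0$ and positive at $x = 1$; for $\alpha = 0$ the answer reduces to the white-noise feedback capacity $\frac12\log(1+P)$.

```python
import numpy as np

def ma1_feedback_capacity(alpha, P):
    # C_FB = -log x0, where x0 in (0,1) solves P x^2 = (1 - x^2)(1 - |alpha| x)^2.
    a = abs(alpha)
    f = lambda x: P * x**2 - (1 - x**2) * (1 - a * x)**2
    lo, hi = 0.0, 1.0                 # f(0) = -1 < 0 and f(1) = P > 0
    for _ in range(100):              # bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return -np.log(0.5 * (lo + hi))

P = 1.0
print(ma1_feedback_capacity(0.0, P), 0.5 * np.log(1 + P))   # equal when alpha = 0
print(ma1_feedback_capacity(0.5, P))                        # larger: the noise is predictable
```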

As will be shown later in Sections 3 and 4, the feedback capacity $C_{FB}$ is achieved by an asymptotically stationary ergodic input process $\{X_i\}$ satisfying $EX_i^2 = P$ for all $i$. Thus, by the ergodic theorem, the feedback capacity does not diminish under the more restrictive power constraint

$$\frac{1}{n}\sum_{i=1}^n X_i^2(W, Y^{i-1}) \le P.$$

(See also the arguments given in [2, Section VIII] based on the stationarity of the noise process.)

The literature on Gaussian feedback channels is vast. We first mention some prior work closely related to our main discussion. In early work, Schalkwijk and Kailath [4, 5] (see also the discussion by Wolfowitz [6]) considered feedback over the additive white Gaussian noise channel and proposed a simple linear signalling scheme that achieves the feedback capacity. The coding scheme by Schalkwijk and Kailath can be summarized as follows. Let $\theta$ be one of $2^{nR}$ equally spaced real numbers on some interval, say $[0, 1]$. At time $k$, the receiver forms the maximum likelihood estimate $\hat\theta_k(Y_1, \ldots, Y_k)$ of $\theta$. Using the feedback information, at time $k+1$ we send $X_{k+1} = \gamma_k(\theta - \hat\theta_k)$, where $\gamma_k$ is a scaling factor properly chosen to meet the power constraint. After $n$ transmissions, the receiver finds the value of $\theta$ among the $2^{nR}$ alternatives that is closest to $\hat\theta_n$. This simple signalling scheme, without any coding, achieves the feedback capacity. (A minimal simulation of this idea is sketched below.)

As was shown by Shannon [7], feedback does not increase the capacity of memoryless channels. (See also Kadota et al. [8, 9] for continuous cases.) The benefit of feedback, however, does not consist only in the simplicity of coding. The decoding error of the Schalkwijk-Kailath scheme decays doubly exponentially in the duration of communication, compared to the exponential decay of the non-feedback scenario. In fact, there exist feedback coding schemes under which the decoding error decreases more rapidly than the exponential of any order [10, 11, 12]. Later, Schalkwijk extended his work to center-of-gravity information feedback for higher-dimensional signal spaces [13].

Butman [14] generalized the linear coding scheme of Schalkwijk and Kailath from white noise processes to autoregressive (AR) noise processes. For first-order autoregressive (AR(1)) processes $\{Z_i\}_{i=1}^{\infty}$ with regression parameter $\alpha$, $|\alpha| < 1$, defined by $Z_i = \alpha Z_{i-1} + U_i$, he obtained a lower bound on the feedback capacity of $-\log x_0$, where $x_0$ is the unique positive root of the fourth-order polynomial equation

$$P x^2 = \frac{1 - x^2}{(1 + |\alpha| x)^2}. \qquad (7)$$

This rate has been shown to be optimal among a certain class of linear feedback schemes by Wolfowitz [15] and Tiernan [16], and it is strongly believed to be the feedback capacity of the AR(1) channel. A recent study by Yang, Tatikonda, and Kavcic [17] supports this conjecture. Tiernan and Schalkwijk [18] found an upper bound on the AR(1) feedback capacity that meets Butman's lower bound at very low and very high signal-to-noise ratios. Butman [19] also obtained capacity upper and lower bounds for higher-order AR processes.
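To make the Schalkwijk-Kailath scheme described above concrete, here is a minimal simulation of its white-noise version. This is our sketch, not taken from [4, 5]: for simplicity the receiver tracks $\theta$ by recursive linear least squares, which in this Gaussian setting coincides with the maximum likelihood recursion up to the treatment of the first transmission, and we assume $\theta$ is drawn uniformly on $[0,1]$.

```python
import numpy as np

rng = np.random.default_rng(0)
P, n = 1.0, 30
theta = rng.uniform(0.0, 1.0)              # message point in [0, 1]

# Time 1: send a centered, scaled copy of theta with E X_1^2 = P
# (var(theta) = 1/12 for theta ~ Unif[0,1]).
Y = np.sqrt(12 * P) * (theta - 0.5) + rng.standard_normal()
theta_hat = Y / np.sqrt(12 * P) + 0.5      # initial estimate of theta
v = 1.0 / (12 * P)                         # its error variance

# Times 2..n: send the scaled estimation error and update the estimate.
for k in range(2, n + 1):
    gamma = np.sqrt(P / v)                 # so that E X_k^2 = P
    Yk = gamma * (theta - theta_hat) + rng.standard_normal()
    theta_hat += gamma * v / (gamma**2 * v + 1.0) * Yk
    v /= 1.0 + gamma**2 * v                # variance shrinks by 1/(1+P) per step

print(abs(theta - theta_hat))              # error std ~ (1+P)^{-(n-1)/2}
print(v)                                   # i.e., the rate (1/2) log(1+P) is achieved
```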

For the case of moving average (MA) noise processes, there are far fewer results in the literature, although MA processes are usually more tractable than AR processes of the same order. Ozarow [20, 21] gave upper and lower bounds on the feedback capacity of AR(1) and MA(1) channels and showed that feedback strictly increases the capacity. Substantial progress was made by Ordentlich [22]; he observed that the optimal $K_V$ in (3) has rank at most $k$ for an MA noise process of order $k$. He also showed [23] that the optimal $(K_V, B)$ necessarily has the property that the current input signal $X_k$ is orthogonal to the past outputs $(Y_1, \ldots, Y_{k-1})$. For the special case of MA(1) processes, this development, combined with the arguments given in [15], suggests that a linear signalling scheme similar to the Schalkwijk-Kailath scheme should be optimal, which is confirmed by our Theorem 1.

To conclude this section, we review, in a rather incomplete manner, previous work on the Gaussian feedback channel beyond that mentioned above, and then point out where the current work lies in the literature. The standard literature on the Gaussian feedback channel and simple feedback coding schemes over it traces back to a 1956 paper by Elias [24] and its sequels [25, 26]. Turin [27, 28, 29], Horstein [30], Khas'minskii [31], and Ferguson [32] studied sequential binary signalling schemes over the Gaussian feedback channel with symbol-by-symbol decoding that achieve the feedback capacity with an error exponent better than in the non-feedback case. As mentioned above, Schalkwijk and Kailath [4, 5, 13] made a major breakthrough by showing that a simple linear feedback coding scheme achieves the feedback capacity with doubly exponentially decreasing probability of decoding error. This fascinating result has been extended in many directions. Omura [33] reformulated the feedback communication problem as a stochastic-control problem and applied this approach to multiplicative and additive noise channels with noiseless feedback and to additive noise channels with noisy feedback. Pinsker [10], Kramer [11], and Zigangirov [12] studied feedback coding schemes under which the probability of decoding error decays as the exponential of arbitrarily high order. Wyner [34] and Kramer [11] studied the performance of the Schalkwijk-Kailath scheme under a peak power constraint and reported the singly exponential behavior of the probability of decoding error in that setting. The actual error exponent of the Gaussian feedback channel under the peak power constraint was later obtained by Schalkwijk and Barron [35]. Kashyap [36], Lavenberg [37, 38], and Kramer [11] looked at the case of noisy or intermittent feedback. The more natural question of transmitting a Gaussian source over a Gaussian feedback channel was studied by Kailath [39], Cruise [40], Schalkwijk and Bluestein [41], Ovseevich [42], and Ihara [43].

There are also many notable extensions of the Schalkwijk-Kailath scheme in multiple-user information theory. Using the Schalkwijk-Kailath scheme, Ozarow and Leung-Yan-Cheong [44] showed that feedback increases the capacity region of stochastically degraded broadcast channels, which is rather surprising since feedback does not increase the capacity region of physically degraded broadcast channels, as shown by El Gamal [45]. Ozarow [46] also established the feedback capacity region of the two-user white Gaussian multiple access channel through a very innovative application of the Schalkwijk-Kailath coding scheme. The extension to a larger number of users was attempted by Kramer [47], who also showed that feedback increases the capacity region of strong interference channels.

With these results on the white Gaussian noise channel in hand, attention turned to the feedback capacity of the colored Gaussian noise channel. Butman [14, 19] extended the Schalkwijk-Kailath coding scheme to autoregressive noise channels. Subsequently, Tiernan and Schalkwijk [18, 16], Wolfowitz [15], Ozarow [20, 21], Dembo [50], and Yang et al. [17] studied the

feedback capacity of finite-order ARMA additive Gaussian noise channels and obtained many interesting upper and lower bounds. Using an asymptotic equipartition theorem for nonstationary nonergodic Gaussian noise processes, Cover and Pombra [2] obtained the $n$-block capacity (3) for an arbitrary colored Gaussian channel with or without feedback. (We can take $B = 0$ in (3) for the non-feedback case.) Using matrix inequalities, they also showed that feedback does not increase the capacity much; namely, feedback at most doubles the capacity (a result obtained earlier by Pinsker [48] and Ebert [49]), and feedback increases the capacity by at most half a bit.

Extensions and refinements of the result by Cover and Pombra abound. Dembo [50] showed that feedback does not increase the capacity at very low or very high signal-to-noise ratio. As mentioned above, Ordentlich [22] examined the properties of the optimal solution $(K_V, B)$ in (3) and found the rank condition on $K_V$ for finite-order MA noise processes. Chen and Yanagi [51, 52, 53] studied Cover's conjecture [54] that the feedback capacity is at most as large as the non-feedback capacity with twice the power, and made several refinements of the upper bounds by Cover and Pombra. Thomas [55], Pombra and Cover [56], and Ordentlich [57] extended the factor-of-two bound to colored Gaussian multiple access channels with feedback. Ihara obtained a coding theorem for continuous-time Gaussian channels with feedback [58, 59] and showed that the factor-of-two bound on the feedback capacity is tight by considering cleverly constructed nonstationary channels [60, 61]. (See also [62, Examples 1 and 6.8.1].) In fact, besides the white Gaussian noise channel, Ihara's example is the only nontrivial channel with known closed-form feedback capacity. Hence Theorem 1 provides the first feedback capacity result for stationary colored Gaussian channels. Moreover, as will be discussed in Section 4, a simple linear signalling scheme similar to the Schalkwijk-Kailath scheme achieves the feedback capacity. This result links the Cover-Pombra formulation of feedback capacity with the Schalkwijk-Kailath scheme and its generalizations to stationary colored channels, and casts new hope on the optimality of the achievable rate for the AR(1) channel obtained by Butman [14].

2 First-Order Moving Average Gaussian Processes

In this section we digress a little to review a few characteristics of first-order moving average Gaussian processes. First, we give three alternative characterizations of Gaussian MA(1) processes. As defined in the previous section, the Gaussian MA(1) noise process $\{Z_i\}_{i=1}^{\infty}$ with parameter $\alpha$, $|\alpha| < 1$, can be characterized as

$$Z_i = \alpha U_{i-1} + U_i, \qquad (8)$$

where the innovations $U_0, U_1, \ldots$ are i.i.d. $N(0, 1)$.

We can reinterpret the definition (8) by regarding the noise process $\{Z_i\}$ as the output of the linear time-invariant minimum-phase (i.e., all zeros and poles inside the unit circle) filter with transfer function

$$H(z) = 1 + \alpha z^{-1}, \qquad (9)$$

driven by the white innovation process $\{U_i\}$. Thus we can alternatively characterize the Gaussian MA(1) noise process $\{Z_i\}$ with parameter $\alpha$ and unit innovation through its power spectral

density $S_Z(\omega)$, given by

$$S_Z(\omega) = |1 + \alpha e^{-i\omega}|^2 = 1 + \alpha^2 + 2\alpha\cos\omega. \qquad (10)$$

We can further identify the power spectral density $S_Z(\omega)$ with the infinite Toeplitz covariance matrix of a Gaussian process. Thus, we can define $\{Z_i\}$ by $(Z_1, \ldots, Z_n) \sim N_n(0, K_Z)$ for each finite horizon $n$, where $K_Z$ is the tri-diagonal matrix

$$K_Z = \begin{pmatrix} 1+\alpha^2 & \alpha & 0 & \cdots & 0\\ \alpha & 1+\alpha^2 & \alpha & & \vdots\\ 0 & \alpha & 1+\alpha^2 & \ddots & 0\\ \vdots & & \ddots & \ddots & \alpha\\ 0 & \cdots & 0 & \alpha & 1+\alpha^2 \end{pmatrix},$$

or equivalently,

$$[K_Z]_{i,j} = \begin{cases} 1+\alpha^2, & |i-j| = 0,\\ \alpha, & |i-j| = 1,\\ 0, & |i-j| \ge 2. \end{cases}$$

Note that this covariance matrix $K_Z$ is consistent with our initial definition of the MA(1) process given in (8); thus all three definitions of the MA(1) process given above are equivalent. As we will see in the next section, the special structure of the MA(1) process, especially the tri-diagonality of the covariance matrix, makes the maximization in (3) easier than in the generic case.

We will need the entropy rate of the MA(1) Gaussian process later in our discussion. As shown by Kolmogorov (see [1, Section 11.6]), the entropy rate of a stationary Gaussian process with power spectral density $S(\omega)$ can be expressed as

$$\frac{1}{4\pi}\int_{-\pi}^{\pi} \log\big(2\pi e\, S(\omega)\big)\, d\omega.$$

We can calculate this integral for the power spectral density $S_Z(\omega)$ in (10) by Jensen's formula [63, Theorem 15.18] and obtain the entropy rate of the MA(1) Gaussian process (8) as

$$\frac{1}{4\pi}\int_{-\pi}^{\pi}\log\big(2\pi e\,S_Z(\omega)\big)\,d\omega = \frac{1}{4\pi}\int_{-\pi}^{\pi}\log\big(2\pi e\,|1+\alpha e^{-i\omega}|^2\big)\, d\omega = \frac{1}{2}\log(2\pi e). \qquad (11)$$

Recall our standing assumption $|\alpha| < 1$. The reader is advised to see [62, Chapter 2] for a general discussion of the entropy rate of stationary Gaussian processes, including the MA(1) process with parameter $|\alpha| = 1$.
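The equivalence of the three characterizations is easy to check numerically. The sketch below is ours (nothing in it is from the paper): it compares the empirical lag-0, lag-1, and lag-2 autocovariances of a simulated path of (8) with the tri-diagonal entries of $K_Z$, and evaluates $\frac{1}{4\pi}\int_{-\pi}^{\pi}\log S_Z(\omega)\,d\omega$, which must vanish for $|\alpha| < 1$ so that the entropy rate in (11) equals $\frac12\log(2\pi e)$.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, m = 0.6, 1_000_000
U = rng.standard_normal(m + 1)
Z = alpha * U[:-1] + U[1:]                    # Z_i = alpha*U_{i-1} + U_i

print(np.mean(Z * Z), 1 + alpha**2)           # lag 0
print(np.mean(Z[:-1] * Z[1:]), alpha)         # lag 1
print(np.mean(Z[:-2] * Z[2:]), 0.0)           # lag 2 (and beyond)

# Jensen's formula: (1/4pi) * integral of log S_Z over [-pi, pi] -> 0,
# approximated here as half the mean of log S_Z on a uniform grid.
w = np.linspace(-np.pi, np.pi, 200_001)
S = 1 + alpha**2 + 2 * alpha * np.cos(w)
print(0.5 * np.mean(np.log(S)))               # ~ 0 for |alpha| < 1
```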

We finish our digression by noting a certain reciprocal relationship between the Gaussian MA(1) process with parameter $\alpha$ and the Gaussian AR(1) process with parameter $-\alpha$. We can define the Gaussian AR(1) process $\{Z_i\}_{i=1}^{\infty}$ with parameter $-\alpha$, $|\alpha| < 1$, as

$$Z_i = -\alpha Z_{i-1} + U_i,$$

where the innovations $U_1, U_2, \ldots$ are i.i.d. $N(0, 1)$ and $Z_0 \sim N(0, 1/(1-\alpha^2))$ is independent of $U_1, U_2, \ldots$. Equivalently, we can define this process as the output of the linear time-invariant filter with transfer function

$$G(z) = \frac{1}{1 + \alpha z^{-1}} = \frac{1}{H(z)},$$

where $H(z)$ is the transfer function (9) of the MA(1) process with parameter $\alpha$. This reciprocity is indeed reflected in the striking similarity between the fourth-order polynomial equation (6) for the capacity of the Gaussian MA(1) noise channel and the fourth-order polynomial equation (7) for the best known achievable rate of the Gaussian AR(1) noise channel.

3 Proof of Theorem 1

We will first transform the optimization problem given in (3) into a series of (asymptotically) equivalent forms. We then solve the problem under individual power constraints $(P_1, \ldots, P_n)$ on the input signals, and subsequently optimize over $(P_1, \ldots, P_n)$ under the average power constraint $P_1 + \cdots + P_n \le nP$. Using Lemma 2, we will prove that the uniform power allocation $P_1 = \cdots = P_n = P$ is asymptotically optimal. This leads to the closed-form solution given in Theorem 1.

Step 1. Transformation into equivalent optimization problems.

Recall that we wish to solve the optimization problem

maximize $\;\log\det\big((B+I)K_Z(B+I)^T + K_V\big)$  (12)

over all nonnegative definite $K_V$ and strictly lower-triangular $B$ satisfying $\operatorname{tr}(BK_ZB^T + K_V) \le nP$.

We approximate the covariance matrix $K_Z$ of the given MA(1) noise process with parameter $\alpha$ by another covariance matrix $\tilde K_Z$. Define $\tilde K_Z = H_Z H_Z^T$, where the lower-triangular Toeplitz matrix $H_Z$ is given by

$$H_Z = \begin{pmatrix} 1 & 0 & \cdots & & 0\\ \alpha & 1 & 0 & & \\ 0 & \alpha & 1 & \ddots & \vdots\\ \vdots & & \ddots & \ddots & 0\\ 0 & \cdots & 0 & \alpha & 1 \end{pmatrix}.$$

This matrix $\tilde K_Z$ is the covariance matrix of the Gaussian process $\{\tilde Z_i\}_{i=1}^{\infty}$ defined by

$$\tilde Z_1 = U_1, \qquad \tilde Z_i = \alpha U_{i-1} + U_i, \quad i = 2, 3, \ldots,$$

where $\{U_i\}_{i=1}^{\infty}$ is the white Gaussian process with unit variance. It is easy to check that $K_Z \succeq \tilde K_Z$ and that the difference between $K_Z$ and $\tilde K_Z$ is given by

$$[K_Z - \tilde K_Z]_{i,j} = \begin{cases} \alpha^2, & i = j = 1,\\ 0, & \text{otherwise.}\end{cases}$$
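The claim about $K_Z - \tilde K_Z$ is a two-line check; here is a numerical version (our sketch) for a small $n$:

```python
import numpy as np

n, alpha = 6, 0.7
H_Z = np.eye(n) + alpha * np.eye(n, k=-1)    # 1's on the diagonal, alpha below
K_Z_tilde = H_Z @ H_Z.T
K_Z = ((1 + alpha**2) * np.eye(n)
       + alpha * (np.eye(n, k=1) + np.eye(n, k=-1)))
D = K_Z - K_Z_tilde
print(D[0, 0], alpha**2)                     # the only nonzero entry
print(np.count_nonzero(np.abs(D) > 1e-12))   # 1
```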

It is quite intuitive that there is no asymptotic difference in capacity between the channel with the original noise covariance $K_Z$ and the channel with $\tilde K_Z$; we verify this claim rigorously in the Appendix. Throughout, we will therefore assume that the noise covariance matrix of the given channel is $\tilde K_Z$, which is equivalent to the statement that the zeroth-time noise innovation $U_0$ is revealed to both the transmitter and the receiver.

Now, by identifying $K_V = F_V F_V^T$ for some lower-triangular $F_V$ and $F_Z = BH_Z$ for some strictly lower-triangular $F_Z$, we transform the optimization problem (12) into one with the new variables $(F_V, F_Z)$:

maximize $\;\log\det\big(F_VF_V^T + (F_Z + H_Z)(F_Z + H_Z)^T\big)$
subject to $\;\operatorname{tr}(F_VF_V^T + F_ZF_Z^T) \le nP$.  (13)

We shall use the $2n$-dimensional row vectors $f_i$ and $h_i$, $i = 1, \ldots, n$, to denote the $i$-th rows of $F := [F_V \;\; F_Z]$ and $H := [\,0_{n\times n} \;\; H_Z\,]$, respectively. There is an obvious identification between the time-$i$ input signal $X_i$ and the vector $f_i$, $i = 1, \ldots, n$, for we can regard $f_i$ as a point in the Hilbert space with the innovations of $V^n$ and $\tilde Z^n$ as a basis. We can similarly identify $\tilde Z_i$ with $h_i$ and $Y_i$ with $f_i + h_i$. We also introduce new variables $(P_1, \ldots, P_n)$ representing the power constraint on each input $f_i$. The optimization problem (13) then becomes the equivalent form:

maximize $\;\log\det\big((F+H)(F+H)^T\big)$
subject to $\;\|f_i\|^2 \le P_i$, $i = 1, \ldots, n$, and $\sum_{i=1}^n P_i \le nP$.  (14)

Here $\|\cdot\|$ denotes the Euclidean norm of a $2n$-dimensional vector. Note that the variables $(f_1, \ldots, f_n)$ must satisfy the relevant triangularity conditions inherited from $(F_V, F_Z)$. We make this explicit by requiring $f_i \in \mathcal{V}_i$, $i = 1, \ldots, n$, where

$$\mathcal{V}_i := \{(v_1, \ldots, v_{2n}) \in \mathbb{R}^{2n} : v_{i+1} = \cdots = v_n = 0 = v_{n+i} = \cdots = v_{2n}\}.$$

Step 2. Optimization under the individual power constraint for each signal.

We solve the optimization problem (14) in $(f_1, \ldots, f_n)$ after fixing $(P_1, \ldots, P_n)$. This step is mostly algebraic, but it admits an easy geometric interpretation. We need some notation first.

We define the $n$-by-$2n$ matrices

$$S = \begin{pmatrix} s_1\\ \vdots\\ s_n\end{pmatrix} := \begin{pmatrix} f_1 + h_1\\ \vdots\\ f_n + h_n\end{pmatrix} = F + H \qquad\text{and}\qquad E = \begin{pmatrix} e_1\\ \vdots\\ e_n\end{pmatrix} := [\,0_{n\times n}\;\; I\,],$$

where $I$ is the identity matrix. We also define the $n$-by-$2n$ matrix

$$G = \begin{pmatrix} g_1\\ \vdots\\ g_n\end{pmatrix} := \begin{pmatrix} h_1 - e_1\\ \vdots\\ h_n - e_n\end{pmatrix} = H - E.$$

We can interpret the row vector $e_i$ as the noise innovation $U_i$ and the row vector $g_i$ as $\tilde Z_i - U_i$. We will use the notation $F_k$ to denote the $k$-by-$2n$ submatrix of $F$ consisting of the first $k$ rows of $F$, that is, $F_k = (f_1; \ldots; f_k)$, and the analogous notation for the $k$-by-$2n$ submatrices of $G$, $H$, $E$, and $S$.

We now introduce the sequence of $2n$-by-$2n$ matrices $\{\Pi_k\}_{k=1}^{n}$ given by

$$\Pi_k = I - S_k^T (S_k S_k^T)^{-1} S_k.$$

Observe that $S_k$ has full rank, so $(S_kS_k^T)^{-1}$ always exists. We can view $\Pi_k$ as mapping a $2n$-dimensional row vector (acting from the right) to its component orthogonal to the subspace spanned by the rows $s_1, \ldots, s_k$ of $S_k$. (Equivalently, $\Pi_k$ maps a generic random variable $A$ to $A - E(A \,|\, Y^k)$.) It is easy to verify that $\Pi_k = \Pi_k^T = \Pi_k\Pi_k$ and $\Pi_kS_k^T = 0$.

Finally, we define the intermediate objective functions of the maximization (14) as

$$J_k(P_1, \ldots, P_k) := \max_{f_1,\ldots,f_k:\; \|f_i\|^2 \le P_i} \log\det(S_kS_k^T), \qquad k = 1, \ldots, n,$$

so that

$$C_{n,FB} = \max_{P_i:\; \sum_i P_i \le nP}\; \frac{1}{2n} J_n(P_1, \ldots, P_n).$$

We will show that if $(f_1^*, \ldots, f_{k-1}^*)$ maximizes $J_{k-1}(P_1, \ldots, P_{k-1})$, then $(f_1^*, \ldots, f_{k-1}^*, f_k^*)$ maximizes $J_k(P_1, \ldots, P_k)$ for some $f_k^*$ satisfying $f_k^* = f_k^*\Pi_{k-1}$. Thus the maximization for $J_n$ can be

solved in a greedy fashion by sequentially maximizing $J_1, J_2, \ldots, J_n$ through $f_1, f_2, \ldots, f_n$. Furthermore, we will obtain the recursive relationship

$$J_0 := 0, \qquad J_1 = \log(1 + P_1),$$
$$J_{k+1} - J_k = \log\left(1 + \Big(\sqrt{P_{k+1}} + |\alpha|\sqrt{1 - e^{-(J_k - J_{k-1})}}\Big)^2\right), \qquad k = 1, 2, \ldots.$$

We need the following result to proceed to the actual maximization.

Lemma 1. Let $P \ge 0$ and $1 \le k \le n$, with $S_k$ and $\Pi_k$ defined as above. Let $\mathcal{V}$ be an arbitrary subspace of $\mathbb{R}^{2n}$ that is not contained in the span of $s_1, \ldots, s_k$. Then, for any $w \in \mathcal{V}$,

$$\max_{v \in \mathcal{V}:\; \|v\|^2 \le P} (v+w)\,\Pi_k\,(v+w)^T = \big(\sqrt{P} + \|w\Pi_k\|\big)^2.$$

Furthermore, if $w\Pi_k \ne 0$, the maximum is attained by

$$v^* = \sqrt{P}\,\frac{w\Pi_k}{\|w\Pi_k\|}. \qquad (15)$$

Proof. When $w\Pi_k = 0$, that is, $w \in \operatorname{span}\{s_1, \ldots, s_k\}$, the maximum of $(v+w)\Pi_k(v+w)^T = v\Pi_kv^T$ is attained by any vector $v$ with $\|v\|^2 = P$ orthogonal to $\operatorname{span}\{s_1, \ldots, s_k\}$, and we trivially have

$$\max_{v\in\mathcal{V}:\;\|v\|^2\le P} v\,\Pi_k\,v^T = P.$$

When $w\Pi_k \ne 0$, we have

$$(v+w)\,\Pi_k\,(v+w)^T = \|(v + w\Pi_k)\Pi_k\|^2 \le \|v + w\Pi_k\|^2 \le \big(\sqrt P + \|w\Pi_k\|\big)^2,$$

where the first inequality follows from the fact that $I - \Pi_k$ is nonnegative definite. It is easy to check that equality holds if $v$ is given by (15). ∎

We observe that, for $k = 2, \ldots, n$,

$$\det(S_kS_k^T) = \det\begin{pmatrix} S_{k-1}S_{k-1}^T & S_{k-1}s_k^T\\ s_kS_{k-1}^T & s_ks_k^T\end{pmatrix} = \det(S_{k-1}S_{k-1}^T)\cdot s_k\big(I - S_{k-1}^T(S_{k-1}S_{k-1}^T)^{-1}S_{k-1}\big)s_k^T$$
$$= \det(S_{k-1}S_{k-1}^T)\, s_k\Pi_{k-1}s_k^T = \det(S_{k-1}S_{k-1}^T)\,(f_k+g_k+e_k)\,\Pi_{k-1}\,(f_k+g_k+e_k)^T$$
$$= \det(S_{k-1}S_{k-1}^T)\left[1 + (f_k+g_k)\,\Pi_{k-1}\,(f_k+g_k)^T\right], \qquad (16)$$
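Lemma 1 and the projection identities used here can be sanity-checked numerically. In the sketch below (ours; a random $S_k$ stands in for actual signal rows, and we take $\mathcal{V} = \mathbb{R}^{2n}$), the value attained by $v^*$ from (15) matches $(\sqrt P + \|w\Pi_k\|)^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, P = 4, 2, 3.0
S_k = rng.standard_normal((k, 2 * n))                     # rows s_1, ..., s_k
Pi = np.eye(2 * n) - S_k.T @ np.linalg.inv(S_k @ S_k.T) @ S_k
print(np.allclose(Pi, Pi.T), np.allclose(Pi @ Pi, Pi))    # Pi = Pi^T = Pi Pi
print(np.allclose(S_k @ Pi, 0.0))                         # Pi S_k^T = 0

w = rng.standard_normal(2 * n)
wPi = w @ Pi
v_star = np.sqrt(P) * wPi / np.linalg.norm(wPi)           # the maximizer (15)
print((v_star + w) @ Pi @ (v_star + w),                   # attained value ...
      (np.sqrt(P) + np.linalg.norm(wPi))**2)              # ... equals the bound
```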

[Figure 2 (omitted): geometric interpretation of Lemma 1, showing the optimal $v$ relative to $w$, $w\Pi_k$, and $\operatorname{span}(s_1,\ldots,s_k)$ in the two cases (a) $w\Pi_k \ne 0$ and (b) $w\Pi_k = 0$.]

where (16) follows from the facts that $e_k\Pi_{k-1} = e_k$ and $e_ke_k^T = 1$ (together with $e_k(f_k+g_k)^T = 0$).

Now fix $f_1, \ldots, f_{k-1}$. Since $\mathcal{V}_k$ is not contained in $\operatorname{span}\{s_1, \ldots, s_{k-1}\}$ and $g_k \in \mathcal{V}_k$, we have from the above lemma and (16) that

$$\max_{f_k:\;\|f_k\|^2\le P_k} \det(S_kS_k^T) = \det(S_{k-1}S_{k-1}^T)\left(1 + \big(\sqrt{P_k} + \|g_k\Pi_{k-1}\|\big)^2\right). \qquad (17)$$

If $\alpha \ne 0$, the maximum is attained by

$$f_k^* = \sqrt{P_k}\,\frac{g_k\Pi_{k-1}}{\|g_k\Pi_{k-1}\|}. \qquad (18)$$

In the special case $\alpha = 0$, that is, when the noise is white, we trivially have

$$\max_{f_k:\;\|f_k\|^2\le P_k} \det(S_kS_k^T) = \det(S_{k-1}S_{k-1}^T)\,(1 + P_k),$$

which immediately implies that $J_k = J_{k-1} + \log(1+P_k) = \sum_{i=1}^k \log(1+P_i)$; this, in turn, combined with the concavity of the logarithm, implies that $C_{n,FB} = C_{FB} = \frac12\log(1+P)$.

We continue our discussion under the assumption $\alpha \ne 0$ throughout this step. Up to this point we have not used the special structure of the MA(1) noise process; now we rely on it heavily. Following (17), we have, for $k = 2, \ldots, n$,

$$J_k = \max_{f_1,\ldots,f_{k-1}} \left[\log\det(S_{k-1}S_{k-1}^T) + \log\left(1 + \big(\sqrt{P_k} + \|g_k\Pi_{k-1}\|\big)^2\right)\right]. \qquad (19)$$

We wish to show that both terms in (19) are individually maximized by the same $(f_1, \ldots, f_{k-1})$. First note that

$$g_1 = 0, \qquad g_k = \alpha e_{k-1}, \quad k = 2, 3, \ldots, \qquad\text{and}\qquad e_ks_k^T = 1, \quad k = 1, 2, \ldots. \qquad (20)$$

Also recall that

$$s_k = f_k + g_k + e_k \qquad\text{and}\qquad e_k\Pi_{k-1} = e_k. \qquad (21)$$

For $k = 2$, we have

$$J_2 = \max_{f_1}\left[\log\det(S_1S_1^T) + \log\left(1 + \big(\sqrt{P_2} + |\alpha|\,\|e_1\Pi_1\|\big)^2\right)\right]$$
$$= \max_{f_1}\left[\log(s_1s_1^T) + \log\left(1 + \bigg(\sqrt{P_2} + |\alpha|\sqrt{e_1\Big(I - \frac{s_1^Ts_1}{s_1s_1^T}\Big)e_1^T}\bigg)^2\right)\right]$$
$$= \max_{f_1}\left[\log(s_1s_1^T) + \log\left(1 + \bigg(\sqrt{P_2} + |\alpha|\sqrt{1 - \frac{1}{s_1s_1^T}}\bigg)^2\right)\right],$$

and both terms are maximized by making $s_1s_1^T$ as large as possible. Since we trivially have

$$J_1 = \max_{f_1}\log(s_1s_1^T) = \log(1 + P_1),$$

we have shown that

$$J_2 - J_1 = \log\left(1 + \Big(\sqrt{P_2} + |\alpha|\sqrt{1 - e^{-J_1}}\Big)^2\right).$$

For $k \ge 3$, we observe from the block-matrix form of $(S_{k-1}S_{k-1}^T)^{-1}$ that

$$\Pi_{k-1} = I - S_{k-1}^T(S_{k-1}S_{k-1}^T)^{-1}S_{k-1} = \Pi_{k-2} - \frac{\Pi_{k-2}\,s_{k-1}^T\,s_{k-1}\,\Pi_{k-2}}{s_{k-1}\Pi_{k-2}s_{k-1}^T}.$$

Now from (20) and (21), we have

$$\|g_k\Pi_{k-1}\|^2 = g_k\Pi_{k-1}g_k^T = \alpha^2\,e_{k-1}\left(\Pi_{k-2} - \frac{\Pi_{k-2}s_{k-1}^Ts_{k-1}\Pi_{k-2}}{s_{k-1}\Pi_{k-2}s_{k-1}^T}\right)e_{k-1}^T$$
$$= \alpha^2\left(1 - \frac{1}{s_{k-1}\Pi_{k-2}s_{k-1}^T}\right) = \alpha^2\left(1 - \frac{1}{1 + (f_{k-1}+g_{k-1})\Pi_{k-2}(f_{k-1}+g_{k-1})^T}\right). \qquad (22)$$

It follows from (17), (18), and (22) that, for fixed $(f_1, \ldots, f_{k-2})$, both $\det(S_{k-1}S_{k-1}^T)$ and $\|g_k\Pi_{k-1}\|$ have the same maximizer

$$f_{k-1}^* = \sqrt{P_{k-1}}\,\frac{g_{k-1}\Pi_{k-2}}{\|g_{k-1}\Pi_{k-2}\|}.$$

Thus, for fixed $(f_1, \ldots, f_{k-2})$, both terms of (19) are simultaneously maximized by $f_{k-1}^*$; hence we have

$$J_k = \max_{f_1,\ldots,f_{k-2}}\Bigg[\log\det(S_{k-2}S_{k-2}^T) + \log\Big(1 + \big(\sqrt{P_{k-1}} + \|g_{k-1}\Pi_{k-2}\|\big)^2\Big) + \log\Bigg(1 + \bigg(\sqrt{P_k} + |\alpha|\sqrt{1 - \frac{1}{1 + \big(\sqrt{P_{k-1}} + \|g_{k-1}\Pi_{k-2}\|\big)^2}}\bigg)^2\Bigg)\Bigg]. \qquad (23)$$

Reasoning inductively, we conclude that if $(f_1^*, \ldots, f_{k-1}^*)$ maximizes $J_{k-1}$, then $J_k$ is maximized by the same $(f_1^*, \ldots, f_{k-1}^*)$ combined with

$$f_k^* = \sqrt{P_k}\,\frac{g_k\Pi_{k-1}}{\|g_k\Pi_{k-1}\|}.$$

Furthermore, combining (19) and (22), we have the desired recursion for $J_k$:

$$J_0 = 0, \qquad J_1 = \log(1+P_1), \qquad (24)$$
$$J_{k+1} - J_k = \log\left(1 + \Big(\sqrt{P_{k+1}} + |\alpha|\sqrt{1 - e^{-(J_k-J_{k-1})}}\Big)^2\right), \qquad k = 1, 2, \ldots. \qquad (25)$$

Step 3. Optimal power allocation over time.

In the previous step, we solved the optimization problem (14) under a fixed power allocation $(P_1, \ldots, P_n)$; thanks to the special structure of the MA(1) noise process, this brute-force optimization was tractable via backward dynamic programming. Here we optimize the power allocation $(P_1, \ldots, P_n)$ under the constraint $\sum_{i=1}^n P_i \le nP$.

As we saw earlier, when $\alpha = 0$ we can use the concavity of the logarithm to show that, for all $n$,

$$C_{n,FB} = \max_{P_i:\;\sum_i P_i\le nP}\;\frac{1}{2n}J_n(P_1,\ldots,P_n) = \max_{P_i:\;\sum_i P_i\le nP}\;\frac{1}{2n}\sum_{i=1}^n \log(1+P_i) = \frac12\log(1+P),$$

with $P_1 = \cdots = P_n = P$. When $\alpha \ne 0$, it is not tractable to optimize $(P_1, \ldots, P_n)$ in (23)-(25) to get a closed-form solution of $C_{n,FB}$ for finite $n$. The following lemma, however, enables us to find the asymptotically optimal power allocation and to obtain a closed-form solution for $C_{FB} = \lim_{n\to\infty} C_{n,FB}$.
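Before turning to the lemma, it is instructive to run the recursion (24)-(25) with the uniform allocation $P_k \equiv P$: numerically, $\frac{1}{2n}J_n$ settles down to the value $-\log x_0$ claimed in Theorem 1. A sketch of ours (natural logarithms):

```python
import numpy as np

alpha, P, n = 0.5, 1.0, 500

# Recursion (24)-(25) with P_k = P for all k.
J_prev, J = 0.0, np.log(1 + P)        # J_0 and J_1
for _ in range(n - 1):                # up to J_n
    step = np.log(1 + (np.sqrt(P)
                       + abs(alpha) * np.sqrt(1 - np.exp(-(J - J_prev))))**2)
    J_prev, J = J, J + step
rate = J / (2 * n)

# -log x0 from the quartic (6), by bisection.
lo, hi = 0.0, 1.0
for _ in range(100):
    x = 0.5 * (lo + hi)
    lo, hi = (x, hi) if P * x**2 < (1 - x**2) * (1 - abs(alpha) * x)**2 else (lo, x)

print(rate, -np.log(0.5 * (lo + hi)))  # agree up to an O(1/n) gap
```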

Lemma 2. Let $\psi : [0,\infty)\times[0,\infty) \to [0,\infty)$ satisfy the following conditions:

(i) $\psi(\xi, \zeta)$ is continuous and strictly concave in $(\xi, \zeta)$;

(ii) $\psi(\xi, \zeta)$ is increasing in $\xi$ and in $\zeta$, respectively;

(iii) for any $\zeta > 0$, there is a unique solution $\xi^* = \xi^*(\zeta) > 0$ to the equation $\xi = \psi(\xi, \zeta)$.

[Figure 3 (omitted): the iterates $\xi_{n+1} = \psi(\xi_n, P)$, started from $\xi_0 = 0$, converging to the unique fixed point $\xi^*$.]

For some fixed $P > 0$, let $\{P_i\}_{i=1}^{\infty}$ be any infinite sequence of nonnegative numbers satisfying

$$\limsup_{n\to\infty}\; \frac1n\sum_{i=1}^n P_i \le P.$$

Let $\{\xi_i\}_{i=0}^{\infty}$ be defined recursively as

$$\xi_0 = 0, \qquad \xi_i = \psi(\xi_{i-1}, P_i), \quad i = 1, 2, \ldots.$$

Then

$$\limsup_{n\to\infty}\; \frac1n\sum_{i=1}^n \xi_i \le \xi^*.$$

Furthermore, if $P_i \equiv P$, $i = 1, 2, \ldots$, then the corresponding $\xi_i$ converge to $\xi^*$.

Proof. Fix $\epsilon > 0$. From the concavity and monotonicity of $\psi$, for $n$ sufficiently large,

$$\frac1n\sum_{i=1}^n \xi_i = \frac1n\sum_{i=1}^n \psi(\xi_{i-1}, P_i) \le \psi\left(\frac1n\sum_{i=1}^n \xi_{i-1},\; \frac1n\sum_{i=1}^n P_i\right) \le \psi\left(\frac1n\sum_{i=1}^n \xi_{i-1},\; P + \epsilon\right).$$

Taking $\limsup$ on both sides and using the continuity of $\psi$, we have

$$\bar\xi := \limsup_{n\to\infty}\frac1n\sum_{i=1}^n \xi_i \le \limsup_{n\to\infty}\, \psi\left(\frac1n\sum_{i=1}^n \xi_{i-1},\; P+\epsilon\right) = \psi(\bar\xi,\, P+\epsilon).$$

Since $\epsilon$ is arbitrary and $\psi$ is continuous, we have $\bar\xi \le \psi(\bar\xi, P)$. But from the uniqueness of $\xi^*$ and the strict concavity of $\psi$,

$$\xi \le \xi^* \qquad\text{if and only if}\qquad \xi \le \psi(\xi, P). \qquad (26)$$

Thus $\bar\xi \le \xi^*$.

It remains to show that we can actually attain $\xi^*$ by choosing $P_i \equiv P$, $i = 1, 2, \ldots$. Let $\xi_i = \psi(\xi_{i-1}, P)$, $i = 1, 2, \ldots$. From the monotonicity of $\psi(\cdot, P)$ and (26), we have

$$\xi_{i-1} \le \xi_i = \psi(\xi_{i-1}, P) \le \xi^* = \psi(\xi^*, P), \qquad i = 1, 2, \ldots.$$

Thus the nondecreasing, bounded sequence $\{\xi_i\}$ has a limit, which we denote by $\tilde\xi$. From the continuity of $\psi(\cdot, P)$, we must have

$$\tilde\xi = \lim_{n\to\infty}\xi_n = \lim_{n\to\infty}\psi(\xi_{n-1}, P) = \psi\Big(\lim_{n\to\infty}\xi_{n-1},\, P\Big) = \psi(\tilde\xi, P).$$

Thus $\tilde\xi = \xi^*$. ∎

We continue our main discussion. Define

$$\psi(\xi, \zeta) := \frac12\log\left(1 + \Big(\sqrt\zeta + |\alpha|\sqrt{1 - e^{-2\xi}}\Big)^2\right).$$

The conditions (i)-(iii) of Lemma 2 are easily checked. For concavity, we rely on the simple composition rules for concave functions [64, Section 3.2.4] and avoid messy calculus. Let $\psi_1(\xi) = \frac12\log(1+\xi)$, $\psi_2(\xi,\zeta) = (\sqrt\xi + \sqrt\zeta)^2$, and $\psi_3(\xi) = \alpha^2(1 - \exp(-2\xi))$, so that $\psi(\xi,\zeta) = \psi_1(\psi_2(\psi_3(\xi), \zeta))$. Since $\psi_1$ is strictly concave and strictly increasing, $\psi_2$ is concave and elementwise strictly increasing, and $\psi_3$ is strictly concave, we can conclude that $\psi$ is strictly concave. Since $\psi(0, \zeta) > 0$ for any $\zeta > 0$, and $\psi(\xi, \zeta)$ increases to some $c(\zeta) < \infty$ as $\xi$ tends to infinity, the uniqueness of the root of $\xi = \psi(\xi, \zeta)$ is immediate from the continuity of $\psi$.

For an arbitrary infinite sequence $\{P_i\}_{i=1}^{\infty}$ satisfying

$$\limsup_{n\to\infty}\frac1n\sum_{i=1}^n P_i \le P, \qquad (27)$$

we define

$$\xi_0 = 0, \qquad \xi_i = \psi(\xi_{i-1}, P_i), \quad i = 1, 2, \ldots.$$
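For this specific $\psi$, the fixed-point iteration of Lemma 2 is immediate to run. The sketch below (ours) iterates $\xi_i = \psi(\xi_{i-1}, P)$ from $\xi_0 = 0$ and then checks that $x_0 = e^{-\xi^*}$ satisfies the quartic (6).

```python
import numpy as np

alpha, P = 0.5, 1.0
psi = lambda xi, z: 0.5 * np.log(
    1 + (np.sqrt(z) + abs(alpha) * np.sqrt(1 - np.exp(-2 * xi)))**2)

xi = 0.0
for _ in range(100):          # monotone convergence to the fixed point
    xi = psi(xi, P)

x0 = np.exp(-xi)              # xi* = -log x0
print(xi)
print(P * x0**2, (1 - x0**2) * (1 - abs(alpha) * x0)**2)   # the two sides of (6) agree
```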

Note that

$$\xi_1 = \tfrac12 J_1(P_1), \qquad \xi_i = \tfrac12\big(J_i(P_1,\ldots,P_i) - J_{i-1}(P_1,\ldots,P_{i-1})\big), \quad i = 2, 3, \ldots.$$

Now from Lemma 2, we have

$$\limsup_{n\to\infty}\frac{1}{2n}J_n(P_1,\ldots,P_n) = \limsup_{n\to\infty}\frac1n\sum_{i=1}^n \xi_i \le \xi^*,$$

where $\xi^*$ is the unique solution to

$$\xi = \psi(\xi, P) = \frac12\log\left(1 + \Big(\sqrt P + |\alpha|\sqrt{1 - e^{-2\xi}}\Big)^2\right).$$

Since our choice of $\{P_i\}$ is arbitrary, we conclude that

$$\sup_{\{P_i\}}\;\limsup_{n\to\infty}\frac{1}{2n}J_n(P_1,\ldots,P_n) = \lim_{n\to\infty}\frac{1}{2n}J_n(P,\ldots,P) = \xi^*,$$

where the supremum (in fact, maximum) is over all infinite sequences $\{P_i\}$ satisfying the asymptotic average power constraint (27).

Finally, we prove that $C_{FB} = \xi^*$. More specifically, we will show that

$$C_{FB} = \lim_{n\to\infty} C_{n,FB} = \lim_{n\to\infty}\;\max_{P_i:\;\sum_i P_i\le nP}\;\frac{1}{2n}J_n(P_1,\ldots,P_n) \qquad (28)$$
$$= \sup_{\{P_i\}_{i=1}^{\infty}}\;\limsup_{n\to\infty}\;\frac{1}{2n}J_n(P_1,\ldots,P_n) \qquad (29)$$
$$= \xi^*.$$

The only subtlety here is justifying the interchange of the order of limit and supremum between (28) and (29). It is easy to verify that

$$\lim_{n\to\infty}\;\max_{P_i:\;\sum_i P_i\le nP}\;\frac{1}{2n}J_n(P_1,\ldots,P_n) \;\ge\; \sup_{\{P_i\}_{i=1}^{\infty}}\;\limsup_{n\to\infty}\;\frac{1}{2n}J_n(P_1,\ldots,P_n),$$

for it is always advantageous to choose a finite sequence $(P_1,\ldots,P_n)$ for each $n$ rather than a single infinite sequence $\{P_i\}$. To prove the inequality in the other direction, we fix $\epsilon > 0$ and choose $n$ and $(Q_1,\ldots,Q_n)$ such that

$$\sum_{i=1}^n Q_i \le nP$$

and

$$\frac{1}{2n}J_n(Q_1,\ldots,Q_n) \ge C_{FB} - \epsilon. \qquad (30)$$

Now we construct an infinite sequence $\{P_i\}$ by concatenating the finite sequence $(Q_1,\ldots,Q_n)$ repeatedly; that is, $P_{kn+i} = Q_i$ for all $i = 1,\ldots,n$ and $k = 0, 1, 2, \ldots$. Obviously, this choice of $\{P_i\}_{i=1}^{\infty}$ satisfies the power constraint (27). Now from the monotonicity of $\psi(\xi,\zeta)$ in $\xi$, for any $i = 0, 1, \ldots, n-1$ and $k = 1, 2, \ldots$,

$$\xi_{i+1} = \psi(\xi_i, P_{i+1}) = \psi(\xi_i, Q_{i+1}) \le \psi(\xi_{kn+i}, Q_{i+1}) = \psi(\xi_{kn+i}, P_{kn+i+1}) = \xi_{kn+i+1}$$

(here $\xi_i \le \xi_{kn+i}$ follows by induction on $i$). Hence

$$\frac{1}{2kn}J_{kn}(P_1,\ldots,P_{kn}) = \frac{1}{kn}\sum_{i=1}^{kn}\xi_i \ge \frac{1}{kn}\cdot k\sum_{i=1}^{n}\xi_i = \frac{1}{2n}J_n(P_1,\ldots,P_n),$$

which, combined with (30), implies that

$$\limsup_{n\to\infty}\frac{1}{2n}J_n(P_1,\ldots,P_n) \ge C_{FB} - \epsilon,$$

which, in turn, implies that

$$\sup_{\{P_i\}_{i=1}^{\infty}}\;\limsup_{n\to\infty}\frac{1}{2n}J_n(P_1,\ldots,P_n) \ge C_{FB} - \epsilon.$$

Since $\epsilon$ is arbitrary, we have the desired inequality. Thus $C_{FB} = \xi^*$.

We conclude this section by characterizing the capacity $C_{FB} = \xi^*$ in an alternative form. Recall that $\xi^*$ is the unique solution to

$$\xi = \frac12\log\left(1 + \Big(\sqrt P + |\alpha|\sqrt{1-e^{-2\xi}}\Big)^2\right).$$

Let $x_0 = \exp(-\xi^*)$, or equivalently, $\xi^* = -\log x_0$. It is easy to verify that $0 < x_0 < 1$ is the unique positive solution to

$$\frac{1}{x^2} = 1 + \Big(\sqrt P + |\alpha|\sqrt{1-x^2}\Big)^2,$$

or equivalently,

$$P x^2 = (1 - x^2)(1 - |\alpha| x)^2.$$

This establishes the feedback capacity $C_{FB}$ of the additive Gaussian noise channel with the noise covariance $\tilde K_Z$, which is, in turn, the feedback capacity of the first-order moving average additive Gaussian noise channel with parameter $\alpha$, as argued at the end of Step 1 and proved in the Appendix. This completes the proof of Theorem 1. ∎
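For completeness, here is the short computation behind the last equivalence (our expansion of the "easy to verify" step, with $x = x_0 \in (0,1)$ so that all square roots below are of nonnegative quantities):

$$\frac{1}{x^2} = 1 + \Big(\sqrt P + |\alpha|\sqrt{1-x^2}\Big)^2
\iff \frac{\sqrt{1-x^2}}{x} = \sqrt P + |\alpha|\sqrt{1-x^2}
\iff \sqrt{1-x^2}\,\big(1 - |\alpha|x\big) = \sqrt P\,x,$$

and squaring the last identity yields $Px^2 = (1-x^2)(1-|\alpha|x)^2$; since $1 - |\alpha|x > 0$ on $(0,1)$, no spurious root is introduced.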

4 Discussion

The derived asymptotically optimal feedback input signals, or equivalently, the sequence of matrices $(K_V^{(n)}, B^{(n)})$, have two prominent properties. First, the optimal $(K_V, B)$ for the $n$-block problem can be found sequentially, built upon the optimal $(K_V, B)$ for the $(n-1)$-block problem. Although this property may sound quite natural, it is not true in general for other channel models; later in this section we will see an MA(2) counterexample. As a corollary of the sequentiality property, the optimal $K_V$ has rank one, which agrees with the previous result by Ordentlich [22]. Secondly, the current input signal $X_k$ is orthogonal to the past output signals $(Y_1, \ldots, Y_{k-1})$; in the notation of Section 3, $f_kS_{k-1}^T = 0$. This orthogonality property is indeed a necessary condition for the optimal $(K_V, B)$ under any (possibly nonstationary, nonergodic) noise covariance matrix $K_Z$ [65, 23].

We now explore the possibility of extending the current proof technique to a more general class of noise processes. The answer is negative; we comment on two simple cases, MA(2) and AR(1).

Consider the following MA(2) noise process, which is essentially two interleaved MA(1) processes:

$$Z_i = U_i + \alpha U_{i-2}, \qquad i = 1, 2, \ldots.$$

It is easy to see that this channel has the same feedback capacity as the MA(1) channel with parameter $\alpha$, attained by signalling separately over each interleaved MA(1) channel. This suggests that the sequentiality property does not hold for this example. Indeed, sequentially optimizing the $n$-block capacity yields only the rate $-\log x_0$, where $x_0$ is the unique positive root of the sixth-order polynomial equation

$$P x^2 = (1 - x^2)(1 - |\alpha| x^2)^2.$$

It is not hard to see that this rate is strictly less than the MA(1) feedback capacity unless $\alpha = 0$. (A numerical comparison is sketched below.) Note that a similar argument shows that Butman's conjecture on the AR($k$) capacity [19, Abstract] is not true in general.

In contrast to MA(1) channels, for AR(1) channels we are missing two basic ingredients: the optimality of rank-one $K_V$ and the asymptotic optimality of the uniform power allocation. Under these two conditions, both of which are yet to be justified, it is known [15, 16] that the optimal achievable rate is $-\log x_0$, where $x_0$ is the unique positive root of the fourth-order polynomial equation

$$P x^2 = \frac{1 - x^2}{(1 + |\alpha| x)^2}.$$

There is, however, a major difficulty in establishing the above two conditions by the two-stage optimization strategy we used in the previous section, namely, first maximizing over $(f_1,\ldots,f_n)$ and then over $(P_1,\ldots,P_n)$. For certain values of the individual signal power constraints $(P_1,\ldots,P_n)$, the optimal $(f_1,\ldots,f_n)$ does not satisfy the sequentiality property, resulting in a $K_V$ of rank higher than one. Hence we cannot obtain the recursion formula for the $n$-block capacity [15, Section 5] corresponding to (23)-(25) through a greedy maximization of $J_n(P_1,\ldots,P_n)$.

Finally, we show that the feedback capacity of the MA(1) channel can be achieved by using a simple stationary filter of the noise innovation process.
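The gap mentioned above for the interleaved MA(2) example is easy to exhibit numerically. In this sketch of ours, both rates are computed as $-\log x_0$ for the respective polynomial equations:

```python
import numpy as np

def neg_log_root(rhs, P, iters=100):
    # -log x0 for the unique x0 in (0,1) with P x^2 = rhs(x); bisection,
    # assuming a single crossing (rhs(0) = 1 > 0 and rhs(1) = 0 < P).
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        x = 0.5 * (lo + hi)
        lo, hi = (x, hi) if P * x**2 < rhs(x) else (lo, x)
    return -np.log(0.5 * (lo + hi))

alpha, P = 0.5, 1.0
c_ma1 = neg_log_root(lambda x: (1 - x**2) * (1 - abs(alpha) * x)**2, P)
c_seq = neg_log_root(lambda x: (1 - x**2) * (1 - abs(alpha) * x**2)**2, P)
print(c_ma1, c_seq)    # the sequentially optimized MA(2) rate is strictly smaller
```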

Before we proceed, we point out that the optimal input process $\{X_i\}$ obtained in the previous section is asymptotically stationary; this observation is not hard to prove through the well-developed theory on the asymptotic behavior of recursive estimators [66, Chapter 4].

At the beginning of the communication, we send² $X_1 \sim N(0, P)$. In each subsequent transmission, we transmit a filtered version of the noise innovation process up to time $k-1$:

$$X_k = \beta X_{k-1} + \sigma U_{k-1}, \qquad k = 2, 3, \ldots. \qquad (31)$$

In other words, we use a first-order autoregressive filter with transfer function

$$\frac{\sigma z^{-1}}{1 - \beta z^{-1}}.$$

Here $\beta = -\operatorname{sgn}(\alpha)\,x_0$, with $x_0$ the same unique positive root of the fourth-order polynomial equation (6) in Theorem 1. The scaling factor $\sigma$ is chosen to satisfy the power constraint as

$$\sigma = \operatorname{sgn}(\alpha)\sqrt{P(1-\beta^2)}, \qquad\text{where}\qquad \operatorname{sgn}(\zeta) = \begin{cases} 1, & \zeta \ge 0,\\ -1, & \zeta < 0.\end{cases}$$

This input process and the MA(1) noise process yield the output process given by

$$Y_1 = X_1 + \alpha U_0 + U_1,$$
$$Z_k = \alpha U_{k-1} + U_k, \qquad k = 1, 2, \ldots,$$
$$Y_k = \beta X_{k-1} + (\alpha + \sigma)U_{k-1} + U_k = \beta Y_{k-1} - \alpha\beta\,U_{k-2} + (\alpha - \beta + \sigma)U_{k-1} + U_k, \qquad k = 2, 3, \ldots,$$

which is asymptotically stationary with power spectral density

$$S_Y(\omega) = \left|1 + \alpha e^{-j\omega} + \frac{\sigma e^{-j\omega}}{1 - \beta e^{-j\omega}}\right|^2 = \left|\frac{1 + (\alpha-\beta+\sigma)e^{-j\omega} - \alpha\beta e^{-j2\omega}}{1 - \beta e^{-j\omega}}\right|^2 = \frac{1}{\beta^2}\left|1 + \alpha\beta^2 e^{-j\omega}\right|^2.$$

²Technically, we mean that we generate $2^{nR}$ code functions $X_1(W)$ i.i.d. according to $N(0, P)$ for some $R < C_{FB}$, and send one of them.
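The decay (32) stated just below can be checked without Monte Carlo by writing $(X_1, Y_1, \ldots, Y_n)$ as linear combinations of the independent Gaussians $(X_1, U_0, U_1, \ldots, U_n)$ and conditioning in closed form. In this sketch of ours, coefficient index $0$ stands for $X_1$ and index $j \ge 1$ for $U_{j-1}$:

```python
import numpy as np

alpha, P, n = 0.5, 1.0, 25

# x0 from the quartic (6); then beta = -sgn(alpha) x0, sigma = sgn(alpha) sqrt(P(1-beta^2)).
lo, hi = 0.0, 1.0
for _ in range(100):
    x = 0.5 * (lo + hi)
    lo, hi = (x, hi) if P * x**2 < (1 - x**2) * (1 - abs(alpha) * x)**2 else (lo, x)
x0 = 0.5 * (lo + hi)
beta = -np.sign(alpha) * x0
sigma = np.sign(alpha) * np.sqrt(P * (1 - beta**2))

dim = n + 2                              # coefficients on (X_1, U_0, U_1, ..., U_n)
var = np.ones(dim); var[0] = P           # all unit variance except X_1
X = np.zeros((n + 1, dim)); X[1, 0] = 1.0
for k in range(2, n + 1):                # X_k = beta X_{k-1} + sigma U_{k-1}
    X[k] = beta * X[k - 1]
    X[k, k] += sigma
Y = np.zeros((n + 1, dim))
for k in range(1, n + 1):                # Y_k = X_k + alpha U_{k-1} + U_k
    Y[k] = X[k].copy()
    Y[k, k] += alpha
    Y[k, k + 1] += 1.0

Syy = (Y[1:] * var) @ Y[1:].T            # cov(Y^n)
Sxy = (X[1] * var) @ Y[1:].T             # cov(X_1, Y^n)
mmse = P - Sxy @ np.linalg.solve(Syy, Sxy)
print(mmse, P * x0**(2 * n))             # same exponential decay rate x0^(2n)
```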

The asymptotic stationarity here should not bother us, since $\{Y_k\}$ is stationary for $k \ge 2$ and $h(Y_1 \,|\, Y_2, \ldots, Y_n)$ is uniformly bounded in $n$, so that the entropy rate of the process $\{Y_k\}_{k=1}^{\infty}$ is determined by $(Y_2, Y_3, \ldots)$. Thus, from (11) in Section 2, the entropy rate of the output process $\{Y_k\}$ is given by

$$\frac{1}{4\pi}\int_{-\pi}^{\pi}\log\big(2\pi e\,S_Y(\omega)\big)\,d\omega = \frac12\log\big(2\pi e\,\beta^{-2}\big) = \frac12\log\big(2\pi e\,x_0^{-2}\big).$$

Hence we attain the feedback capacity $C_{FB} = -\log x_0$. Furthermore, it can be shown that the mean-square error of $X_1$ given the observations $Y_1, \ldots, Y_n$ decays exponentially with rate $\beta^2 = 2^{-2C_{FB}}$; in other words,

$$\operatorname{var}(X_1 \,|\, Y_1,\ldots,Y_n) = E\big(X_1 - E(X_1|Y_1,\ldots,Y_n)\big)^2 \doteq P\,2^{-2nC_{FB}}. \qquad (32)$$

We can interpret the signal $X_k$ as the adjustment of the receiver's estimate of the message-bearing signal $X_1$ after observing $(Y_1,\ldots,Y_{k-1})$. The connection to the Schalkwijk-Kailath coding scheme is now apparent. Recall that there is a simple linear relationship [66, Section 3.4], [67, Section 4.5] between the minimum mean-square error estimate (in other words, the minimum-variance biased estimate) of the Gaussian input $X_1$ and the maximum likelihood estimate (or equivalently, the minimum-variance unbiased estimate) of an arbitrary real input $\theta$. Thus we can easily transform the above coding scheme, based on the asymptotic equipartition property [2], into a Schalkwijk-like linear coding scheme based on maximum likelihood, nearest-neighbor decoding of $2^{nR}$ uniformly spaced points. More specifically, we send as $X_1$ one of the $2^{nR}$ possible signals

$$\theta \in \Theta := \{-\sqrt P,\; -\sqrt P + \Delta,\; -\sqrt P + 2\Delta,\; \ldots,\; \sqrt P - 2\Delta,\; \sqrt P - \Delta,\; \sqrt P\},$$

where $\Delta = 2\sqrt P/(2^{nR} - 1)$. Subsequent transmissions follow (31). The receiver forms the maximum likelihood estimate $\hat\theta_n(Y_1,\ldots,Y_n)$ and finds the signal point in $\Theta$ nearest to $\hat\theta_n$.

The analysis of the error for this coding scheme follows Schalkwijk [5] and Butman [14]. From (32) and the standard result on the relationship between the minimum-variance unbiased and biased estimation errors, the maximum likelihood estimate $\hat\theta_n$ is, conditioned on $\theta$, Gaussian with mean $\theta$ and variance decaying exponentially as $2^{-2nC_{FB}}$. Thus the nearest-neighbor decoding error, ignoring lower-order terms, is given by

$$P_e^{(n)} = E_\theta\left[\Pr\left\{|\hat\theta_n - \theta| \ge \frac{\Delta}{2}\;\Big|\;\theta\right\}\right] \doteq \operatorname{erfc}\left(\sqrt{\frac{3}{2\sigma_\theta^2}}\;2^{n(C_{FB}-R)}\right),$$

where

$$\operatorname{erfc}(x) = \frac{2}{\sqrt\pi}\int_x^\infty \exp(-t^2)\,dt$$

and $\sigma_\theta^2$ is the variance of the input signal $\theta$ chosen uniformly over $\Theta$. As long as $R < C_{FB}$, the decoding error decays doubly exponentially in $n$. Note that this coding scheme uses only the second moments of the noise process; this implies that the rate $C_{FB}$ is achievable over the additive noise channel with any non-Gaussian noise process having the same covariance matrix.
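Plugging numbers into this error expression shows the doubly exponential behavior plainly. In this sketch of ours we only track the $2^{n(C_{FB}-R)}$ growth inside the erfc and treat the constant factor as 1, since the constant does not affect the decay rate:

```python
from math import erfc, log

alpha, P = 0.5, 1.0
lo, hi = 0.0, 1.0
for _ in range(100):
    x = 0.5 * (lo + hi)
    lo, hi = (x, hi) if P * x**2 < (1 - x**2) * (1 - abs(alpha) * x)**2 else (lo, x)
C_fb = -log(0.5 * (lo + hi)) / log(2)        # capacity in bits per transmission

R = 0.8 * C_fb                               # any rate below capacity
for n in (5, 10, 15, 20):
    print(n, erfc(2.0 ** (n * (C_fb - R))))  # collapses doubly exponentially in n
```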

Acknowledgement

The author is very grateful to Tom Cover for his invaluable insights and guidance throughout this work. He also wishes to thank Styrmir Sigurjónsson and Erik Ordentlich for many enlightening discussions, and Sina Zahedi for his numerical optimization program, which was especially useful in the initial phase of this study.

Appendix: Asymptotic Equivalence of $K_Z$ and $\tilde K_Z$ for Feedback Capacity

Recall that $Z^n \sim N_n(0, K_Z)$ and $\tilde Z^n \sim N_n(0, \tilde K_Z)$. To stress the fact that we are dealing with two distinct noise covariance matrices, we use the notation $C_{n,FB}(K)$ for the $n$-block feedback capacity of the channel with $n$-block noise covariance matrix $K$. With a little abuse of notation, we similarly use $C_{FB}(K)$ for the feedback capacity of the channel with the infinite noise covariance naturally extended from $K$. Assume that $(B, K_V)$ maximizes

$$C_{n,FB}(K_Z) = \max\;\frac{1}{2n}\log\frac{\det((B+I)K_Z(B+I)^T + K_V)}{\det(K_Z)}$$

and that $(\tilde B, \tilde K_V)$ maximizes $C_{n,FB}(\tilde K_Z)$. Since $K_Z \succeq \tilde K_Z$, we have

$$(B+I)K_Z(B+I)^T \succeq (B+I)\tilde K_Z(B+I)^T, \qquad (33)$$

so that

$$C_{n,FB}(K_Z) = \frac1n\, I\big(V^n;\, V^n + (B+I)Z^n\big)\Big|_{V^n\sim N(0,K_V)} \le \frac1n\, I\big(V^n;\, V^n + (B+I)\tilde Z^n\big)\Big|_{V^n\sim N(0,K_V)} \qquad (34)$$
$$\le \frac1n\, I\big(\tilde V^n;\, \tilde V^n + (\tilde B+I)\tilde Z^n\big)\Big|_{\tilde V^n\sim N(0,\tilde K_V)} = C_{n,FB}(\tilde K_Z),$$

where (34) follows from (33), the divisibility of the Gaussian distribution, and the data processing inequality [1, Section 2.8]. On the other hand,

$$(\tilde B+I)K_Z(\tilde B+I)^T + \tilde K_V \succeq (\tilde B+I)\tilde K_Z(\tilde B+I)^T + \tilde K_V,$$

so that

$$C_{n,FB}(\tilde K_Z) = \frac1n\left[h\big(\tilde V^n + (\tilde B+I)\tilde Z^n\big) - h(\tilde Z^n)\right]_{\tilde V^n\sim N(0,\tilde K_V)} \le \frac1n\left[h\big(\tilde V^n + (\tilde B+I)Z^n\big) - h(\tilde Z^n)\right]_{\tilde V^n\sim N(0,\tilde K_V)}$$
$$\le \frac1n\left[h\big(V^n + (B+I)Z^n\big) - h(Z^n)\right]_{V^n\sim N(0,K_V)} + \frac1n\big(h(Z^n) - h(\tilde Z^n)\big) = C_{n,FB}(K_Z) + \frac1n\big(h(Z^n) - h(\tilde Z^n)\big).$$

But from the condition $|\alpha| < 1$, one can easily check that $\big(h(Z^n) - h(\tilde Z^n)\big)/n$ tends to zero as $n \to \infty$. Hence, $C_{FB}(K_Z) = C_{FB}(\tilde K_Z)$.

References

[1] T. M. Cover and J. A. Thomas, Elements of Information Theory, Wiley, New York, 1991.

[2] T. M. Cover and S. Pombra, "Gaussian feedback capacity," IEEE Trans. Inform. Theory, vol. IT-35, January 1989.

[3] G. Pólya and G. Szegő, Problems and Theorems in Analysis, I: Series, Integral Calculus, Theory of Functions, Springer, New York, 1976.

[4] J. P. M. Schalkwijk and T. Kailath, "A coding scheme for additive noise channels with feedback—I: No bandwidth constraint," IEEE Trans. Inform. Theory, vol. IT-12, April 1966.

[5] J. P. M. Schalkwijk, "A coding scheme for additive noise channels with feedback—II: Band-limited signals," IEEE Trans. Inform. Theory, vol. IT-12, April 1966.

[6] J. Wolfowitz, "Note on the Gaussian channel with feedback and a power constraint," Information and Control, vol. 12, pp. 71-78, 1968.

[7] C. E. Shannon, "The zero error capacity of a noisy channel," IRE Trans. Inform. Theory, vol. IT-2, pp. 8-19, September 1956.

[8] T. T. Kadota, M. Zakai, and J. Ziv, "Mutual information of the white Gaussian channel with and without feedback," IEEE Trans. Inform. Theory, vol. IT-17, July 1971.

[9] T. T. Kadota, M. Zakai, and J. Ziv, "Capacity of a continuous memoryless channel with feedback," IEEE Trans. Inform. Theory, vol. IT-17, July 1971.

[10] M. S. Pinsker, "The probability of error in block transmission in a memoryless Gaussian channel with feedback," Probl. Inf. Transm., vol. 4, 1968.

[11] A. J. Kramer, "Improving communication reliability by use of an intermittent feedback channel," IEEE Trans. Inform. Theory, vol. IT-15, January 1969.

[12] K. Sh. Zigangirov, "Upper bounds for the error probability for channels with feedback," Probl. Inf. Transm., vol. 6, 1970.

[13] J. P. M. Schalkwijk, "Center-of-gravity information feedback," IEEE Trans. Inform. Theory, vol. IT-14, March 1968.

[14] S. A. Butman, "A general formulation of linear feedback communication systems with solutions," IEEE Trans. Inform. Theory, vol. IT-15, May 1969.

[15] J. Wolfowitz, "Signalling over a Gaussian channel with feedback and autoregressive noise," J. Appl. Probab., vol. 12, 1975.

[16] J. C. Tiernan, "Analysis of the optimum linear system for the autoregressive forward channel with noiseless feedback," IEEE Trans. Inform. Theory, vol. IT-22, May 1976.

[17] S. Yang, A. Kavcic, and S. Tatikonda, "Linear Gaussian channels: feedback capacity under power constraints," in Proc. IEEE Int. Symp. Inform. Theory, p. 72, June 2004.

[18] J. C. Tiernan and J. P. M. Schalkwijk, "An upper bound to the capacity of the band-limited Gaussian autoregressive channel with noiseless feedback," IEEE Trans. Inform. Theory, vol. IT-20, pp. 311-316, May 1974.

[19] S. A. Butman, "Linear feedback rate bounds for regressive channels," IEEE Trans. Inform. Theory, vol. IT-22, May 1976.

[20] L. H. Ozarow, "Random coding for additive Gaussian channels with feedback," IEEE Trans. Inform. Theory, vol. IT-36, pp. 17-22, January 1990.

[21] L. H. Ozarow, "Upper bounds on the capacity of Gaussian channels with feedback," IEEE Trans. Inform. Theory, vol. IT-36, pp. 156-161, January 1990.

[22] E. Ordentlich, "A class of optimal coding schemes for moving average additive Gaussian noise channels with feedback," in Proc. IEEE Int. Symp. Inform. Theory, p. 467, June 1994.

[23] E. Ordentlich, private communication.

[24] P. Elias, "Channel capacity without coding," M.I.T. Research Laboratory of Electronics, Quarterly Progress Report, October 15, 1956.

[25] P. Elias, "Channel capacity without coding," in Lectures on Communication System Theory, E. J. Baghdady, ed., New York: McGraw-Hill, 1961.

[26] P. Elias, "Networks of Gaussian channels with applications to feedback systems," IEEE Trans. Inform. Theory, vol. IT-13, July 1967.

[27] G. L. Turin, "Signal design for sequential detection systems with feedback," IEEE Trans. Inform. Theory, vol. IT-11, July 1965.

[28] G. L. Turin, "Comparison of sequential and nonsequential detection systems with uncertainty feedback," IEEE Trans. Inform. Theory, vol. IT-12, pp. 5-8, January 1966.

[29] G. L. Turin, "More on uncertainty feedback: the bandlimited case," IEEE Trans. Inform. Theory, vol. IT-14, March 1968.

[30] M. Horstein, "On the design of signals for sequential and nonsequential detection systems with feedback," IEEE Trans. Inform. Theory, vol. IT-12, October 1966.

[31] R. Z. Khas'minskii, "Sequential signal transmission in a Gaussian channel with feedback," Probl. Inf. Transm., vol. 3, 1967.

[32] M. J. Ferguson, "Optimal signal design for sequential signaling over a channel with feedback," IEEE Trans. Inform. Theory, vol. IT-14, March 1968.

[33] J. K. Omura, "Optimum linear transmission of analog data for channels with feedback," IEEE Trans. Inform. Theory, vol. IT-14, January 1968.

[34] A. D. Wyner, "On the Schalkwijk-Kailath coding scheme with a peak energy constraint," IEEE Trans. Inform. Theory, vol. IT-14, January 1968.

[35] J. P. M. Schalkwijk and M. E. Barron, "Sequential signalling under a peak power constraint," IEEE Trans. Inform. Theory, vol. IT-17, May 1971.

[36] R. L. Kashyap, "Feedback coding schemes for an additive noise channel with a noisy feedback link," IEEE Trans. Inform. Theory, vol. IT-14, May 1968.

[37] S. S. Lavenberg, "Feedback communication using orthogonal signals," IEEE Trans. Inform. Theory, vol. IT-15, July 1969.

[38] S. S. Lavenberg, "Repetitive signaling using a noisy feedback channel," IEEE Trans. Inform. Theory, vol. IT-17, May 1971.

[39] T. Kailath, "An application of Shannon's rate-distortion theory to analog communication over feedback channels," in Proc. Princeton Symp. System Science, March 1967.

[40] T. J. Cruise, "Achievement of rate-distortion bound over additive white noise channel utilizing a noiseless feedback channel," Proc. IEEE, vol. 55, April 1967.

[41] J. P. M. Schalkwijk and L. I. Bluestein, "Transmission of analog waveforms through channels with feedback," IEEE Trans. Inform. Theory, vol. IT-13, October 1967.

[42] I. A. Ovseevich, "Optimum transmission of a Gaussian message over a channel with white Gaussian noise and feedback," Probl. Inf. Transm., vol. 6, pp. 191-199, 1970.

[43] S. Ihara, "Optimal coding in white Gaussian channel with feedback," in Proc. Second Japan-USSR Symp. Probability Theory (Lecture Notes in Mathematics, vol. 330), Berlin: Springer-Verlag, 1973.

[44] L. H. Ozarow and S. K. Leung-Yan-Cheong, "An achievable region and outer bound for the Gaussian broadcast channel with feedback," IEEE Trans. Inform. Theory, vol. IT-30, July 1984.

[45] A. A. El Gamal, "The feedback capacity of degraded broadcast channels," IEEE Trans. Inform. Theory, vol. IT-24, May 1978.

[46] L. H. Ozarow, "The capacity of the white Gaussian multiple access channel with feedback," IEEE Trans. Inform. Theory, vol. IT-30, July 1984.

[47] G. Kramer, "Feedback strategies for white Gaussian interference networks," IEEE Trans. Inform. Theory, vol. IT-48, June 2002.

[48] M. S. Pinsker, talk delivered at the Soviet Information Theory Meeting, 1969. (No abstract published.)

[49] P. M. Ebert, "The capacity region of the Gaussian channel with feedback," Bell Syst. Tech. J., vol. 49, October 1970.

[50] A. Dembo, "On Gaussian feedback capacity," IEEE Trans. Inform. Theory, vol. IT-35, September 1989.

[51] K. Yanagi, "An upper bound to the capacity of discrete time Gaussian channel with feedback—II," IEEE Trans. Inform. Theory, vol. IT-40, March 1994.

[52] H. W. Chen and K. Yanagi, "Refinements of the half-bit and factor-of-two bounds for capacity in Gaussian channel with feedback," IEEE Trans. Inform. Theory, vol. IT-45, January 1999.

[53] H. W. Chen and K. Yanagi, "Upper bounds on the capacity of discrete-time blockwise white Gaussian channels with feedback," IEEE Trans. Inform. Theory, vol. IT-46, May 2000.

[54] T. M. Cover, "Conjecture: feedback doesn't help much," in Open Problems in Communication and Computation, T. M. Cover and B. Gopinath, eds., New York: Springer-Verlag, 1987.

[55] J. A. Thomas, "Feedback can at most double Gaussian multiple access channel capacity," IEEE Trans. Inform. Theory, vol. IT-33, pp. 711-716, September 1987.

[56] S. Pombra and T. M. Cover, "Non-white Gaussian multiple access channels with feedback," IEEE Trans. Inform. Theory, vol. IT-40, May 1994.

[57] E. Ordentlich, "On the factor-of-two bound for Gaussian multiple-access channels with feedback," IEEE Trans. Inform. Theory, vol. IT-42, November 1996.

[58] S. Ihara, "On the capacity of the continuous time Gaussian channel with feedback," J. Multivariate Anal., vol. 10, 1980.

[59] S. Ihara, "Coding theorems for a continuous-time Gaussian channel with feedback," IEEE Trans. Inform. Theory, vol. IT-40, November 1994.

[60] S. Ihara, "Capacity of discrete time Gaussian channel with and without feedback—I," Mem. Fac. Sci. Kochi Univ. Ser. A Math., vol. 9, pp. 21-36, 1988.

[61] S. Ihara, "Capacity of mismatched Gaussian channels with and without feedback," Probab. Theory Related Fields, vol. 84, 1990.

[62] S. Ihara, Information Theory for Continuous Systems, World Scientific, Singapore, 1993.

[63] W. Rudin, Real and Complex Analysis, 3rd ed., McGraw-Hill, New York, 1987.


More information

Capacity Region of Reversely Degraded Gaussian MIMO Broadcast Channel

Capacity Region of Reversely Degraded Gaussian MIMO Broadcast Channel Capacity Region of Reversely Degraded Gaussian MIMO Broadcast Channel Jun Chen Dept. of Electrical and Computer Engr. McMaster University Hamilton, Ontario, Canada Chao Tian AT&T Labs-Research 80 Park

More information

Approximate Capacity of Fast Fading Interference Channels with no CSIT

Approximate Capacity of Fast Fading Interference Channels with no CSIT Approximate Capacity of Fast Fading Interference Channels with no CSIT Joyson Sebastian, Can Karakus, Suhas Diggavi Abstract We develop a characterization of fading models, which assigns a number called

More information

Capacity-achieving Feedback Scheme for Flat Fading Channels with Channel State Information

Capacity-achieving Feedback Scheme for Flat Fading Channels with Channel State Information Capacity-achieving Feedback Scheme for Flat Fading Channels with Channel State Information Jialing Liu liujl@iastate.edu Sekhar Tatikonda sekhar.tatikonda@yale.edu Nicola Elia nelia@iastate.edu Dept. of

More information

Capacity of Memoryless Channels and Block-Fading Channels With Designable Cardinality-Constrained Channel State Feedback

Capacity of Memoryless Channels and Block-Fading Channels With Designable Cardinality-Constrained Channel State Feedback 2038 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 50, NO. 9, SEPTEMBER 2004 Capacity of Memoryless Channels and Block-Fading Channels With Designable Cardinality-Constrained Channel State Feedback Vincent

More information

Can Feedback Increase the Capacity of the Energy Harvesting Channel?

Can Feedback Increase the Capacity of the Energy Harvesting Channel? Can Feedback Increase the Capacity of the Energy Harvesting Channel? Dor Shaviv EE Dept., Stanford University shaviv@stanford.edu Ayfer Özgür EE Dept., Stanford University aozgur@stanford.edu Haim Permuter

More information

Multiuser Successive Refinement and Multiple Description Coding

Multiuser Successive Refinement and Multiple Description Coding Multiuser Successive Refinement and Multiple Description Coding Chao Tian Laboratory for Information and Communication Systems (LICOS) School of Computer and Communication Sciences EPFL Lausanne Switzerland

More information

18.2 Continuous Alphabet (discrete-time, memoryless) Channel

18.2 Continuous Alphabet (discrete-time, memoryless) Channel 0-704: Information Processing and Learning Spring 0 Lecture 8: Gaussian channel, Parallel channels and Rate-distortion theory Lecturer: Aarti Singh Scribe: Danai Koutra Disclaimer: These notes have not

More information

Error Exponent Region for Gaussian Broadcast Channels

Error Exponent Region for Gaussian Broadcast Channels Error Exponent Region for Gaussian Broadcast Channels Lihua Weng, S. Sandeep Pradhan, and Achilleas Anastasopoulos Electrical Engineering and Computer Science Dept. University of Michigan, Ann Arbor, MI

More information

Upper Bounds on the Capacity of Binary Intermittent Communication

Upper Bounds on the Capacity of Binary Intermittent Communication Upper Bounds on the Capacity of Binary Intermittent Communication Mostafa Khoshnevisan and J. Nicholas Laneman Department of Electrical Engineering University of Notre Dame Notre Dame, Indiana 46556 Email:{mhoshne,

More information

On the Duality between Multiple-Access Codes and Computation Codes

On the Duality between Multiple-Access Codes and Computation Codes On the Duality between Multiple-Access Codes and Computation Codes Jingge Zhu University of California, Berkeley jingge.zhu@berkeley.edu Sung Hoon Lim KIOST shlim@kiost.ac.kr Michael Gastpar EPFL michael.gastpar@epfl.ch

More information

On the Duality of Gaussian Multiple-Access and Broadcast Channels

On the Duality of Gaussian Multiple-Access and Broadcast Channels On the Duality of Gaussian ultiple-access and Broadcast Channels Xiaowei Jin I. INTODUCTION Although T. Cover has been pointed out in [] that one would have expected a duality between the broadcast channel(bc)

More information

On Multiple User Channels with State Information at the Transmitters

On Multiple User Channels with State Information at the Transmitters On Multiple User Channels with State Information at the Transmitters Styrmir Sigurjónsson and Young-Han Kim* Information Systems Laboratory Stanford University Stanford, CA 94305, USA Email: {styrmir,yhk}@stanford.edu

More information

On the Secrecy Capacity of Fading Channels

On the Secrecy Capacity of Fading Channels On the Secrecy Capacity of Fading Channels arxiv:cs/63v [cs.it] 7 Oct 26 Praveen Kumar Gopala, Lifeng Lai and Hesham El Gamal Department of Electrical and Computer Engineering The Ohio State University

More information

ECE Information theory Final (Fall 2008)

ECE Information theory Final (Fall 2008) ECE 776 - Information theory Final (Fall 2008) Q.1. (1 point) Consider the following bursty transmission scheme for a Gaussian channel with noise power N and average power constraint P (i.e., 1/n X n i=1

More information

Common Information. Abbas El Gamal. Stanford University. Viterbi Lecture, USC, April 2014

Common Information. Abbas El Gamal. Stanford University. Viterbi Lecture, USC, April 2014 Common Information Abbas El Gamal Stanford University Viterbi Lecture, USC, April 2014 Andrew Viterbi s Fabulous Formula, IEEE Spectrum, 2010 El Gamal (Stanford University) Disclaimer Viterbi Lecture 2

More information

A Comparison of Superposition Coding Schemes

A Comparison of Superposition Coding Schemes A Comparison of Superposition Coding Schemes Lele Wang, Eren Şaşoğlu, Bernd Bandemer, and Young-Han Kim Department of Electrical and Computer Engineering University of California, San Diego La Jolla, CA

More information

Feedback Capacity of a Class of Symmetric Finite-State Markov Channels

Feedback Capacity of a Class of Symmetric Finite-State Markov Channels Feedback Capacity of a Class of Symmetric Finite-State Markov Channels Nevroz Şen, Fady Alajaji and Serdar Yüksel Department of Mathematics and Statistics Queen s University Kingston, ON K7L 3N6, Canada

More information

(each row defines a probability distribution). Given n-strings x X n, y Y n we can use the absence of memory in the channel to compute

(each row defines a probability distribution). Given n-strings x X n, y Y n we can use the absence of memory in the channel to compute ENEE 739C: Advanced Topics in Signal Processing: Coding Theory Instructor: Alexander Barg Lecture 6 (draft; 9/6/03. Error exponents for Discrete Memoryless Channels http://www.enee.umd.edu/ abarg/enee739c/course.html

More information

Continuous-Model Communication Complexity with Application in Distributed Resource Allocation in Wireless Ad hoc Networks

Continuous-Model Communication Complexity with Application in Distributed Resource Allocation in Wireless Ad hoc Networks Continuous-Model Communication Complexity with Application in Distributed Resource Allocation in Wireless Ad hoc Networks Husheng Li 1 and Huaiyu Dai 2 1 Department of Electrical Engineering and Computer

More information

Discrete Memoryless Channels with Memoryless Output Sequences

Discrete Memoryless Channels with Memoryless Output Sequences Discrete Memoryless Channels with Memoryless utput Sequences Marcelo S Pinho Department of Electronic Engineering Instituto Tecnologico de Aeronautica Sao Jose dos Campos, SP 12228-900, Brazil Email: mpinho@ieeeorg

More information

Lecture 6 Channel Coding over Continuous Channels

Lecture 6 Channel Coding over Continuous Channels Lecture 6 Channel Coding over Continuous Channels I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw November 9, 015 1 / 59 I-Hsiang Wang IT Lecture 6 We have

More information

AN INTRODUCTION TO SECRECY CAPACITY. 1. Overview

AN INTRODUCTION TO SECRECY CAPACITY. 1. Overview AN INTRODUCTION TO SECRECY CAPACITY BRIAN DUNN. Overview This paper introduces the reader to several information theoretic aspects of covert communications. In particular, it discusses fundamental limits

More information

(Classical) Information Theory III: Noisy channel coding

(Classical) Information Theory III: Noisy channel coding (Classical) Information Theory III: Noisy channel coding Sibasish Ghosh The Institute of Mathematical Sciences CIT Campus, Taramani, Chennai 600 113, India. p. 1 Abstract What is the best possible way

More information

Variable Length Codes for Degraded Broadcast Channels

Variable Length Codes for Degraded Broadcast Channels Variable Length Codes for Degraded Broadcast Channels Stéphane Musy School of Computer and Communication Sciences, EPFL CH-1015 Lausanne, Switzerland Email: stephane.musy@ep.ch Abstract This paper investigates

More information

This research was partially supported by the Faculty Research and Development Fund of the University of North Carolina at Wilmington

This research was partially supported by the Faculty Research and Development Fund of the University of North Carolina at Wilmington LARGE SCALE GEOMETRIC PROGRAMMING: AN APPLICATION IN CODING THEORY Yaw O. Chang and John K. Karlof Mathematical Sciences Department The University of North Carolina at Wilmington This research was partially

More information

Linear Codes, Target Function Classes, and Network Computing Capacity

Linear Codes, Target Function Classes, and Network Computing Capacity Linear Codes, Target Function Classes, and Network Computing Capacity Rathinakumar Appuswamy, Massimo Franceschetti, Nikhil Karamchandani, and Kenneth Zeger IEEE Transactions on Information Theory Submitted:

More information

Fading Wiretap Channel with No CSI Anywhere

Fading Wiretap Channel with No CSI Anywhere Fading Wiretap Channel with No CSI Anywhere Pritam Mukherjee Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 7 pritamm@umd.edu ulukus@umd.edu Abstract

More information

Computation of Information Rates from Finite-State Source/Channel Models

Computation of Information Rates from Finite-State Source/Channel Models Allerton 2002 Computation of Information Rates from Finite-State Source/Channel Models Dieter Arnold arnold@isi.ee.ethz.ch Hans-Andrea Loeliger loeliger@isi.ee.ethz.ch Pascal O. Vontobel vontobel@isi.ee.ethz.ch

More information

Soft Covering with High Probability

Soft Covering with High Probability Soft Covering with High Probability Paul Cuff Princeton University arxiv:605.06396v [cs.it] 20 May 206 Abstract Wyner s soft-covering lemma is the central analysis step for achievability proofs of information

More information

Concatenated Coding Using Linear Schemes for Gaussian Broadcast Channels with Noisy Channel Output Feedback

Concatenated Coding Using Linear Schemes for Gaussian Broadcast Channels with Noisy Channel Output Feedback IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. XX, NO. Y, MONTH 204 Concatenated Coding Using Linear Schemes for Gaussian Broadcast Channels with Noisy Channel Output Feedback Ziad Ahmad, Student Member, IEEE,

More information

On the Secrecy Capacity of the Z-Interference Channel

On the Secrecy Capacity of the Z-Interference Channel On the Secrecy Capacity of the Z-Interference Channel Ronit Bustin Tel Aviv University Email: ronitbustin@post.tau.ac.il Mojtaba Vaezi Princeton University Email: mvaezi@princeton.edu Rafael F. Schaefer

More information

Appendix B Information theory from first principles

Appendix B Information theory from first principles Appendix B Information theory from first principles This appendix discusses the information theory behind the capacity expressions used in the book. Section 8.3.4 is the only part of the book that supposes

More information

Optimal Power Allocation for Parallel Gaussian Broadcast Channels with Independent and Common Information

Optimal Power Allocation for Parallel Gaussian Broadcast Channels with Independent and Common Information SUBMIED O IEEE INERNAIONAL SYMPOSIUM ON INFORMAION HEORY, DE. 23 1 Optimal Power Allocation for Parallel Gaussian Broadcast hannels with Independent and ommon Information Nihar Jindal and Andrea Goldsmith

More information

EE229B - Final Project. Capacity-Approaching Low-Density Parity-Check Codes

EE229B - Final Project. Capacity-Approaching Low-Density Parity-Check Codes EE229B - Final Project Capacity-Approaching Low-Density Parity-Check Codes Pierre Garrigues EECS department, UC Berkeley garrigue@eecs.berkeley.edu May 13, 2005 Abstract The class of low-density parity-check

More information

Information Theory - Entropy. Figure 3

Information Theory - Entropy. Figure 3 Concept of Information Information Theory - Entropy Figure 3 A typical binary coded digital communication system is shown in Figure 3. What is involved in the transmission of information? - The system

More information

A Comparison of Two Achievable Rate Regions for the Interference Channel

A Comparison of Two Achievable Rate Regions for the Interference Channel A Comparison of Two Achievable Rate Regions for the Interference Channel Hon-Fah Chong, Mehul Motani, and Hari Krishna Garg Electrical & Computer Engineering National University of Singapore Email: {g030596,motani,eleghk}@nus.edu.sg

More information

Degrees of Freedom Region of the Gaussian MIMO Broadcast Channel with Common and Private Messages

Degrees of Freedom Region of the Gaussian MIMO Broadcast Channel with Common and Private Messages Degrees of Freedom Region of the Gaussian MIMO Broadcast hannel with ommon and Private Messages Ersen Ekrem Sennur Ulukus Department of Electrical and omputer Engineering University of Maryland, ollege

More information

Performance-based Security for Encoding of Information Signals. FA ( ) Paul Cuff (Princeton University)

Performance-based Security for Encoding of Information Signals. FA ( ) Paul Cuff (Princeton University) Performance-based Security for Encoding of Information Signals FA9550-15-1-0180 (2015-2018) Paul Cuff (Princeton University) Contributors Two students finished PhD Tiance Wang (Goldman Sachs) Eva Song

More information

Multiaccess Channels with State Known to One Encoder: A Case of Degraded Message Sets

Multiaccess Channels with State Known to One Encoder: A Case of Degraded Message Sets Multiaccess Channels with State Known to One Encoder: A Case of Degraded Message Sets Shivaprasad Kotagiri and J. Nicholas Laneman Department of Electrical Engineering University of Notre Dame Notre Dame,

More information

224 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 58, NO. 1, JANUARY 2012

224 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 58, NO. 1, JANUARY 2012 224 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 58, NO 1, JANUARY 2012 Linear-Feedback Sum-Capacity for Gaussian Multiple Access Channels Ehsan Ardestanizadeh, Member, IEEE, Michèle Wigger, Member, IEEE,

More information

The Gallager Converse

The Gallager Converse The Gallager Converse Abbas El Gamal Director, Information Systems Laboratory Department of Electrical Engineering Stanford University Gallager s 75th Birthday 1 Information Theoretic Limits Establishing

More information

Joint Source-Channel Coding for the Multiple-Access Relay Channel

Joint Source-Channel Coding for the Multiple-Access Relay Channel Joint Source-Channel Coding for the Multiple-Access Relay Channel Yonathan Murin, Ron Dabora Department of Electrical and Computer Engineering Ben-Gurion University, Israel Email: moriny@bgu.ac.il, ron@ee.bgu.ac.il

More information

Dirty Paper Coding vs. TDMA for MIMO Broadcast Channels

Dirty Paper Coding vs. TDMA for MIMO Broadcast Channels TO APPEAR IEEE INTERNATIONAL CONFERENCE ON COUNICATIONS, JUNE 004 1 Dirty Paper Coding vs. TDA for IO Broadcast Channels Nihar Jindal & Andrea Goldsmith Dept. of Electrical Engineering, Stanford University

More information

LECTURE 18. Lecture outline Gaussian channels: parallel colored noise inter-symbol interference general case: multiple inputs and outputs

LECTURE 18. Lecture outline Gaussian channels: parallel colored noise inter-symbol interference general case: multiple inputs and outputs LECTURE 18 Last time: White Gaussian noise Bandlimited WGN Additive White Gaussian Noise (AWGN) channel Capacity of AWGN channel Application: DS-CDMA systems Spreading Coding theorem Lecture outline Gaussian

More information

ON THE BOUNDEDNESS BEHAVIOR OF THE SPECTRAL FACTORIZATION IN THE WIENER ALGEBRA FOR FIR DATA

ON THE BOUNDEDNESS BEHAVIOR OF THE SPECTRAL FACTORIZATION IN THE WIENER ALGEBRA FOR FIR DATA ON THE BOUNDEDNESS BEHAVIOR OF THE SPECTRAL FACTORIZATION IN THE WIENER ALGEBRA FOR FIR DATA Holger Boche and Volker Pohl Technische Universität Berlin, Heinrich Hertz Chair for Mobile Communications Werner-von-Siemens

More information

Lattices for Distributed Source Coding: Jointly Gaussian Sources and Reconstruction of a Linear Function

Lattices for Distributed Source Coding: Jointly Gaussian Sources and Reconstruction of a Linear Function Lattices for Distributed Source Coding: Jointly Gaussian Sources and Reconstruction of a Linear Function Dinesh Krithivasan and S. Sandeep Pradhan Department of Electrical Engineering and Computer Science,

More information

Information Theory Meets Game Theory on The Interference Channel

Information Theory Meets Game Theory on The Interference Channel Information Theory Meets Game Theory on The Interference Channel Randall A. Berry Dept. of EECS Northwestern University e-mail: rberry@eecs.northwestern.edu David N. C. Tse Wireless Foundations University

More information

On the Capacity and Degrees of Freedom Regions of MIMO Interference Channels with Limited Receiver Cooperation

On the Capacity and Degrees of Freedom Regions of MIMO Interference Channels with Limited Receiver Cooperation On the Capacity and Degrees of Freedom Regions of MIMO Interference Channels with Limited Receiver Cooperation Mehdi Ashraphijuo, Vaneet Aggarwal and Xiaodong Wang 1 arxiv:1308.3310v1 [cs.it] 15 Aug 2013

More information

Wideband Fading Channel Capacity with Training and Partial Feedback

Wideband Fading Channel Capacity with Training and Partial Feedback Wideband Fading Channel Capacity with Training and Partial Feedback Manish Agarwal, Michael L. Honig ECE Department, Northwestern University 145 Sheridan Road, Evanston, IL 6008 USA {m-agarwal,mh}@northwestern.edu

More information

Dispersion of the Gilbert-Elliott Channel

Dispersion of the Gilbert-Elliott Channel Dispersion of the Gilbert-Elliott Channel Yury Polyanskiy Email: ypolyans@princeton.edu H. Vincent Poor Email: poor@princeton.edu Sergio Verdú Email: verdu@princeton.edu Abstract Channel dispersion plays

More information

ASIGNIFICANT research effort has been devoted to the. Optimal State Estimation for Stochastic Systems: An Information Theoretic Approach

ASIGNIFICANT research effort has been devoted to the. Optimal State Estimation for Stochastic Systems: An Information Theoretic Approach IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 42, NO 6, JUNE 1997 771 Optimal State Estimation for Stochastic Systems: An Information Theoretic Approach Xiangbo Feng, Kenneth A Loparo, Senior Member, IEEE,

More information

Lecture Notes 1: Vector spaces

Lecture Notes 1: Vector spaces Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector

More information

Mismatched Multi-letter Successive Decoding for the Multiple-Access Channel

Mismatched Multi-letter Successive Decoding for the Multiple-Access Channel Mismatched Multi-letter Successive Decoding for the Multiple-Access Channel Jonathan Scarlett University of Cambridge jms265@cam.ac.uk Alfonso Martinez Universitat Pompeu Fabra alfonso.martinez@ieee.org

More information

6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011

6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011 6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011 On the Structure of Real-Time Encoding and Decoding Functions in a Multiterminal Communication System Ashutosh Nayyar, Student

More information

5218 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 12, DECEMBER 2006

5218 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 12, DECEMBER 2006 5218 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 12, DECEMBER 2006 Source Coding With Limited-Look-Ahead Side Information at the Decoder Tsachy Weissman, Member, IEEE, Abbas El Gamal, Fellow,

More information

Optimal Sequences and Sum Capacity of Synchronous CDMA Systems

Optimal Sequences and Sum Capacity of Synchronous CDMA Systems Optimal Sequences and Sum Capacity of Synchronous CDMA Systems Pramod Viswanath and Venkat Anantharam {pvi, ananth}@eecs.berkeley.edu EECS Department, U C Berkeley CA 9470 Abstract The sum capacity of

More information

IN this paper, we consider the capacity of sticky channels, a

IN this paper, we consider the capacity of sticky channels, a 72 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 1, JANUARY 2008 Capacity Bounds for Sticky Channels Michael Mitzenmacher, Member, IEEE Abstract The capacity of sticky channels, a subclass of insertion

More information

An Outer Bound for the Gaussian. Interference channel with a relay.

An Outer Bound for the Gaussian. Interference channel with a relay. An Outer Bound for the Gaussian Interference Channel with a Relay Ivana Marić Stanford University Stanford, CA ivanam@wsl.stanford.edu Ron Dabora Ben-Gurion University Be er-sheva, Israel ron@ee.bgu.ac.il

More information

Interactive Interference Alignment

Interactive Interference Alignment Interactive Interference Alignment Quan Geng, Sreeram annan, and Pramod Viswanath Coordinated Science Laboratory and Dept. of ECE University of Illinois, Urbana-Champaign, IL 61801 Email: {geng5, kannan1,

More information

A Half-Duplex Cooperative Scheme with Partial Decode-Forward Relaying

A Half-Duplex Cooperative Scheme with Partial Decode-Forward Relaying A Half-Duplex Cooperative Scheme with Partial Decode-Forward Relaying Ahmad Abu Al Haija, and Mai Vu, Department of Electrical and Computer Engineering McGill University Montreal, QC H3A A7 Emails: ahmadabualhaija@mailmcgillca,

More information

Lecture 5: Channel Capacity. Copyright G. Caire (Sample Lectures) 122

Lecture 5: Channel Capacity. Copyright G. Caire (Sample Lectures) 122 Lecture 5: Channel Capacity Copyright G. Caire (Sample Lectures) 122 M Definitions and Problem Setup 2 X n Y n Encoder p(y x) Decoder ˆM Message Channel Estimate Definition 11. Discrete Memoryless Channel

More information

Lecture 4 Channel Coding

Lecture 4 Channel Coding Capacity and the Weak Converse Lecture 4 Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 15, 2014 1 / 16 I-Hsiang Wang NIT Lecture 4 Capacity

More information

IN this paper, we show that the scalar Gaussian multiple-access

IN this paper, we show that the scalar Gaussian multiple-access 768 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 50, NO. 5, MAY 2004 On the Duality of Gaussian Multiple-Access and Broadcast Channels Nihar Jindal, Student Member, IEEE, Sriram Vishwanath, and Andrea

More information

Gaussian channel. Information theory 2013, lecture 6. Jens Sjölund. 8 May Jens Sjölund (IMT, LiU) Gaussian channel 1 / 26

Gaussian channel. Information theory 2013, lecture 6. Jens Sjölund. 8 May Jens Sjölund (IMT, LiU) Gaussian channel 1 / 26 Gaussian channel Information theory 2013, lecture 6 Jens Sjölund 8 May 2013 Jens Sjölund (IMT, LiU) Gaussian channel 1 / 26 Outline 1 Definitions 2 The coding theorem for Gaussian channel 3 Bandlimited

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 4, APRIL

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 4, APRIL IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 4, APRIL 2006 1545 Bounds on Capacity Minimum EnergyPerBit for AWGN Relay Channels Abbas El Gamal, Fellow, IEEE, Mehdi Mohseni, Student Member, IEEE,

More information

Feedback Stabilization over a First Order Moving Average Gaussian Noise Channel

Feedback Stabilization over a First Order Moving Average Gaussian Noise Channel Feedback Stabiliation over a First Order Moving Average Gaussian Noise Richard H. Middleton Alejandro J. Rojas James S. Freudenberg Julio H. Braslavsky Abstract Recent developments in information theory

More information

On Gaussian MIMO Broadcast Channels with Common and Private Messages

On Gaussian MIMO Broadcast Channels with Common and Private Messages On Gaussian MIMO Broadcast Channels with Common and Private Messages Ersen Ekrem Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 20742 ersen@umd.edu

More information

An Extended Fano s Inequality for the Finite Blocklength Coding

An Extended Fano s Inequality for the Finite Blocklength Coding An Extended Fano s Inequality for the Finite Bloclength Coding Yunquan Dong, Pingyi Fan {dongyq8@mails,fpy@mail}.tsinghua.edu.cn Department of Electronic Engineering, Tsinghua University, Beijing, P.R.

More information

Lecture 6: Gaussian Channels. Copyright G. Caire (Sample Lectures) 157

Lecture 6: Gaussian Channels. Copyright G. Caire (Sample Lectures) 157 Lecture 6: Gaussian Channels Copyright G. Caire (Sample Lectures) 157 Differential entropy (1) Definition 18. The (joint) differential entropy of a continuous random vector X n p X n(x) over R is: Z h(x

More information

Lecture 8: Shannon s Noise Models

Lecture 8: Shannon s Noise Models Error Correcting Codes: Combinatorics, Algorithms and Applications (Fall 2007) Lecture 8: Shannon s Noise Models September 14, 2007 Lecturer: Atri Rudra Scribe: Sandipan Kundu& Atri Rudra Till now we have

More information

Statistical Signal Processing Detection, Estimation, and Time Series Analysis

Statistical Signal Processing Detection, Estimation, and Time Series Analysis Statistical Signal Processing Detection, Estimation, and Time Series Analysis Louis L. Scharf University of Colorado at Boulder with Cedric Demeure collaborating on Chapters 10 and 11 A TT ADDISON-WESLEY

More information

Capacity of a Two-way Function Multicast Channel

Capacity of a Two-way Function Multicast Channel Capacity of a Two-way Function Multicast Channel 1 Seiyun Shin, Student Member, IEEE and Changho Suh, Member, IEEE Abstract We explore the role of interaction for the problem of reliable computation over

More information

Universal Anytime Codes: An approach to uncertain channels in control

Universal Anytime Codes: An approach to uncertain channels in control Universal Anytime Codes: An approach to uncertain channels in control paper by Stark Draper and Anant Sahai presented by Sekhar Tatikonda Wireless Foundations Department of Electrical Engineering and Computer

More information

Lecture 1: The Multiple Access Channel. Copyright G. Caire 12

Lecture 1: The Multiple Access Channel. Copyright G. Caire 12 Lecture 1: The Multiple Access Channel Copyright G. Caire 12 Outline Two-user MAC. The Gaussian case. The K-user case. Polymatroid structure and resource allocation problems. Copyright G. Caire 13 Two-user

More information

Energy State Amplification in an Energy Harvesting Communication System

Energy State Amplification in an Energy Harvesting Communication System Energy State Amplification in an Energy Harvesting Communication System Omur Ozel Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland College Park, MD 20742 omur@umd.edu

More information