Transmitter optimization for distributed Gaussian MIMO channels


Hon-Fah Chong, Electrical & Computer Eng. Dept., National University of Singapore. Email: chonghonfah@ieee.org
Mehul Motani, Electrical & Computer Eng. Dept., National University of Singapore. Email: motani@nus.edu.sg
Feng Nan, Singapore-MIT Alliance, National University of Singapore. Email: fengnanmj@gmail.com

Abstract—In this paper, we consider the transmitter optimization problem for a point-to-point communication system with multiple base-stations cooperating to transmit to a single user. Each base-station is equipped with multiple antennas, with a separate average power constraint imposed on each. First, we determine certain sufficient conditions for optimality which, in turn, motivate the development of a simple water-filling algorithm that can be applied if H^T H is positive definite (H is the composite channel matrix). The water-filling algorithm is applied to each individual base-station, and the symmetric matrix thus obtained is the optimal transmit covariance matrix if it is also positive semi-definite and none of its diagonal elements are zero. Secondly, we show that for a special class of distributed Gaussian MIMO channels, where the individual channel matrices have the same unitary left singular vector matrix, the transmitter optimization problem may be reduced from one of determining the optimal N × N transmit covariance matrix to that of determining an optimal vector of length N. Finally, we propose a low-complexity, greedy water-filling algorithm for general channel matrices. The proposed algorithm is shown to attain near-optimal rates in various scenarios through numerical simulations.

I. INTRODUCTION

A distributed MIMO channel is a model for studying MIMO cellular networks where perfect cooperation between base-stations is allowed. The entire cellular network can then be viewed as a single transmitter with a distributed network of antennas. For a standard MIMO channel, the transmitter is subject to an average power constraint across all transmit antennas. In contrast, for a distributed MIMO channel, an average power constraint is imposed on the transmit antennas of each base-station. It is well known that for the standard Gaussian MIMO channel, where the channel is constant and known perfectly at both the transmitter and the receiver, the optimal transmit covariance matrix may be easily determined by a water-filling algorithm [1]. However, the water-filling algorithm cannot be applied to a distributed Gaussian MIMO channel due to the per-base-station power constraints. In general, the capacity and transmitter optimization problem for a distributed Gaussian MIMO channel may be posed as a determinant maximization problem with linear matrix inequality (LMI) constraints. The determinant maximization problem with LMI constraints is a concave maximization problem, and one may apply the MAXDET program [2], which makes use of a polynomial-time interior-point algorithm. In practice, interior-point methods are known to be both computationally and memory intensive as they do not exploit the structure of the problem.

In this paper, our main focus is to exploit the structure of the distributed Gaussian MIMO channel in the transmitter optimization problem. We also propose an efficient greedy water-filling algorithm to compute an achievable rate as well as its corresponding transmit covariance matrix. The paper is organized as follows. In Section II, we describe the model for the distributed Gaussian MIMO channel. In Section III, we determine certain sufficient conditions for optimality. Furthermore, the sufficient conditions motivate the development of a water-filling algorithm that can be applied when H^T H is positive definite, where H is the composite channel matrix. We show that the symmetric matrix computed by the water-filling algorithm is the optimal transmit covariance matrix if it is also positive semi-definite and none of its diagonal elements are zero. In Section IV, we consider a special class of distributed Gaussian MIMO channels, where the unitary left singular vector matrices of all the individual channel matrices are the same. We obtain the capacity as well as the optimal transmit covariance matrix (of dimension N × N) via a concave vector maximization problem (of dimension not greater than N), rather than as a determinant maximization problem with LMI constraints. In Section V, we propose an efficient greedy water-filling algorithm to compute an achievable rate for the distributed Gaussian MIMO channel. The proposed algorithm is much simpler than the MAXDET program in terms of computational complexity and memory requirements. It is also shown to achieve near-optimal rates in different scenarios through numerical simulations.

A. Notation

In our notation, A, B, ... denote matrices, while a, b, ... denote vectors. We indicate the (i, j)-th element of the matrix A by A(i, j) and the i-th element of the vector a by a(i). We also use I to denote the identity matrix, R to denote the set of real numbers, R_+ to denote the set of non-negative real numbers and R^n to denote the set of real n-vectors.
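For reference, the classical water-filling solution mentioned in the introduction, which solves the standard single-user problem with one total power constraint, can be sketched as follows. This is our own illustrative code, not the paper's algorithm; the function and variable names are assumptions:

```python
import numpy as np

def waterfill_total_power(H, P, tol=1e-10):
    # Classical water-filling: maximize log det(I + H X H^T) s.t. Tr(X) <= P.
    # Illustrative sketch only.
    U, s, Vt = np.linalg.svd(H, full_matrices=False)   # H = U diag(s) Vt
    inv_gain = 1.0 / s**2                              # noise levels 1/h_n^2
    # Bisect on the water level w so that sum_n (w - 1/h_n^2)_+ = P.
    lo, hi = 0.0, np.max(inv_gain) + P
    while hi - lo > tol:
        w = 0.5 * (lo + hi)
        if np.sum(np.maximum(w - inv_gain, 0.0)) > P:
            hi = w
        else:
            lo = w
    p = np.maximum(0.5 * (lo + hi) - inv_gain, 0.0)    # per-eigenmode powers
    V = Vt.T
    return V @ np.diag(p) @ V.T                        # X = V diag(p) V^T

# Example: a random 4x4 channel with total power 10.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4))
X = waterfill_total_power(H, 10.0)
rate = np.log(np.linalg.det(np.eye(4) + H @ X @ H.T))
```

Under per-base-station constraints Tr(X_kk) ≤ P_k, this single-trace-constraint solution is no longer optimal, which is precisely the difficulty the algorithms of Sections III-V address.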

II. CHANNEL MODEL

We consider K base-stations, each equipped with M_tk transmit antennas, k = 1, 2, ..., K, and a single user equipped with M_r receive antennas. We can then write the channel output as follows:

  y = [y(1), y(2), ..., y(M_r)]^T = Σ_{k=1}^{K} H_k x_k + z    (1)

where the individual channel matrix between base-station k and the user is given by H_k, the input signal transmitted by base-station k is given by

  x_k = [x_k(1), x_k(2), ..., x_k(M_tk)]^T,  k = 1, 2, ..., K    (2)

and the noise vector at the receiver is given by

  z^T = [z(1), ..., z(M_r)]    (3)

where z(l), l = 1, 2, ..., M_r, are statistically independent Gaussian noise with zero mean and variance E[z(l)²] = σ_l². Furthermore, we impose an individual average power constraint for each base-station:

  E[x_k^T x_k] ≤ P_k,  k = 1, 2, ..., K.    (4)

For the sake of compactness, we denote the length of the composite input vector by N = Σ_k M_tk, the composite channel matrix by H = [H_1, H_2, ..., H_K], the composite input vector of length N by x^T = [x_1^T, x_2^T, ..., x_K^T], the composite transmit covariance matrix by X = E[x x^T] and the cross-covariance matrix between the input vector for base-station i and the input vector for base-station j by X_ij = E[x_i x_j^T]. Finally, we define the following function:

  r(i, j) = min(M_ti, M_tj).    (5)

Remark 1. By scaling each element y(l), l = 1, 2, ..., M_r, of the channel output vector by 1/σ_l, we obtain a new distributed Gaussian MIMO channel where the noise vector at the receiver consists of statistically independent Gaussian noise with zero mean and variance one. Furthermore, the capacity of the new distributed Gaussian MIMO channel is the same as that of the original channel. Hence, without loss of generality, we assume that σ_l = 1, l = 1, 2, ..., M_r, throughout the paper.

III. SUFFICIENT CONDITIONS FOR OPTIMALITY

The capacity of the distributed Gaussian MIMO channel given by (1) subject to the constraints (4) can be formulated as the following maximization problem:

  maximize    log det(I + H X H^T)
  subject to  Tr(X_kk) ≤ P_k,  k = 1, ..., K    (6)
              X ⪰ 0

As mentioned, the maximization problem (6) is concave and belongs to the class of determinant maximization problems with LMI constraints [2]:

  minimize    c^T x + log det G(x)^{-1}
  subject to  G(x) ≻ 0    (7)
              F(x) ⪰ 0

This class of problems can be solved by the MAXDET program described in [2]. The MAXDET program implements a primal-dual long-step interior-point algorithm and involves a predictor step after each centering step using Newton's method. The predictor step is based on the tangent to the primal and dual central path. However, interior-point algorithms, in general, do not exploit the structure of the problem; they are computationally expensive and cannot handle large problems. While the standard Gaussian MIMO channel problem can be solved using a water-filling method, transmitter optimization of the distributed Gaussian MIMO channel does not admit such a simple solution.

In this section, we exploit the structure of the distributed Gaussian MIMO channel to determine certain sufficient conditions for optimality. Even though the sufficient conditions are not necessary, they motivate the development of a simple water-filling algorithm which can be applied if H^T H is positive definite. Furthermore, the symmetric matrix determined by the water-filling algorithm satisfies the sufficient conditions if it is positive semi-definite and none of its diagonal elements are zero. The sufficient conditions for optimality are given by the following theorem:

Theorem 1. If H^T H ≻ 0, the matrix X is the optimal transmit covariance matrix if it satisfies the following conditions:

  X + (H^T H)^{-1} = Σ_{k=1}^{K} (1/λ_k) Λ_k    (8)
  Tr(X_kk) = P_k,  k = 1, 2, ..., K    (9)
  X ⪰ 0    (10)

where λ_k ≥ 0 and Λ_k is the N × N matrix with all entries 0 except the diagonal entries corresponding to the transmit antennas of base-station k, which are equal to 1.

Proof: We note that if H^T H ≻ 0, then there exists a positive definite matrix L of size N × N such that H^T H = LL [3, Theorem 7.2.6]. By Sylvester's determinant theorem, the concave maximization problem (6) can be reformulated as follows:

  maximize    log det(I + L X L)
  subject to  Tr(X_kk) ≤ P_k,  k = 1, ..., K    (11)
              X ⪰ 0

Since (11) is a concave maximization problem and satisfies Slater's condition, the KKT conditions are both sufficient and necessary:

  L (I + L X L)^{-1} L + θ = Σ_{k=1}^{K} λ_k Λ_k
  Tr(X_kk) = P_k,  k = 1, ..., K    (12)
  Tr(X θ) = 0
  X ⪰ 0,  θ ⪰ 0

Theorem 1 follows directly if θ = 0, and the conditions are then sufficient but not necessary.

Theorem 1 motivates the following simple water-filling algorithm when H^T H ≻ 0. The water-filling algorithm is applied to each base-station individually, where the noise levels in the various channels for each base-station are given by the corresponding diagonal elements of the matrix (H^T H)^{-1}.

Algorithm 1.
1) Set the off-diagonal elements of X equal to the negative of the off-diagonal elements of (H^T H)^{-1}.
2) Perform single-user water-filling for each individual base-station, which is equivalent to determining an appropriate λ_k such that

  Σ_{l=1}^{M_tk} [1/λ_k − (H^T H)^{-1}(Σ_{j=1}^{k−1} M_tj + l, Σ_{j=1}^{k−1} M_tj + l)]_+ = P_k,  k = 1, 2, ..., K    (13)

where (x)_+ denotes the positive part of x.
3) Set

  X_kk(l, l) = [1/λ_k − (H^T H)^{-1}(Σ_{j=1}^{k−1} M_tj + l, Σ_{j=1}^{k−1} M_tj + l)]_+    (14)

where l = 1, 2, ..., M_tk and k = 1, 2, ..., K.

Remark 2. The matrix determined by Algorithm 1 is a symmetric matrix and satisfies the conditions of Theorem 1 if it is also positive semi-definite and none of its diagonal elements are zero.

Remark 3. We note that Algorithm 1 can only be applied if H^T H ≻ 0. This condition can be readily satisfied if 1) the number of receive antennas is not less than the number of transmit antennas and 2) the individual elements of the composite channel matrix H are generated i.i.d. from a Gaussian distribution. We also note that the symmetric matrix determined by Algorithm 1 may not necessarily be a valid covariance matrix with all of its diagonal elements being non-zero. However, this condition can be readily satisfied if the individual power constraint for each base-station is large enough.

IV. A SPECIAL CLASS OF DISTRIBUTED GAUSSIAN MIMO CHANNELS

In this section, we consider the transmitter optimization for a special class of distributed Gaussian MIMO channels where the unitary left singular vector matrices of all the individual channels are the same. Specifically, applying the singular value decomposition to each of the individual channel matrices H_k, k = 1, 2, ..., K, we obtain

  H_k = U_k Λ_k V_k^T.    (15)

We are assuming that U = U_1 = U_2 = ... = U_K.

Remark 4. This includes the case where the individual channel matrices are the same, i.e., H_1 = H_2 = ... = H_K. This also includes the class of distributed Gaussian vector MISO channels [4]. For the class of distributed Gaussian vector MISO channels, the user has one receiving antenna, and each base-station transmits a vector-input at each antenna while the user receives a vector-output at its single antenna in each time instance.

For this special class of distributed Gaussian MIMO channels, the composite transmit covariance matrix (of size N × N) which attains capacity can be determined by a concave vector (of length not greater than N) maximization problem. This is given by the following theorem:

Theorem 2. When the unitary left singular vector matrices of all the individual channel matrices are the same, the capacity of the distributed Gaussian MIMO channel is given by the solution of the following concave vector maximization problem:

  maximize    f(e_1, ..., e_K) = Σ_{l=1}^{M_r} log(1 + Σ_{i,j s.t. l ≤ r(i,j)} Λ_i(l, l) √(e_i(l)) √(e_j(l)) Λ_j(l, l))
  subject to  Σ_{l=1}^{M_tk} e_k(l) ≤ P_k    (16)
              e_k ∈ R_+^{M_tk}

where U Λ_k V_k^T, k = 1, 2, ..., K, is the singular value decomposition of H_k. Denoting the N-vector achieving the maximum in (16) by (e*_1, e*_2, ..., e*_K), the optimal composite transmit covariance matrix X* is given by

  X* = diag(V_1, V_2, ..., V_K) X̃ diag(V_1, V_2, ..., V_K)^T    (17)

where X ij, i, j =,, K, is a diagonal matrix whose diagonal elements are given by X ij e (l, l) = i (l) e j (l), l =,,, r (i, j) Proof: Applying singular value decomposition to each of the individual channel matrices, we obtain H = [ ] H H H K V T 0 0 = [ ] 0 V T 0 UΛ UΛ UΛ K 0 0 VK T (8) Next, we note that the matrix to the extreme right of (8) is a unitary matrix and hence, the following matrix: V T 0 0 V 0 0 0 V T 0 X 0 V 0 (9) 0 0 VK T 0 0 V K is a positive semi-definite matrix satisfying the individual power constraint for each base-station This follows from the fact that Tr [ Vk T X ] [ ] kkv k = Tr Xkk V k Vk T = Tr[Xkk ], k =,,, K Therefore, (6) can be reformulated as follows: ( maximize log det I + ) UΛ i X ij Λ T j U T (0) subject to Tr(X kk ) P k, k =, K X 0 Furthermore, the objective function of (0) can be simplified as follows: log det I + UΛ i X ij Λ j U = log det I + Λ i X ij Λ T j + log det ( UU T ) = log det I + Λ i X ij Λ T j (a) l=m r log + Λ i (l, l) X ij (l, l) Λ j (l, l) l= (b) l=m r log + l= st l r() st l r() Λ i (l, l) ) X ii (l, l) X jj (l, l)λ j (l, l) () The inequality (a) follows from the Hadamard s determinant theorem which states that the determinant of a positive definite matrix is upper bounded by the product of its diagonal elements Denoting Λ = [Λ ; Λ ; ; Λ K ], we see that Λ i X ij Λ T j = ΛXΛ T is a positive semi-definite matrix and hence, I + Λ i X ij Λ T j is a positive definite matrix We also note that Λ k are diagonal matrices, ie, l l, Λ k (l,l )=0,wherel =,,, M r and l =,,, M tk The inequality (b) follows from the property of positive semidefinite matrices that X ij (l,l ) X ii (l,l ) X jj (l,l ), where l =,, M ti and l =,, M tj We have (a) to be an equality if we choose X ij to be a diagonal matrix, ie, l l, X ij (l,l )=0,wherel =,,, M ti and l =,,, M tj We also have (b) as an equality if we choose X ij (l, l) = X ii (l, l) X jj (l, l),where l =,,, r (i, j) Furthermore, this maintains the positive semi-definite property { 
of the matrix X We see this from the fact that for l,,, max M tk }, we may choose x k (l) = k a k w l, k st M tk l, wherea k Rand w l s are independent Gaussian random variables Finally, we note that the optimization problem in (6) is a concave maximization problem Let E k (P k ), k =,,, K, denote the space of non-negative vectors of length M tk satisfying the power constraint for base-station k, ie, e k (l) l=m tk l= P k LetE(P,P,, P K ) denote the product space E (P ) E (P ) E K (P K )Forany e, e E (P,P,, P K ) and t [0, ], we note that e = t e +( t) e E (P,P,, P K ) The concavity of the objective function of (6) follows from the following inequalities: ( tf e ) +( t) f ( e ) (a) l=m r log + Λ i (l, l) Λ j (l, l) l= (b) l=m r = f l= [t e i st l r() [ t e i (l) e j (l)+( t) log + Λ i (l, l) Λ j (l, l) st l r() e i (l) e j (l) ]) (l)+( t) e i (l)][ t e j (l)+( t) e j (l)]) ( e ) () where (a) follows from the fact that the determinant of a matrix is log-concave over the space of positive definite matrices and (b) follows from the fact that the geometric mean on R + is concave Remark 5 In general, one can apply Newton s method to solve the concave maximization problem in (6) In[4,

Section V], we propose a low-complexity, iterative numerical algorithm to obtain the global optimum V GREEDY WATER-FILLING ALGORITHM In Section III and Section IV, we dealt with special cases of the distributed Gaussian MIMO channel However, one must still resort to interior-point algorithms to solve the general case This motivates our development of a simple heuristic algorithm, called the greedy water-filling algorithm, for the general case The greedy water-filling algorithm can be thought of as a greedy approach to the maximization problem (6) by successively applying the single-user waterfilling algorithm First, we apply the single-user water-filling algorithm across all the base-stations until one of them satisfies its individual power constraint Specifically, let H = UΛV T be the singular value decomposition of H and let us denote V T X 0 V by ˆX 0 Similar to the case of single-user water-filling, we determine the diagonal matrix ˆX 0 = diag{d,d,, d N } such that d n + h = w, if n h <w, (3) n d n =0, if h w, (4) n where n =,,, min (M r,n), h n s are the singular values of H and w is the water-filling level We also set d n =0, if n>min (M r,n) (5) The water-filling level w is chosen such that X 0 = V ˆX 0 V T satisfies at least one of the power constraints with none of the other power constraints violated The base-station that satisfies its power constraint is then said to be filled Next, the channel matrix corresponding to the filled basestation(s), say it is the j th station, is taken out of the composite channel matrix H to form a new composite channel matrix H R Mr N,whereN = N M tj The objective function in (6) then becomes log det(i + HX 0 H T + H X H T ) (6) We note that S z = I + HX 0 H T is positive definite and hence, we may write S z = L,whereL is positive definite Denoting L H by Ĥ, we can then rewrite (6) as the following: log det(i + Ĥ X Ĥ T )+logdet( L ) We determine X (of size N N ) in the same manner as X 0 by applying water-filling across the 
remaining basestations until at least one of the remaining power constraints is satisfied and none of the other power constraints are violated At each stage of the algorithm, at least one base-station is filled and hence, the algorithm ends in no more than K stages The final computed covariance matrix is a result of summing all the intermediate X k s according to the appropriate indices An outline of the greedy water-filling algorithm is as follows: Algorithm ) Initialize k =0, Ĥ 0 = H and N 0 = N ) Let U k Λ k Vk T be the SVD of Ĥ k Determine the diagonal matrix ˆd k, 0 0 ˆX k = 0 ˆdk, (7) 0 0 ˆdk,Nk such that ˆd k,n + ĥ k,n = w k, if ˆd k,n =0, if ĥ k,n ĥ k,n <w k, (8) w k, (9) for n =,,, min (M r,n k ) and ˆd k,n =0, n > min (M r,n k ) (30) and where ĥk,n s are the singular values of Ĥ k and w k is the water-filling level such that X k = V k ˆXk Vk T (3) satisfies at least one of the remaining power constraints with none of the other power constraints being violated 3) If all the base-stations are filled, stop Otherwise, set k = k +LetH k be the composite channel matrix with columns corresponding to the filled base-stations removed and set where Repeat Step ) Ĥ k = L k H k (3) k L k = I + H j X j Hj T (33) j=0 We can always determine an appropriate water-level at each step of the greedy water-filling algorithm such that at least one of the remaining power constraints is satisfied while the other power constraints are not violated To see this, we note, from Lemma below, that we can always increase the water-level from 0 in each step of Algorithm until the condition is met Lemma The diagonal components of X k are continuous, monotonically increasing functions of the water-level w k Proof: From (8) and (9), we note that the diagonal components of ˆX k are continuous, monotonically increasing functions of the water-level w k Expanding the matrix multiplication for the diagonal of V k ˆX k Vk T we have X k (j, j) = Vk (j, ) ˆd k, + Vk (j, ) ˆd k, + + Vk (j, N k ) ˆd 
k,nk (34)

The lemma then follows directly from the fact that a sum of continuous, monotonically increasing functions of w_k is again a continuous, monotonically increasing function of w_k.

An important property of the proposed greedy water-filling algorithm is that the rate is improved at each stage. We state this formally in the following lemma:

Lemma 2. In the greedy water-filling algorithm, the objective function log det(I + H X H^T) is increased at each stage as new base-stations are filled.

Proof: This follows directly from the single-user water-filling algorithm.

A. Numerical Simulations

To compare the performance of the greedy water-filling algorithm with the capacity, numerical simulations were carried out, and the results are shown in Fig. 1 and Fig. 2. Here, we have 6 base-stations, the number of transmit antennas for each base-station is 4 and the number of receive antennas is 12. The channel matrix H is of size 12 × 24 and the individual entries of H are generated i.i.d. from a zero-mean Gaussian distribution with unit variance. For fixed power constraint levels, we take the average rates over 30 runs. We also include the sum-rate capacity for the MIMO MAC where there is no cooperation between the base-stations [5], as well as two upper bounds, one based on duality theory and the other based on combining the individual power constraints into a single trace power constraint [6].

[Fig. 1. Comparison of greedy water-filling (GWF) and true optimal rates, together with the duality-based upper bound, the single-trace-constraint upper bound and the MAC sum rate, plotted against the power constraint level; random H ∈ R^{12×24}, averaged over 30 runs (equal increasing power constraint level for all base-stations).]

[Fig. 2. Comparison of GWF and true optimal rates, together with the same bounds and the MAC sum rate, plotted against the power constraint level; random H ∈ R^{12×24}, averaged over 30 runs (increasing power constraint levels for each base-station, p·[1 : K], as p is varied from 1 to 30).]

The power constraint levels in Fig. 1 are set to be equal among all the base-stations and are varied from 1 to 30 for each base-station. The power constraint levels in Fig. 2 are set to be p·[1, 2, ..., K] and p is again varied from 1 to 30. From Fig. 1, we see that when the power constraint levels are equal among all the base-stations, the rate achieved by the greedy water-filling algorithm is almost identical to the optimal performance. From Fig. 2, when the power constraint levels vary among the base-stations, we see that there is only a slight gap between the optimal performance and the greedy water-filling algorithm. The results show that the greedy water-filling algorithm achieves rates close to capacity and, in both cases, outperforms the sum-rate capacity of the MIMO MAC where there is no cooperation among the base-stations.

ACKNOWLEDGMENT

This work was supported in part by the National University of Singapore and by the National Research Foundation under Grant No. NRF008NRF-POC00-078 (NUS WBS No. R-63-000-537-8).

REFERENCES

[1] R. G. Gallager, Information Theory and Reliable Communication. John Wiley and Sons, Inc., 1968.
[2] L. Vandenberghe, S. Boyd, and S.-P. Wu, "Determinant maximization with linear matrix inequality constraints," SIAM Journal on Matrix Analysis and Applications, vol. 19, pp. 499-533, 1998.
[3] R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge University Press, 1985.
[4] H. F. Chong, F. Nan, and M. Motani, "Capacity of the Distributed Gaussian Vector MISO Channel," submitted to ISIT 2010.
[5] W. Yu, W. Rhee, S. Boyd, and J. M. Cioffi, "Iterative Water-Filling for Gaussian Vector Multiple-Access Channels," IEEE Trans. Inform. Theory, vol. 50, no. 1, pp. 145-152, Jan. 2004.
[6] F. Nan, H. F. Chong, and M. Motani, "Greedy Water-filling for Multicell MIMO Channel Transmitter Optimization," submitted to ISIT 2010.