Mathematical methods in communication June 16th, Lecture 12

Mathematical methods in communication, June 16th, 2011, Lecture 12.
Lecturer: Haim Permuter. Scribes: Eynan Maydan and Asaf Aharon

I. MIMO - MULTIPLE INPUT MULTIPLE OUTPUT

MIMO is the use of multiple antennas at both the transmitter and the receiver to improve communication performance. It is one of several forms of smart antenna technology. MIMO technology has attracted attention in wireless communication because it offers significant increases in data throughput and link range without additional bandwidth or transmit power.

Fig. 1. MIMO illustration: three transmit and three receive antennas connected by channel gains $h_{11}, h_{12}, \dots, h_{33}$.

We have seen in the last lectures that for a Gaussian channel with power constraint $\frac{1}{n}\sum_{i=1}^{n} X_i^2 \le P$ and Gaussian noise $Z \sim \mathcal{N}(0,\sigma^2)$, the capacity is given by

$$C = \max_{P(X):\, E(X^2) \le P} I(X;Y) = \frac{1}{2}\log\left(1 + \frac{P}{\sigma^2}\right).$$

Let us check the case of several parallel channels with independent noise vectors $Z_1, Z_2, \dots, Z_m$. In the parallel case, the power of each channel is $P_i = \frac{1}{n}\sum_{j=1}^{n} X_{ij}^2$, where $\sum_{i=1}^{m} P_i \le P$ with probability one, and the capacity is given by

$$C = \max_{f(x_1,\dots,x_m):\, E\left(\sum_{i=1}^{m} X_i^2\right) \le P} I(X_1,\dots,X_m;\, Y_1,\dots,Y_m).$$
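As a quick numerical illustration (a minimal sketch, not from the lecture; the function name is mine), the scalar capacity formula can be evaluated directly, here with base-2 logarithms so the result is in bits per channel use:

```python
import numpy as np

def gaussian_capacity(P, sigma2):
    """Capacity of the scalar Gaussian channel, (1/2) log2(1 + P / sigma^2),
    in bits per channel use."""
    return 0.5 * np.log2(1.0 + P / sigma2)

# Example: power constraint P = 10, noise variance sigma^2 = 1.
print(gaussian_capacity(10.0, 1.0))  # ~1.73 bits per channel use
```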

Fig. 2. Parallel Gaussian channels.

After some calculations (described in Lecture 8), we get

$$C = \sum_{i=1}^{m} \frac{1}{2}\log\left(1 + \frac{P_i}{\sigma_i^2}\right).$$

According to the water-filling algorithm, the $P_i$ can be found by

$$P_i = \left[\nu - \sigma_i^2\right]^+ = \max\left(0,\ \nu - \sigma_i^2\right),$$

where the water level $\nu$ is chosen so that $P = \sum_i P_i$.
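To make the water-filling rule concrete, here is a minimal Python sketch (the names and the bisection approach are mine, not from the lecture) that finds the water level $\nu$ such that $\sum_i [\nu - \sigma_i^2]^+ = P$:

```python
import numpy as np

def water_filling(sigma2, P, tol=1e-10):
    """Find powers P_i = max(0, nu - sigma2_i) with sum(P_i) = P,
    by bisection on the water level nu."""
    sigma2 = np.asarray(sigma2, dtype=float)
    lo, hi = sigma2.min(), sigma2.max() + P   # nu must lie in this interval
    while hi - lo > tol:
        nu = 0.5 * (lo + hi)
        if np.maximum(0.0, nu - sigma2).sum() > P:
            hi = nu   # too much water: lower the level
        else:
            lo = nu   # not enough water: raise the level
    return np.maximum(0.0, 0.5 * (lo + hi) - sigma2)

sigma2 = np.array([1.0, 2.0, 4.0])            # noise variances of the sub-channels
powers = water_filling(sigma2, P=3.0)
capacity = 0.5 * np.log2(1.0 + powers / sigma2).sum()
print(powers, capacity)
```

For $\sigma^2 = (1, 2, 4)$ and $P = 3$ this gives $\nu = 3$ and powers $(2, 1, 0)$: the noisiest sub-channel receives no power.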

In MIMO systems, each output is affected by all channel inputs. The power constraint for this model is given by $\frac{1}{n}\sum_{i=1}^{t}\sum_{j=1}^{n} X_{i,j}^2 \le P$ with probability 1. The channel model can be presented as in Fig. 3.

Fig. 3. MIMO channel with two inputs $X_1, X_2$, gains $G_{11}, G_{12}, G_{21}, G_{22}$, noises $Z_1, Z_2$ and outputs $Y_1, Y_2$.

Note: In this section we suppose that $G$ is deterministic; therefore we can assume that $G$ is known at both the decoder and the encoder.

Mathematically, $Y^r$ is defined by

$$Y^r = G_{r \times t} X^t + Z^r,$$

where $X^t$ is the channel input, $G_{r \times t}$ is a constant channel gain matrix, and $Z \sim \mathcal{N}(0, K_z)$ is the Gaussian noise. The element $G_{jk}$ represents the gain of the channel from transmit antenna $k$ to receive antenna $j$.

Let us assume, without loss of generality, that $Z \sim \mathcal{N}(0, I)$ (we will see shortly that this assumption does not lead to any kind of loss). Indeed, the channel $Y = GX + Z$ with a general $K_z \succ 0$ can be transformed into the channel

$$\tilde{Y} = K_z^{-\frac{1}{2}} Y = K_z^{-\frac{1}{2}} G X + \tilde{Z}, \qquad \tilde{Z} = K_z^{-\frac{1}{2}} Z \sim \mathcal{N}(0, I_r).$$

Since $K_z$ is a covariance matrix, it is a symmetric matrix, i.e., $K_z = K_z^T$. Therefore we can write $K_z$ as

$$K_z = Q \Lambda Q^T,$$

where $Q$ is a unitary matrix ($QQ^T = I_r$) and $\Lambda$ is the eigenvalue matrix,

$$\Lambda = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & \lambda_n \end{pmatrix}.$$

Note that since $K_z$ is symmetric, $Q$ can be chosen orthogonal, so its transpose equals its inverse: $Q^T = Q^{-1}$. Similarly, $K_z^{\frac{1}{2}} = Q \Lambda^{\frac{1}{2}} Q^T$, where

$$\Lambda^{\frac{1}{2}} = \begin{pmatrix} \sqrt{\lambda_1} & 0 & \cdots & 0 \\ 0 & \sqrt{\lambda_2} & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & \sqrt{\lambda_n} \end{pmatrix}.$$

As we defined above, $\tilde{Y} = K_z^{-\frac{1}{2}} Y = K_z^{-\frac{1}{2}} G X + K_z^{-\frac{1}{2}} Z$. Now let us calculate the covariance matrix of $\tilde{Z} = K_z^{-\frac{1}{2}} Z$:

$$K_{\tilde{z}} = E[\tilde{Z}\tilde{Z}^T] = E\left[K_z^{-\frac{1}{2}} Z Z^T K_z^{-\frac{1}{2}}\right] = K_z^{-\frac{1}{2}} K_z K_z^{-\frac{1}{2}} \stackrel{(a)}{=} K_z^{-\frac{1}{2}} K_z^{\frac{1}{2}} K_z^{\frac{1}{2}} K_z^{-\frac{1}{2}} = I_{r \times r},$$

where (a) follows from $K_z = Q\Lambda Q^T = Q\Lambda^{\frac{1}{2}} Q^T Q \Lambda^{\frac{1}{2}} Q^T = K_z^{\frac{1}{2}} K_z^{\frac{1}{2}}$. The notation $I_{r \times r}$ denotes the identity matrix of dimension $r \times r$. Accordingly, we can conclude that the noise covariance can be assumed, without loss of generality, to be the identity matrix.
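The whitening argument is easy to check numerically. A minimal sketch (an arbitrary positive-definite $K_z$ is assumed purely for illustration) builds $K_z^{-1/2} = Q\Lambda^{-1/2}Q^T$ from the eigendecomposition and verifies that the covariance of $\tilde{Z}$ is the identity:

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary positive-definite noise covariance K_z (illustrative values).
A = rng.standard_normal((3, 3))
Kz = A @ A.T + 3 * np.eye(3)

# K_z = Q Lambda Q^T: symmetric eigendecomposition, Q orthogonal.
lam, Q = np.linalg.eigh(Kz)
Kz_inv_sqrt = Q @ np.diag(lam ** -0.5) @ Q.T    # K_z^{-1/2}

# Covariance of Z~ = K_z^{-1/2} Z is K_z^{-1/2} K_z K_z^{-1/2} = I.
cov_tilde = Kz_inv_sqrt @ Kz @ Kz_inv_sqrt
print(np.allclose(cov_tilde, np.eye(3)))        # True
```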

2-4 tr(k x) P h(y) h(z) (b) 2 log[(2πe)r K Y ] 2 log[(2πe)r ] = 2 log K Y (a) follows from P i = E[Xi 2 ], where, X is a vector of dimension t. (b) follows from h(y) 2 log[(2πe)r K Y ], where, r is the dimension of Y, and from the fact that Z has Noraml distribution. In order to find the capacity, we need to imize the expression 2 log K Y,or actually, to imize K Y : K Y = E[(GX +Z)(GX +Z) T ] = GK X G T + For that reason tr(k x)) P 2 log K Y tr(k x) P 2 log GK XG T + Note: SVD (Single Value Decomposition) - We know that every squared matrix can be decomposed as A n n = QΛQ. The SVD allows us to decompose unsquared matrix i.e. G r t = U r r Σ r t V T t t where U,V are unitary matrices (UU T =, VV T = ), and Σ is a diagonal matrix. Therefore, GG T = UΣV T VΣ T U T = UΣΣ T U T G T G = VΣ T U T UΣV T = VΣ T ΣV T Note that U and V can be calculated from the eigen vectors of GG T and G T G, respectively. Now, using the SVD, we can finally imize K Y : tr(k x)) P 2 log K Y tr(k x) P 2 log UΣV T K X VΣ T U T + (a) tr(k x) P 2 log UT UΣV T K X VΣ T U T U +U T U (b) tr( K x) P 2 log Σ K X Σ T + tr( K X) P 2 log Σ K X Σ T + (a) follows from the property from linear algebra A = UAU T = U A U T for square matrices. (b) follows from the fact that U is unitary matrix, therefore, U T U =. In addition, Kx is defined by

Note that, according to the trace properties,

$$\mathrm{tr}(K_X) = \mathrm{tr}(V V^T K_X) = \mathrm{tr}(V^T K_X V) = \mathrm{tr}(\tilde{K}_X).$$

Remark 1 - Let $K$ be a nonnegative definite symmetric $n \times n$ matrix and let $|K|$ denote the determinant of $K$. Then

$$|K| \le \prod_{i=1}^{n} K_{ii}.$$

Using the fact that $X$ has a normal distribution ($X^n \sim \mathcal{N}(0,K)$) and the remark above, we can continue maximizing $\frac{1}{2}\log|K_Y|$ as follows:

$$\max_{\mathrm{tr}(\tilde{K}_X) \le P} \frac{1}{2}\log\left|\Sigma \tilde{K}_X \Sigma^T + I\right| \stackrel{(a)}{\le} \max_{\mathrm{tr}(\tilde{K}_X) \le P} \sum_{i=1}^{\min(r,t)} \frac{1}{2}\log\left(\gamma_i^2 (\tilde{K}_X)_{ii} + 1\right) \stackrel{(b)}{=} \max_{\mathrm{tr}(\tilde{K}_X) \le P} \sum_{i=1}^{\min(r,t)} \frac{1}{2}\log\left(\gamma_i^2 P_i + 1\right) \stackrel{(c)}{=} \max_{\sum_i P_i \le P} \sum_{i=1}^{\min(r,t)} \frac{1}{2}\log\left(\gamma_i^2 P_i + 1\right),$$

where
(a) follows from the fact that $X$ has a normal distribution $X^n \sim \mathcal{N}(0,K)$ and from Remark 1; in addition, $\gamma_i = \Sigma_{ii}$;
(b) follows from defining $P_i = (\tilde{K}_X)_{ii}$;
(c) follows from $P_i = (\tilde{K}_X)_{ii}$, which implies $\sum_i P_i = \mathrm{tr}(\tilde{K}_X) \le P$.

Note that knowing the $\gamma_i$'s lets us solve the problem using the water-filling algorithm with the given constraints.

Example - Since $G = U\Sigma V^T$, the matrix

$$G_{4 \times 2} = \begin{pmatrix} 2 & 3 \\ 1 & 0 \\ 5 & 8 \\ 3 & 2 \end{pmatrix}$$

can be written as

$$G_{4 \times 2} = U \begin{pmatrix} \sqrt{K_1} & 0 \\ 0 & \sqrt{K_2} \\ 0 & 0 \\ 0 & 0 \end{pmatrix} V^T.$$

In this example, the SVD gives us two eigenvalues $K_1$ and $K_2$, which can easily be found with Matlab ($K_1 = 0.3559$, $K_2 = 2.9590$). These two eigenvalues let us find the $P_i$ using the water-filling algorithm, and then the diagonal matrix $\tilde{K}_X$. Then it is easy to extract $K_X$ by $K_X = V \tilde{K}_X V^T$.
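The following sketch carries the example out end to end: SVD of $G$, water-filling over the effective noise levels $1/\gamma_i^2$, and recovery of $K_X = V\tilde{K}_X V^T$. It reuses the water_filling helper from the earlier sketch. The matrix entries follow the reconstruction above (the source transcription is partly illegible), and the total power $P = 1$ is an assumed value that the notes do not specify, so the printed singular values need not reproduce the Matlab numbers quoted above:

```python
import numpy as np

G = np.array([[2.0, 3.0],
              [1.0, 0.0],
              [5.0, 8.0],
              [3.0, 2.0]])                 # entries as reconstructed above

U, gamma, Vt = np.linalg.svd(G)            # gamma_i = Sigma_ii
# Equivalent parallel channels: rate_i = (1/2) log(1 + gamma_i^2 P_i),
# i.e. effective noise variance 1/gamma_i^2 on sub-channel i.
sigma2_eff = 1.0 / gamma**2
powers = water_filling(sigma2_eff, P=1.0)  # water_filling from the earlier sketch

K_tilde = np.diag(powers)                  # optimal tilde K_X is diagonal
V = Vt.T
K_X = V @ K_tilde @ V.T                    # K_X = V tilde K_X V^T (2x2 input covariance)
print(gamma**2, powers)
print(K_X)
```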

A. Alternative proof of capacity

There is another way to calculate the capacity, by transforming the MIMO channel into a parallel channel. The MIMO channel is given by

$$Y = U \Sigma V^T X + Z.$$

Now define $\tilde{X} = V^T X$, and note that $E[\tilde{X}^T \tilde{X}] = E[X^T X]$. In addition, let us write

$$\tilde{Y} = U^T Y \stackrel{(a)}{=} U^T U \Sigma V^T X + U^T Z \stackrel{(b)}{=} \Sigma V^T X + U^T Z = \Sigma \tilde{X} + \tilde{Z},$$

where
(a) follows from $Y = U\Sigma V^T X + Z$;
(b) follows since $U$ is unitary, so $U^T U = I$.

Remark 2 - Since $U$ and $V$ are unitary matrices, multiplying by $U^T$ or $V^T$ does not add any power to the channel.

Lemma 1: The MIMO channel in Figure 4 has the same capacity as the parallel channel in Figure 5.

Fig. 4. MIMO channel: $X \to G \to Y$, with noise $Z$ added at the output.

Fig. 5. Parallel channel: $\tilde{X} \to V \to X \to G \to Y \to U^T \to \tilde{Y}$, with noise $Z$ added before the receiver rotation.
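A short numerical check of the transformation in Lemma 1 (illustrative values): design an input for the parallel channel, pass $X = V\tilde{X}$ through the MIMO channel, and verify that $\tilde{Y} = U^T Y$ equals $\Sigma\tilde{X} + \tilde{Z}$ and that no power was added:

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.standard_normal((4, 2))
U, gamma, Vt = np.linalg.svd(G)
Sigma = np.zeros((4, 2)); Sigma[:2, :2] = np.diag(gamma)

x_tilde = rng.standard_normal(2)     # input designed for the parallel channel
x = Vt.T @ x_tilde                   # X = V X~
z = rng.standard_normal(4)           # Z ~ N(0, I)
y = G @ x + z                        # MIMO channel output
y_tilde = U.T @ y                    # receiver rotation

print(np.allclose(y_tilde, Sigma @ x_tilde + U.T @ z))           # True: parallel channel
print(np.isclose(np.linalg.norm(x), np.linalg.norm(x_tilde)))    # True: same power
```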

The solution to the MIMO channel can therefore be obtained as follows:
1) Solve the parallel channel in Figure 5 using water-filling.
2) The input to the MIMO channel is obtained by $X = V\tilde{X}$.

Having these definitions lets us analyze the channel from $\tilde{X}$ to $\tilde{Y}$ (a parallel channel), where $\tilde{Y} = U^T Y$, instead of analyzing the MIMO channel directly.

II. MIMO WITH FADING

Now we assume that $G$ is random. This assumption leads to three cases:
1) $G$ is not known to the transmitter or the receiver.
2) $G$ is known only to the receiver.
3) $G$ is known to both the transmitter and the receiver.

Case I: In this case we will not necessarily achieve the maximum capacity, since the input cannot be adapted to $G$; it can depend only on the distribution of $G$, not on its realization.

Case II:

$$C = \max_{\mathrm{tr}(K_X) \le P} I(X;Y).$$

Case III:

$$C = \max_{\mathrm{tr}(K_X) \le P} I(X;Y|G) = \max_{\mathrm{tr}(K_X) \le P} \big(h(Y|G) - h(Y|G,X)\big) = \max_{\mathrm{tr}(K_X) \le P} \left(\sum_g P(g)\, h(Y|G=g) - h(Z)\right) = \max_{\mathrm{tr}(K_X) \le P} \left(\sum_g P(g)\, h(gX + Z) - h(Z)\right),$$

and since the transmitter also knows $G$, the input may be adapted to each realization:

$$C = \max_{P(x|g):\, \mathrm{tr}(K_x) \le P} I(X;Y|G).$$
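As a hedged illustration of Case III (not derived in these notes): the capacity with full channel knowledge is often estimated by Monte Carlo, water-filling for each realization of $G$ and averaging. The fading law (i.i.d. standard Gaussian entries) and the per-realization power constraint are assumptions of this sketch, not statements from the lecture; it reuses water_filling from the earlier sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

def mimo_capacity_csit(G, P):
    """Capacity of Y = GX + Z with Z ~ N(0, I) for a known G:
    water-fill over the squared singular values of G."""
    gamma2 = np.linalg.svd(G, compute_uv=False) ** 2
    powers = water_filling(1.0 / gamma2, P)      # from the earlier sketch
    return 0.5 * np.log2(1.0 + gamma2 * powers).sum()

# Monte Carlo estimate of E_G[ max I(X;Y|G=g) ] for Rayleigh-like fading
# (assumed model: i.i.d. N(0,1) entries, power P per realization).
P, r, t, n_trials = 10.0, 4, 2, 2000
caps = [mimo_capacity_csit(rng.standard_normal((r, t)), P) for _ in range(n_trials)]
print(np.mean(caps))   # ergodic capacity estimate, bits per channel use
```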