Proof of Two Conclusions Associated with Linear Minimum Mean Square Estimation by the Matrix Inverse Lemma

Jianping Zheng
State Key Lab of ISN, Xidian University, Xi'an, 710071, P. R. China
jpzheng@xidian.edu.cn

May 6, 2017

(Citation: Jianping Zheng, "Proof of Two Conclusions Associated with Linear Minimum Mean Square Estimation by the Matrix Inverse Lemma," Lecture Note, http://web.xidian.edu.cn/jpzheng/teach.html)

Linear minimum mean square error (LMMSE) estimation is a classic estimation algorithm. Here, the matrix inverse lemma is applied to prove two associated conclusions. The first is that joint LMMSE estimation is information-theoretically optimal in linear Gaussian channels. The second is that LMMSE estimation is equivalent to the estimation consisting of noise whitening followed by matched filtering (W-MF) in linear Gaussian channels.

I. LMMSE for Linear Gaussian Channels

Consider the linear Gaussian channel

    y = Hx + w                                                                  (1)

where x is an N-dimensional state vector, y is an M-dimensional (M ≥ N) observation vector, H is the M×N observation matrix, and w is an M-dimensional additive white Gaussian noise (AWGN) vector with w ~ CN(0, σ_w^2 I_M). Given the first two moments of x and y, the LMMSE estimate of x can be expressed as [1]

    x̂ = μ_x + Σ_xy Σ_y^{-1} (y − μ_y)                                          (2)

where μ_x = E[x] and μ_y = E[y] are the expectations of x and y, respectively,
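As a quick illustration of the estimator (2), the following sketch compares the LMMSE estimate against the trivial prior-mean estimate. Real-valued data, zero means, and an identity prior autocorrelation Σ_x are assumptions made here for simplicity; in the complex case of the note, transposes become conjugate transposes.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, sigma_w = 3, 5, 0.5              # illustrative dimensions and noise level

H = rng.standard_normal((M, N))        # observation matrix of (1)
Sigma_x = np.eye(N)                    # prior autocorrelation of x (assumed identity)

# Second-order statistics implied by y = Hx + w with zero-mean x and w
Sigma_y  = H @ Sigma_x @ H.T + sigma_w**2 * np.eye(M)
Sigma_xy = Sigma_x @ H.T               # cross-correlation of x and y

def lmmse(y):
    """LMMSE estimate of (2) for zero means: x_hat = Sigma_xy Sigma_y^{-1} y."""
    return Sigma_xy @ np.linalg.solve(Sigma_y, y)

# Monte Carlo comparison against the prior-mean estimate x_hat = 0
mse_lmmse = mse_prior = 0.0
for _ in range(2000):
    x = rng.standard_normal(N)
    y = H @ x + sigma_w * rng.standard_normal(M)
    mse_lmmse += np.sum((lmmse(y) - x) ** 2)
    mse_prior += np.sum(x ** 2)
assert mse_lmmse < mse_prior
```

Using `np.linalg.solve` instead of forming Σ_y^{-1} explicitly is a standard numerical choice; the estimator itself is exactly (2).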
Σ_xy = E[(x − μ_x)(y − μ_y)^H] is the cross-correlation matrix of x and y, and Σ_y = E[(y − μ_y)(y − μ_y)^H] is the autocorrelation matrix of y.

II. Matrix Inverse Lemma

Lemma 1 (matrix inverse lemma [2]): For matrices A, B, C, and D of proper sizes, with A and C invertible, it holds that

    (A + BCD)^{-1} = A^{-1} − A^{-1} B (C^{-1} + D A^{-1} B)^{-1} D A^{-1}.     (3)

A simplified, rank-one version of this lemma is

    (A + c bb^H)^{-1} = A^{-1} − (c A^{-1} b b^H A^{-1}) / (1 + c b^H A^{-1} b)  (4)

where b is a vector and c is a scalar; in particular, for A = I the denominator reduces to 1 + c‖b‖^2, with ‖b‖ the Euclidean norm of b.

III. The Optimality of Joint LMMSE

The information-theoretic optimality of LMMSE successive decoding in the linear Gaussian channel has been discussed in several scenarios [3]-[6]. However, to the best of our knowledge, a direct proof of the optimality of joint LMMSE detection has not been reported. Here, such a proof is given by utilizing the relation between the differential entropy and the determinant of the autocorrelation matrix, together with the matrix inverse lemma. The linear Gaussian multiple-input multiple-output (MIMO) channel is taken as a practical example to facilitate the presentation.

Theorem 1: The joint LMMSE estimation is information-theoretically optimal for linear Gaussian channels.

Proof: Take the linear Gaussian MIMO channel as an example. The x, y, and H in (1) can then be interpreted as the transmit signal, the receive signal, and the MIMO channel matrix, respectively. The mutual information between x and y, conditioned on a fixed H, is, from [7],

    I(x; y) = log |I_M + σ_w^{-2} H Σ_x H^H|                                    (5)

with |·| denoting the determinant of the matrix argument and Σ_x the autocorrelation matrix of x.
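Lemma 1 is easy to check numerically. In the sketch below (a real-valued illustration, not part of the original note), A and C are chosen positive definite and D = B^T so that every inverse involved is guaranteed to exist; both the general form (3) and the rank-one form (4) are verified.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 2

# A and C positive definite so that all inverses below exist
A = np.diag(rng.uniform(1.0, 2.0, n))
C = np.diag(rng.uniform(1.0, 2.0, m))
B = rng.standard_normal((n, m))
D = B.T                                   # D = B^T keeps A + BCD positive definite

Ainv, Cinv = np.linalg.inv(A), np.linalg.inv(C)

# Lemma (3): (A + BCD)^{-1} = A^{-1} - A^{-1} B (C^{-1} + D A^{-1} B)^{-1} D A^{-1}
lhs = np.linalg.inv(A + B @ C @ D)
rhs = Ainv - Ainv @ B @ np.linalg.inv(Cinv + D @ Ainv @ B) @ D @ Ainv
assert np.allclose(lhs, rhs)

# Rank-one version (4): (A + c b b^T)^{-1}
b, c = rng.standard_normal(n), 0.7
lhs1 = np.linalg.inv(A + c * np.outer(b, b))
rhs1 = Ainv - c * np.outer(Ainv @ b, Ainv @ b) / (1 + c * b @ Ainv @ b)
assert np.allclose(lhs1, rhs1)
```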
On the other hand, as shown in Fig. 1, consider the concatenated system formed by the linear Gaussian channel followed by the LMMSE estimator. Its input and output variables are x and x̂, respectively. The proof of Theorem 1 therefore reduces to showing that I(x; y) = I(x; x̂), i.e., that the LMMSE estimator is information-lossless.

    x --> [Linear Gaussian Channel] --y--> [LMMSE Estimator] --> x̂

    Fig. 1. LMMSE estimator in a fixed Gaussian MIMO channel.

Define the estimation error as

    e = x − x̂.                                                                 (6)

Then, I(x; x̂) can be computed by

    I(x; x̂) = h(x) − h(x | x̂)
            = h(x) − h(x − x̂ | x̂)
            = h(x) − h(e | x̂)
            = h(x) − h(e)                                                       (7)

where h(·) denotes the differential entropy of the argument. The fourth line follows from the fact that the LMMSE error e is uncorrelated with x̂ (the orthogonality principle) and, being jointly Gaussian with x̂, is therefore independent of x̂. Since the differential entropy per complex dimension of a circularly symmetric Gaussian variable with variance σ^2 is log(πe σ^2), it has

    h(x) = log((πe)^N |Σ_x|)                                                    (8)
    h(e) = log((πe)^N |Σ_e|)                                                    (9)

where Σ_x and Σ_e are the autocorrelation matrices of x and e, respectively. In the LMMSE estimation, from (2), it has

    Σ_e = Σ_x − Σ_xy Σ_y^{-1} Σ_yx = Σ_x − Σ_x H^H Σ_y^{-1} H Σ_x              (10)

and, from (1),

    Σ_y = H Σ_x H^H + σ_w^2 I_M.                                                (11)
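Two facts in this argument, the orthogonality of e and x̂ behind the last line of (7), and the losslessness claim I(x; y) = I(x; x̂) itself, can be checked numerically. A sketch under simplifying assumptions (real-valued data and Σ_x = I, so transposes stand in for conjugate transposes):

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, sigma_w = 3, 6, 0.8
H = rng.standard_normal((M, N))
Sigma_x = np.eye(N)

Sigma_y  = H @ Sigma_x @ H.T + sigma_w**2 * np.eye(M)   # autocorrelation of y, (11)
Sigma_xy = Sigma_x @ H.T                                # cross-correlation of x and y
G = Sigma_xy @ np.linalg.inv(Sigma_y)                   # LMMSE matrix: x_hat = G y

# Error autocorrelation, (10)
Sigma_e = Sigma_x - Sigma_xy @ np.linalg.inv(Sigma_y) @ Sigma_xy.T

# Orthogonality principle: E[e x_hat^T] = Sigma_xy G^T - G Sigma_y G^T = 0
cross = Sigma_xy @ G.T - G @ Sigma_y @ G.T
assert np.allclose(cross, 0.0, atol=1e-9)

# Information-losslessness: log|Sigma_x| - log|Sigma_e| matches (5)
logdet = lambda A: np.linalg.slogdet(A)[1]
I_xy   = logdet(np.eye(M) + H @ Sigma_x @ H.T / sigma_w**2)
I_xhat = logdet(Sigma_x) - logdet(Sigma_e)
assert np.isclose(I_xy, I_xhat)
```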
Then, (7) can be further computed by

    I(x; x̂) = log |Σ_x| − log |Σ_e|
            = log |Σ_x| − log |Σ_x − Σ_x H^H (H Σ_x H^H + σ_w^2 I_M)^{-1} H Σ_x|
       (i)  = log |Σ_x| − log |(Σ_x^{-1} + σ_w^{-2} H^H H)^{-1}|
       (ii) = log |Σ_x| + log |Σ_x^{-1} + σ_w^{-2} H^H H|
            = log |I_N + σ_w^{-2} Σ_x H^H H|
       (iii)= log |I_N + σ_w^{-2} H^H H Σ_x|
       (iv) = log |I_M + σ_w^{-2} H Σ_x H^H|
            = I(x; y).                                                          (12)

Here, the second line is from (10) and (11); (i) is from the matrix inverse lemma (3) with A = Σ_x^{-1}, B = H^H, C = σ_w^{-2} I_M, and D = H; (ii) is from |A^{-1}| = |A|^{-1}; and (iii) and (iv) follow from the identity |I + AB| = |I + BA|. This completes the proof. ∎

IV. The Equivalence of LMMSE and W-MF

Define x_k as the k-th entry of the N-dimensional state vector x, i.e., x = (x_1, x_2, …, x_N)^T, and consider the LMMSE estimation of x_k. It is convenient to rewrite (1) as

    y = h_k x_k + Σ_{j≠k} h_j x_j + w = h_k x_k + z,   z ≜ Σ_{j≠k} h_j x_j + w   (13)

where h_j denotes the j-th column of H. Without loss of generality, x is assumed to be zero-mean, i.e., μ_x = 0, with mutually uncorrelated entries.
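The decomposition (13) can be illustrated directly. A minimal real-valued sketch (the index k = 1 and the dimensions are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
N, M, sigma_w, k = 4, 6, 0.7, 1
H = rng.standard_normal((M, N))

x = rng.standard_normal(N)                 # zero-mean state vector
w = sigma_w * rng.standard_normal(M)
y = H @ x + w                              # channel model (1)

h_k = H[:, k]                              # k-th column of H
z = sum(H[:, j] * x[j] for j in range(N) if j != k) + w   # interference-plus-noise term of (13)
assert np.allclose(y, h_k * x[k] + z)      # y = h_k x_k + z
```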
The autocorrelation matrix of z is

    Σ_z = E[(z − μ_z)(z − μ_z)^H] = E[z z^H] = Σ_{j≠k} P_j h_j h_j^H + σ_w^2 I_M   (14)

where P_j = E[|x_j|^2]. In [7], the LMMSE estimate of x_k is realized by the W-MF procedure described as follows. First, the noise whitening (W) is performed as

    ỹ = Σ_z^{-1/2} y = Σ_z^{-1/2} h_k x_k + Σ_z^{-1/2} z.                       (15)

Then, a matched filter (MF) is applied as

    x̃_k = (Σ_z^{-1/2} h_k)^H ỹ = h_k^H Σ_z^{-1} h_k x_k + h_k^H Σ_z^{-1} z.    (16)

From (15) and (16), the W-MF estimate vector is v_W-MF = Σ_z^{-1} h_k, since x̃_k = v_W-MF^H y. In [7], the claim that W-MF is equivalent to LMMSE is based on the relation between the signal-to-noise ratio (SNR) and the mean square error (MSE) (Exercise 8.18 of [7]). Here, this conclusion is presented as Theorem 2 and an alternative proof is given.

Theorem 2: The W-MF estimation is equivalent to the LMMSE estimation in the linear Gaussian channel.

Proof: With the zero-mean assumption, the standard LMMSE estimate vector from (2) is v_LMMSE = Σ_y^{-1} Σ_yx_k, i.e., x̂_k = v_LMMSE^H y. Therefore, the proof of Theorem 2 is equivalent to showing that v_LMMSE = c · v_W-MF for some scalar c. Note that

    Σ_yx_k = E[y x_k^*] = P_k h_k                                               (17)
    Σ_y = E[y y^H] = E[(h_k x_k + z)(h_k x_k + z)^H] = P_k h_k h_k^H + Σ_z.     (18)

It has

    v_LMMSE = P_k (P_k h_k h_k^H + Σ_z)^{-1} h_k.                               (19)

Using the matrix inverse lemma (4) with A = Σ_z, b = h_k, and c = P_k, it has

    (P_k h_k h_k^H + Σ_z)^{-1} = Σ_z^{-1} − (P_k Σ_z^{-1} h_k h_k^H Σ_z^{-1}) / (1 + P_k h_k^H Σ_z^{-1} h_k).   (20)

Then,

    v_LMMSE = P_k [Σ_z^{-1} h_k − (P_k Σ_z^{-1} h_k h_k^H Σ_z^{-1} h_k) / (1 + P_k h_k^H Σ_z^{-1} h_k)]
            = P_k [1 − (P_k h_k^H Σ_z^{-1} h_k) / (1 + P_k h_k^H Σ_z^{-1} h_k)] Σ_z^{-1} h_k
            = [P_k / (1 + P_k h_k^H Σ_z^{-1} h_k)] Σ_z^{-1} h_k
            = [P_k / (1 + P_k h_k^H Σ_z^{-1} h_k)] v_W-MF.                      (21)
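The scalar relation between the two estimate vectors in (21) can be verified numerically. A real-valued sketch (the powers P_j, dimensions, and index k are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, sigma_w, k = 4, 6, 0.7, 0
H = rng.standard_normal((M, N))
P = rng.uniform(0.5, 2.0, N)                          # entry powers P_j = E[x_j^2]

h_k = H[:, k]
Sigma_z = sum(P[j] * np.outer(H[:, j], H[:, j])
              for j in range(N) if j != k) + sigma_w**2 * np.eye(M)   # (14)
Sigma_y = P[k] * np.outer(h_k, h_k) + Sigma_z                         # (18)

v_lmmse = P[k] * np.linalg.solve(Sigma_y, h_k)        # LMMSE vector, (19)
v_wmf   = np.linalg.solve(Sigma_z, h_k)               # W-MF vector Sigma_z^{-1} h_k

scale = P[k] / (1 + P[k] * h_k @ v_wmf)               # scalar of (21)
assert np.allclose(v_lmmse, scale * v_wmf)
```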
That is,

    v_LMMSE = [P_k / (1 + P_k h_k^H Σ_z^{-1} h_k)] v_W-MF,                      (22)

i.e., the LMMSE and W-MF estimate vectors are identical up to a positive scalar. This completes the proof. ∎

V. Conclusions

Applying the matrix inverse lemma, two conclusions associated with LMMSE estimation were proved. First, the information-theoretic optimality of joint LMMSE estimation in the linear Gaussian channel was proved based on the relation between the differential entropy and the determinant of the autocorrelation matrix. Second, an alternative proof of the equivalence of W-MF and LMMSE was given based on the linear relation between the two estimate vectors.

References

[1] H. L. Van Trees, Detection, Estimation, and Modulation Theory, Part I. New York: Wiley, 1968.
[2] N. J. Higham, Accuracy and Stability of Numerical Algorithms, 2nd ed. Philadelphia, PA: SIAM, 2002.
[3] G. D. Forney, Jr., "Shannon meets Wiener II: On MMSE estimation in successive decoding schemes," in Proc. 2004 Allerton Conference, Monticello, IL, Oct. 2004.
[4] J. M. Cioffi and G. D. Forney, Jr., "Generalized decision-feedback equalization for packet transmission with ISI and Gaussian noise," in Communications, Computation, Control and Signal Processing (A. Paulraj et al., eds.), pp. 79-127.
[5] J. M. Cioffi, G. P. Dudevoir, M. V. Eyuboglu, and G. D. Forney, Jr., "MMSE decision-feedback equalizers and coding. Part I: Equalization results; Part II: Coding results," IEEE Trans. Commun., vol. 43, pp. 2581-2604, Oct. 1995.
[6] T. Guess and M. K. Varanasi, "An information-theoretic framework for deriving canonical decision-feedback receivers in Gaussian channels," IEEE Trans. Inf. Theory, vol. 51, no. 1, pp. 173-187, Jan. 2005.
[7] D. Tse and P. Viswanath, Fundamentals of Wireless Communication. Cambridge, UK: Cambridge University Press, 2005.