Estimation techniques

March 2, 2006

Contents

1 Problem Statement
2 Bayesian Estimation Techniques
  2.1 Minimum Mean Squared Error (MMSE) estimation
    2.1.1 General formulation
    2.1.2 Important special case: Gaussian model
    2.1.3 Example: Linear Gaussian Model
  2.2 Linear estimation
  2.3 Linear MMSE Estimator
  2.4 Orthogonality Condition Principle
    2.4.1 Example: Linear Model
3 Non-Bayesian Estimation Techniques
  3.1 The BLU estimator
    3.1.1 Example: Linear Gaussian Model
  3.2 The LS estimator
    3.2.1 Orthogonality condition
4 Extension to vector estimation
5 Further problems
  5.1 Problem 1: relations for Linear Gaussian Models
    5.1.1 No a priori information: MMSE → LS
    5.1.2 Uncorrelated noise: BLU → LS
    5.1.3 Zero noise: MMSE → LS
  5.2 Problem 2: orthogonality
  5.3 Practical example: multi-antenna communication
    5.3.1 Problem
    5.3.2 Solution

1 Problem Statement

We are interested in estimating a continuous scalar parameter a ∈ A from a vector observation r. (We will only consider estimation of scalars; the extension to vectors is straightforward.) The observation and the parameter are related through a probabilistic mapping p_{R|A}(r|a). The parameter a may or may not be a sample of a random variable; this leads to Bayesian and non-Bayesian estimators, respectively.

Special cases:

- Noisy observation model: r = f(a, n), where n (the noise component) is independent of a and has pdf p_N(n).
- Linear model: r = ha + n, for a known h, with n independent of a.
- Linear Gaussian model: a linear model with N ~ N(0, Σ_N) and A ~ N(m_A, Σ_A).

2 Bayesian Estimation Techniques

Here, a ∈ A has a known a priori distribution p_A(a). The most important Bayesian estimators are

- the MAP (maximum a posteriori) estimator,
- the MMSE (minimum mean squared error) estimator,
- the linear MMSE estimator.

The latter two will be covered in this section. We remind that the MAP estimate is given by

  â_MAP(r) = arg max_{a ∈ A} p_{A|R}(a|r).

2.1 Minimum Mean Squared Error (MMSE) estimation

2.1.1 General formulation

The MMSE estimator minimizes the expected squared estimation error

  C = ∫_R ∫_A (a - â(r))² p_{A|R}(a|r) p_R(r) da dr = E{(A - â(R))²}.

Taking the derivative (assuming this is possible) w.r.t. â(r) and setting the result to zero yields

  â(r) = ∫_A a p_{A|R}(a|r) da.
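The general formula can be evaluated numerically when no closed form is available. Below is a minimal sketch (not part of the original notes; the model and all values are illustrative) that approximates â(r) = ∫_A a p_{A|R}(a|r) da on a grid, for the scalar model r = a + n with a ~ N(0, 1) and n ~ N(0, 0.5), and compares it with the Gaussian closed form derived in the next subsection.

```python
import numpy as np

# Sketch: approximate a_hat(r) = integral of a * p(a|r) da on a grid,
# for the scalar model r = a + n with a ~ N(0, 1) and n ~ N(0, 0.5).
r = 1.3
a_grid = np.linspace(-6.0, 6.0, 2001)
da = a_grid[1] - a_grid[0]

prior = np.exp(-a_grid**2 / 2.0)                   # p_A(a), unnormalized
likelihood = np.exp(-(r - a_grid)**2 / (2 * 0.5))  # p_{R|A}(r|a), unnormalized

posterior = prior * likelihood
posterior /= posterior.sum() * da                  # normalize p_{A|R}(a|r)

a_mmse = (a_grid * posterior).sum() * da           # posterior mean

# Closed form for this jointly Gaussian case: Sigma_A (Sigma_A + Sigma_N)^{-1} r
print(a_mmse, r / 1.5)                             # both are approximately 0.867
```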

2.1.2 Important special case: Gaussian model

When A and R are jointly Gaussian, we can write

  [A, R]^T ~ N(m, Σ)

where

  m = [m_A, m_R]^T,   Σ = [Σ_A, Σ_AR^T; Σ_AR, Σ_R].

In that case it can be shown (after some straightforward manipulation) that

  A|R ~ N(m_A + Σ_AR^T Σ_R^{-1} (r - m_R), Σ_A - Σ_AR^T Σ_R^{-1} Σ_AR)

so that

  â_MMSE(r) = m_A + Σ_AR^T Σ_R^{-1} (r - m_R).   (1)

Note that

- A posteriori (after observing r), the uncertainty w.r.t. a is reduced from Σ_A to Σ_A - Σ_AR^T Σ_R^{-1} Σ_AR < Σ_A.
- When A and R are independent, Σ_AR = 0, so that â_MMSE(r) = m_A.
- For A and R jointly Gaussian, â_MAP(r) = â_MMSE(r).
- The estimator is linear in the observation.

2.1.3 Example: Linear Gaussian Model

Problem: Derive the MMSE estimator for the Linear Gaussian Model.

Solution: We know that R = hA + N with A ~ N(m_A, Σ_A), N ~ N(0, Σ_N), and A and N independent. We also know that the MMSE estimator is given by (1). Let us compute m_R, Σ_R and Σ_AR:

  m_R = E{R} = h m_A

  Σ_AR = E{(R - m_R)(A - m_A)}
       = E{(hA + N - h m_A)(A - m_A)}
       = E{(h(A - m_A) + N)(A - m_A)}
       = h E{(A - m_A)²} + E{N(A - m_A)}
       = h Σ_A

  Σ_R = E{(R - m_R)(R - m_R)^T}
      = E{(h(A - m_A) + N)(h(A - m_A) + N)^T}
      = h E{(A - m_A)²} h^T + E{N N^T}
      = h Σ_A h^T + Σ_N.

This leads to

  â_MMSE(r) = m_A + Σ_AR^T Σ_R^{-1} (r - m_R) = m_A + Σ_A h^T (h Σ_A h^T + Σ_N)^{-1} (r - h m_A).
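As a numerical companion (not from the notes; h, m_A, Σ_A and Σ_N are arbitrary illustrative values), the sketch below draws one realization of the linear Gaussian model and evaluates the formula just derived:

```python
import numpy as np

# Sketch: MMSE estimate for the linear Gaussian model r = h*a + n
# with scalar a and a 3-dimensional observation r.
rng = np.random.default_rng(0)

h = np.array([1.0, 0.5, -0.3])       # known vector h
m_A, Sigma_A = 2.0, 1.5              # prior mean and variance of A
Sigma_N = 0.2 * np.eye(3)            # noise covariance

a = m_A + np.sqrt(Sigma_A) * rng.standard_normal()
r = h * a + rng.multivariate_normal(np.zeros(3), Sigma_N)

# a_hat = m_A + Sigma_A h^T (h Sigma_A h^T + Sigma_N)^{-1} (r - h m_A)
Sigma_R = Sigma_A * np.outer(h, h) + Sigma_N
a_hat = m_A + Sigma_A * h @ np.linalg.solve(Sigma_R, r - h * m_A)
print(a, a_hat)
```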

2.2 Linear estimation

In some cases, it is preferred to have an estimator which is a linear function of the observation:

  â(r) = b^T r + c

so that â(r) is obtained through an affine transformation of the observation. Clearly, when A and R are jointly Gaussian, the MMSE estimator is a linear estimator with

  b^T = Σ_AR^T Σ_R^{-1},   c = m_A - Σ_AR^T Σ_R^{-1} m_R.

2.3 Linear MMSE Estimator

When A and R are not jointly Gaussian, we can still use (1) as an estimator. This is known as the Linear MMSE (LMMSE) estimator:

  â_LMMSE(r) = m_A + Σ_AR^T Σ_R^{-1} (r - m_R).

Theorem: Suppose we wish to find an estimator with the following properties:

- the estimator must be linear in the observation;
- E{â(R)} = m_A (this can be interpreted as a form of unbiasedness);
- the estimator has the minimum MSE among all linear estimators.

Then the estimator is the LMMSE estimator.

Proof: Given a generic linear estimator

  â(r) = b^T r + c   (2)

we would like to find b and c such that the resulting estimator is unbiased and has minimum variance. Clearly E{â(R)} = b^T m_R + c, so that b and c have to satisfy

  b^T m_R + c = m_A,   i.e.   c = m_A - b^T m_R,

and hence

  c² = m_A² + b^T m_R m_R^T b - 2 m_A b^T m_R.

We also know that

  Σ_R = E{(R - m_R)(R - m_R)^T} = E{R R^T} - m_R m_R^T
  Σ_AR = E{(R - m_R)(A - m_A)} = E{R A} - m_R m_A.

Let us now look at the variance of the estimation error (the estimation error is zero-mean):

  V = E{(â(R) - A)²}
    = E{(b^T R + c - A)²}
    = b^T E{R R^T} b + E{A²} + c² + 2c b^T m_R - 2c m_A - 2 b^T E{R A}
    = b^T (Σ_R + m_R m_R^T) b + Σ_A + m_A² + c² + 2c (b^T m_R - m_A) - 2 b^T (Σ_AR + m_R m_A)
    = b^T (Σ_R + m_R m_R^T) b + Σ_A + m_A² - c² - 2 b^T (Σ_AR + m_R m_A)
    = b^T Σ_R b + Σ_A - 2 b^T Σ_AR

where we have used the fact that c² = m_A² + b^T m_R m_R^T b - 2 m_A b^T m_R. Taking the derivative w.r.t. b and equating the result to zero (this requires some knowledge of vector calculus: differentiation w.r.t. x gives ∂(x^T A x)/∂x = 2Ax and ∂(x^T y)/∂x = y) yields

  2 Σ_R b - 2 Σ_AR = 0

thus (since Σ_R, and hence Σ_R^{-1}, is symmetric)

  b = Σ_R^{-1} Σ_AR   (3)
  c = m_A - b^T m_R = m_A - Σ_AR^T Σ_R^{-1} m_R.   (4)

Substitution of (3) and (4) into (2) gives

  â(r) = m_A + Σ_AR^T Σ_R^{-1} (r - m_R)

which is clearly the desired result. QED
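The same construction works with moments estimated from data, which is how the LMMSE estimator is typically applied when A is not Gaussian. A minimal sketch (not from the notes; the uniform prior and all values are illustrative) that builds b and c from sample moments via (3) and (4):

```python
import numpy as np

# Sketch: LMMSE estimator from sample moments, b = Sigma_R^{-1} Sigma_AR
# and c = m_A - b^T m_R, for a non-Gaussian (uniform) parameter A.
rng = np.random.default_rng(1)
h = np.array([1.0, -2.0])

A = rng.uniform(-1.0, 1.0, size=100_000)        # non-Gaussian parameter
N = 0.3 * rng.standard_normal((100_000, 2))
R = A[:, None] * h + N                          # one observation per row

m_A, m_R = A.mean(), R.mean(axis=0)
Sigma_R = np.cov(R, rowvar=False)
Sigma_AR = np.array([np.cov(R[:, i], A)[0, 1] for i in range(2)])

b = np.linalg.solve(Sigma_R, Sigma_AR)          # eq. (3)
c = m_A - b @ m_R                               # eq. (4)

a_hat = R @ b + c
print(a_hat.mean(), m_A)                        # E{a_hat(R)} = m_A, as required
```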

2.4 Orthogonality Condition Principle

The orthogonality condition is useful in deriving (L-)MMSE estimators.

Theorem: for LMMSE estimation, the estimation error is (statistically) orthogonal to the observation when m_R = 0 and m_A = 0:

  E{R (â(R) - A)} = 0.

Proof:

  E{R (â(R) - A)} = E{R (Σ_AR^T Σ_R^{-1} R - A)}
                  = E{R R^T} (Σ_R^{-1} Σ_AR) - E{R A}
                  = Σ_R (Σ_R^{-1} Σ_AR) - Σ_AR
                  = 0.  QED

2.4.1 Example: Linear Model

Problem: Verify the orthogonality condition for the linear model, assuming m_A = 0.

Solution: For m_A = 0, we need to verify that E{R (â(R) - A)} = 0. Since E{R A} = h Σ_A, we see that we need to verify that E{R â(R)} = E{R A} = h Σ_A:

  E{R â(R)} = E{R (Σ_A h^T (h Σ_A h^T + Σ_N)^{-1} R)}
            = E{R R^T} (h Σ_A h^T + Σ_N)^{-1} h Σ_A
            = (h Σ_A h^T + Σ_N) (h Σ_A h^T + Σ_N)^{-1} h Σ_A
            = h Σ_A.
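A Monte Carlo check of the orthogonality condition (illustrative, not from the notes): for a zero-mean linear model, every component of E{R (â(R) - A)} should vanish up to sampling noise.

```python
import numpy as np

# Sketch: verify E{R (a_hat(R) - A)} = 0 for a zero-mean linear model.
rng = np.random.default_rng(2)
h = np.array([0.8, -0.4, 1.2])
Sigma_A, sigma2 = 1.0, 0.5

A = np.sqrt(Sigma_A) * rng.standard_normal(200_000)
R = A[:, None] * h + np.sqrt(sigma2) * rng.standard_normal((200_000, 3))

# LMMSE with m_A = 0, m_R = 0: a_hat(r) = Sigma_A h^T (h Sigma_A h^T + Sigma_N)^{-1} r
Sigma_R = Sigma_A * np.outer(h, h) + sigma2 * np.eye(3)
bT = Sigma_A * h @ np.linalg.inv(Sigma_R)
a_hat = R @ bT

err = a_hat - A
print((R * err[:, None]).mean(axis=0))   # close to [0, 0, 0]
```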

3 Non-Bayesian Estimation Techniques

The above techniques cannot be applied when we do not consider A to be a random variable. Another approach is called for, where we would like unbiased estimators with small variances. The most important non-Bayesian estimators are

- the ML (maximum likelihood) estimator,
- the BLU (best linear unbiased) estimator,
- the LS (least squares) estimator.

The latter two will be covered in this section. All expectations are taken for a given value of a. We remind that the ML estimate is given by

  â_ML(r) = arg max_{a ∈ A} p_{R|A}(r|a).

3.1 The BLU estimator

As before, we consider a generic linear estimator:

  â(r) = b^T r + c.

To obtain an unbiased estimator, we need to assume that c = 0 and that E{R} = ha for some known vector h. We introduce

  Σ_R = E{(R - E{R})(R - E{R})^T} = E{R R^T} - a² h h^T

which is assumed to be known to the estimator. Our goal is to find a b that leads to an unbiased estimator with minimal variance for all a.

Unbiasedness: An unbiased estimator must satisfy E{â(R)} = a for all a ∈ A, so that

  b^T h = 1.

Variance of the estimation error: The variance of the estimation error is given by

  V_a = E{(â(R) - a)²}
      = E{(b^T R - a)²}
      = E{(b^T (R - ha))²}   (using b^T h = 1)
      = b^T E{(R - ha)(R - ha)^T} b
      = b^T Σ_R b.

This leads us to the following optimization problem: find the b that minimizes V_a, subject to b^T h = 1. Using a Lagrange multiplier technique, we find we need to minimize

  b^T Σ_R b - λ (b^T h - 1).

This leads to

  b = Σ_R^{-1} h λ

with the constraint b^T h = 1 giving rise to

  λ h^T Σ_R^{-1} h = 1

so that

  b = Σ_R^{-1} h (h^T Σ_R^{-1} h)^{-1}

and

  V_a = (h^T Σ_R^{-1} h)^{-1} h^T Σ_R^{-1} Σ_R Σ_R^{-1} h (h^T Σ_R^{-1} h)^{-1} = (h^T Σ_R^{-1} h)^{-1}.

3.1.1 Example: Linear Gaussian Model

Problem: Derive the BLU estimator for the Linear Gaussian Model.

Solution: The BLU estimator is

  â_BLU(r) = (h^T Σ_R^{-1} h)^{-1} h^T Σ_R^{-1} r

where

  Σ_R = E{(R - m_R)(R - m_R)^T} = E{(R - ha)(R - ha)^T} = E{(ha + N - ha)(ha + N - ha)^T} = Σ_N.

Hence

  â_BLU(r) = (h^T Σ_N^{-1} h)^{-1} h^T Σ_N^{-1} r.
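A short sketch of the BLU estimator in action (not from the notes; all values are illustrative): the parameter is deterministic, the noise is colored, and the estimator weights the observations by Σ_N^{-1} h.

```python
import numpy as np

# Sketch: BLU estimate a_hat = (h^T Sigma_N^{-1} h)^{-1} h^T Sigma_N^{-1} r
# for a fixed, non-random parameter a and colored noise.
rng = np.random.default_rng(3)

h = np.array([1.0, 2.0, 0.5])
Sigma_N = np.diag([0.1, 0.4, 0.2])      # colored (non-white) noise
a_true = 0.7                            # deterministic parameter

r = h * a_true + rng.multivariate_normal(np.zeros(3), Sigma_N)

w = np.linalg.solve(Sigma_N, h)         # Sigma_N^{-1} h
a_blu = (w @ r) / (w @ h)
print(a_true, a_blu, 1.0 / (w @ h))     # estimate and its variance V_a
```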

3.2 The LS estimator

If we refer back to our problem statement, the LS estimator tries to find the a that minimizes the distance between the observation and the reconstructed noiseless observation:

  d = ||r - f(a, 0)||².

When f(a, 0) = ha, we find that

  d = ||r - ha||² = r^T r + a² h^T h - 2a r^T h.

Minimizing w.r.t. a gives us

  2a h^T h - 2 r^T h = 0

and finally

  â_LS(r) = (h^T h)^{-1} h^T r.

3.2.1 Orthogonality condition

The reconstruction error is orthogonal to any reconstructed (noiseless) observation ha:

  (r - h â_LS(r))^T h a = (r - h (h^T h)^{-1} h^T r)^T h a = (r^T h - r^T h) a = 0.

4 Extension to vector estimation

When

  r = Ha + n

with r and n being N x 1 vectors, H an N x M matrix, and a an M x 1 vector, the LMMSE, LS and BLU estimates are given by

  â_LMMSE(r) = m_A + Σ_A H^T (H Σ_A H^T + Σ_N)^{-1} (r - H m_A)
             = m_A + (H^T Σ_N^{-1} H + Σ_A^{-1})^{-1} H^T Σ_N^{-1} (r - H m_A)

  â_LS(r) = (H^T H)^{-1} H^T r

  â_BLU(r) = (H^T Σ_N^{-1} H)^{-1} H^T Σ_N^{-1} r
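The three vector estimators side by side, as a minimal numerical sketch (not from the notes; dimensions and values are illustrative):

```python
import numpy as np

# Sketch: LMMSE, LS and BLU estimates for one realization of r = H a + n.
rng = np.random.default_rng(4)
N_dim, M = 8, 3                       # N x 1 observation, M x 1 parameter

H = rng.standard_normal((N_dim, M))
Sigma_A = np.eye(M)
Sigma_N = 0.1 * np.eye(N_dim)
m_A = np.zeros(M)

a = rng.multivariate_normal(m_A, Sigma_A)
r = H @ a + rng.multivariate_normal(np.zeros(N_dim), Sigma_N)

a_lmmse = m_A + Sigma_A @ H.T @ np.linalg.solve(
    H @ Sigma_A @ H.T + Sigma_N, r - H @ m_A)
a_ls = np.linalg.solve(H.T @ H, H.T @ r)
a_blu = np.linalg.solve(H.T @ np.linalg.solve(Sigma_N, H),
                        H.T @ np.linalg.solve(Sigma_N, r))
print(a, a_lmmse, a_ls, a_blu, sep="\n")
```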

5 Further problems

5.1 Problem 1: relations for Linear Gaussian Models

How are MMSE, LS and BLU related? Consider the scalar case. The estimators are given by

  â_MMSE(r) = m_A + Σ_A h^T (h Σ_A h^T + Σ_N)^{-1} (r - h m_A)
  â_LS(r) = (h^T h)^{-1} h^T r
  â_BLU(r) = (h^T Σ_N^{-1} h)^{-1} h^T Σ_N^{-1} r

5.1.1 No a priori information: MMSE → LS

When m_A = 0 and Σ_A^{-1} → 0 (an uninformative prior), the noise term becomes negligible relative to h Σ_A h^T and we get

  â_MMSE(r) = h^T (h h^T)^{-1} r = (h^T h)^{-1} h^T r = â_LS(r)

(with (h h^T)^{-1} understood as a pseudo-inverse, since h h^T has rank one).

5.1.2 Uncorrelated noise: BLU → LS

When Σ_N = σ² I, the factors σ^{-2} cancel and

  â_BLU(r) = (h^T h)^{-1} h^T r = â_LS(r).

5.1.3 Zero noise: MMSE → LS

For the MMSE estimator, when Σ_N = 0,

  â_MMSE(r) = m_A + h^T (h h^T)^{-1} (r - h m_A) = h^T (h h^T)^{-1} r = â_LS(r).
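A numerical check of two of these limits (illustrative, not from the notes): BLU with white noise coincides with LS exactly, and MMSE with m_A = 0 approaches LS as the prior variance Σ_A grows.

```python
import numpy as np

# Sketch: check BLU -> LS for white noise, and MMSE -> LS as Sigma_A grows.
rng = np.random.default_rng(5)
h = rng.standard_normal(4)
r = rng.standard_normal(4)
sigma2 = 0.3

a_ls = (h @ r) / (h @ h)                   # (h^T h)^{-1} h^T r

w = h / sigma2                             # Sigma_N^{-1} h for Sigma_N = sigma2 * I
a_blu = (w @ r) / (w @ h)                  # sigma2 cancels: identical to a_ls
print(a_ls, a_blu)

for Sigma_A in (1.0, 100.0, 1e6):          # MMSE with m_A = 0, growing prior
    Sigma_R = Sigma_A * np.outer(h, h) + sigma2 * np.eye(4)
    a_mmse = Sigma_A * h @ np.linalg.solve(Sigma_R, r)
    print(Sigma_A, a_mmse)                 # approaches a_ls
```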

5.2 Problem 2: orthogonality

Problem: Derive the LMMSE estimator, starting from the orthogonality condition.

Solution: We introduce Q = A - m_A and S = R - m_R = R - h m_A, and find the LMMSE estimate of Q from the observation S. The orthogonality principle then tells us that

  E{S q̂(S)} = E{S Q} = h Σ_A

with, for some (as yet undetermined) b,

  E{S q̂(S)} = E{S b^T S} = (h Σ_A h^T + Σ_N) b

so that

  b = (h Σ_A h^T + Σ_N)^{-1} h Σ_A.

Hence

  q̂_LMMSE(s) = Σ_A h^T (h Σ_A h^T + Σ_N)^{-1} s.

The estimate of A is then given by

  â_LMMSE(r) = q̂_LMMSE(r - m_R) + m_A = m_A + Σ_A h^T (h Σ_A h^T + Σ_N)^{-1} (r - h m_A).

5.3 Practical example: multi-antenna communication

In this problem, we show how LMMSE and LS can be used when ML or MAP leads to algorithms which are too complex to implement.

5.3.1 Problem

We are interested in the following problem: we transmit a vector of n_T iid data symbols a ∈ A = Ω^{n_T} over an AWGN channel. Here n_T is the number of transmit antennas. The receiver is equipped with n_R receive antennas. We can write the observation as

  r = Ha + n

where H is a (known) n_R x n_T channel matrix and N ~ N(0, σ² I_{n_R}). In any practical communication scheme, Ω is a finite set of size M (e.g., BPSK signaling with Ω = {-1, +1}). The symbols are iid and uniformly distributed over Ω.

1. Determine the ML and MAP estimates of a. Determine the complexity of the receiver (number of operations to estimate a) as a function of n_T.
2. How can LMMSE and LS be used to estimate a? We assume Σ_A = I_{n_T} and m_A = 0. Determine the complexity of the resulting estimators as a function of n_T.

5.3.2 Solution

Solution - part 1: The MAP and ML estimators are considered to be optimal for estimating discrete parameters, in the sense of minimizing the error probability.

  â_ML(r) = arg max_a p_{R|A}(r|a) = arg max_a log p_{R|A}(r|a) = arg max_a (-(1/σ²) ||r - Ha||²) = arg min_a ||r - Ha||²

  â_MAP(r) = arg max_a p_{A|R}(a|r) = arg max_a p_{R|A}(r|a) p_A(a) / p_R(r) = arg max_a p_{R|A}(r|a) = â_ML(r)

where the last step uses the fact that p_A(a) is uniform over Ω^{n_T}.

Complexity: for each a ∈ Ω^{n_T}, we need to compute ||r - Ha||². Hence, the complexity is exponential in the number of transmit antennas.

Special case: Let us assume Ω = {-1, +1}, n_T = 1 and n_R > 1. Then

  â_ML(r) = arg max_{a ∈ Ω} (a h^T r).

In this case we combine the observations from multiple receive antennas. Each observation is weighted with the corresponding channel gain, so that when the channel is unreliable on a given antenna (h_k is small), that antenna makes only a small contribution to our decision statistic.
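The exponential cost of exhaustive ML detection is easy to see in code. A minimal sketch (not from the notes; BPSK with illustrative dimensions) that enumerates all 2^{n_T} candidate symbol vectors:

```python
import itertools
import numpy as np

# Sketch: exhaustive ML detection, a_hat = arg min over Omega^{n_T}
# of ||r - H a||^2, for BPSK with Omega = {-1, +1}.
rng = np.random.default_rng(6)
n_T, n_R, sigma = 4, 6, 0.3

H = rng.standard_normal((n_R, n_T))
a_true = rng.choice([-1.0, 1.0], size=n_T)
r = H @ a_true + sigma * rng.standard_normal(n_R)

best, a_ml = np.inf, None
for cand in itertools.product([-1.0, 1.0], repeat=n_T):   # 2^{n_T} candidates
    a = np.array(cand)
    d = np.sum((r - H @ a) ** 2)                           # ||r - H a||^2
    if d < best:
        best, a_ml = d, a
print(a_true, a_ml)
```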

Solution - part 2: Let us temporarily forget that a lives in a discrete space. We can then introduce the LMMSE and LS estimates as follows:

  â_LMMSE(r) = H^T (H H^T + σ² I_{n_R})^{-1} r
  â_LS(r) = (H^T H)^{-1} H^T r.

Note that â_LMMSE(r) and â_LS(r) may not belong to Ω^{n_T}. Hence, we need to quantize these estimates (mapping the estimate to the closest point in Ω^{n_T}); this can be done on a symbol-per-symbol basis. Generally the LS estimator will have poor performance when H^T H is close to singular. Note also that the LMMSE estimator requires knowledge of σ², while the MAP and ML estimators do not.

Complexity: now the complexity is of the order n_R n_T (since any matrix not depending on r can be pre-computed by the receiver), which is linear in the number of transmit antennas.

Special case: (Comment: there was a mistake in the lecture; these are the corrected results.) Let us assume that we use more transmit than receive antennas: n_T > n_R. In that case the n_T x n_T matrix H^T H is necessarily singular (its rank is at most n_R < n_T), so that LS estimation will not work. The LMMSE estimator requires the inversion of H H^T + σ² I_{n_R}, which is always invertible when σ² > 0. This is a strange situation, where noise is actually helpful in the estimation process.
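A sketch of the two linear receivers with symbol-per-symbol quantization (not from the notes; BPSK with illustrative dimensions). The filter matrices depend only on H and σ², so they can be precomputed; each new observation then costs only O(n_R n_T) operations:

```python
import numpy as np

# Sketch: LMMSE and LS detection for BPSK, followed by sign() quantization.
rng = np.random.default_rng(7)
n_T, n_R, sigma2 = 4, 6, 0.1

H = rng.standard_normal((n_R, n_T))
a_true = rng.choice([-1.0, 1.0], size=n_T)
r = H @ a_true + np.sqrt(sigma2) * rng.standard_normal(n_R)

# Precomputed filters (do not depend on r):
W_lmmse = H.T @ np.linalg.inv(H @ H.T + sigma2 * np.eye(n_R))
W_ls = np.linalg.solve(H.T @ H, H.T)   # fails when H^T H is singular (n_T > n_R)

a_lmmse = np.sign(W_lmmse @ r)         # symbol-per-symbol quantization to {-1, +1}
a_ls = np.sign(W_ls @ r)
print(a_true, a_lmmse, a_ls, sep="\n")
```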
