Technique for Numerical Computation of Cramér-Rao Bound using MATLAB


Hing Cheung So
http://www.ee.cityu.edu.hk/~hcso
Department of Electronic Engineering, City University of Hong Kong

The purpose of this note is to provide an effective means of computing the Cramér-Rao lower bound (CRLB) numerically with the use of symbolic computation in MATLAB. The CRLB is a performance bound: it gives the minimum variance achievable by any unbiased estimator. For deterministic parameter estimation, its derivation requires knowledge of the noise probability density function (PDF), which must be available in closed form so that analytical differentiation is possible. Although there are other variance bounds, the CRLB is the simplest. Suppose the observed vector x has PDF p(x; θ) and is modeled as x = s(θ) + q. Here, s is a known linear/nonlinear function, θ is the parameter vector, and q is the noise vector.

The CRLB for θ can be obtained in two steps:

1. Compute the Fisher information matrix (FIM) I(θ), whose (m,n) entry is

   [I(θ)]_(m,n) = -E[ ∂² ln p(x; θ) / (∂θ_m ∂θ_n) ]

2. The CRLB for θ_m is the (m,m) entry of the inverse FIM, that is, CRLB(θ_m) = [I⁻¹(θ)]_(m,m).

Consider x(n) = A + q(n), n = 1, 2, ..., N, where A is the parameter to be estimated and q(n) is zero-mean white Gaussian noise with variance σ². The CRLB is derived as follows. Ignoring the constant term, the log PDF is ln p(x; A) = -(1/(2σ²)) Σ_n (x(n) - A)², so that ∂² ln p/∂A² = -N/σ² and hence CRLB(A) = σ²/N. The CRLB can be obtained easily in closed form for this simple example.
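As a sanity check on the σ²/N result, one can verify numerically that the variance of the sample-mean estimator attains the bound. The following is a minimal sketch in Python/NumPy (an illustrative cross-check, not part of the original note):

```python
import numpy as np

# Monte Carlo check of CRLB(A) = sigma^2 / N for x(n) = A + q(n):
# the sample mean is the minimum-variance unbiased estimator here,
# so its variance should sit right at the bound.
rng = np.random.default_rng(0)
N, sigma, A = 100, 1.0, 1.0
trials = 20000
x = A + sigma * rng.standard_normal((trials, N))
A_hat = x.mean(axis=1)      # sample-mean estimate, one per trial
crlb = sigma**2 / N         # closed-form bound
print(A_hat.var(), crlb)    # empirical variance vs. bound
```

With 20000 trials the empirical variance agrees with σ²/N to within a few percent.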

How about more complicated problems? We use another simple example to illustrate the numerical calculation. Consider r(n) = A1·exp(j(ω1·n + φ1)) + A2·exp(j(ω2·n + φ2)) + q(n), n = 1, 2, ..., N, where θ = [ω1, ω2, φ1, φ2, A1, A2]^T contains the parameters to be estimated and q(n) is zero-mean white complex Gaussian noise with variance σ². Ignoring the irrelevant constant term, the log PDF is:

ln p(r; θ) = -(1/σ²) Σ_n |r(n) - s(n)|², where s(n) = A1·exp(j(ω1·n + φ1)) + A2·exp(j(ω2·n + φ2))

We can see that the main computation is to find the second-order derivatives of |r(n) - s(n)|² with respect to each pair of parameters. With the use of symbolic computation in MATLAB, we define:

syms w1 w2 p1 p2 A1 A2 n s1 s2 p;
p=(s1-A1*exp(j*w1*n)*exp(j*p1)-A2*exp(j*w2*n)*exp(j*p2))*(s2-A1*exp(-j*w1*n)*exp(-j*p1)-A2*exp(-j*w2*n)*exp(-j*p2));

Here, the first 6 symbols correspond to ω1, ω2, φ1, φ2, A1 and A2, and the remaining symbols are the time index (n), r(n) (s1), its conjugate r*(n) (s2), and |r(n) - s(n)|² (p).

For example, to compute ∂²p/∂ω1² in analytical form, we use:

p11=simple(diff(diff(p,w1),w1))

Note that simple is used to simplify the resultant expression. (In recent MATLAB releases simple has been removed; simplify serves the same purpose.) In a similar manner, we compute all other required second-order derivatives. Note also that we only need to determine the upper triangular elements of the FIM, as the matrix is symmetric. For details, please refer to comp_diff.m, available at:

http://www.ee.cityu.edu.hk/~hcso/comp_diff.m
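For readers without the MATLAB Symbolic Math Toolbox, the same differentiation can be mirrored in Python/SymPy. This is an assumed equivalent of the symbolic setup and the p11 computation, not part of the original files:

```python
import sympy as sp

# Mirror of the MATLAB symbolic setup: p = |r(n) - s(n)|^2, with the
# sample r(n) and its conjugate treated as free symbols s1 and s2.
w1, w2, p1, p2, A1, A2, n, s1, s2 = sp.symbols('w1 w2 p1 p2 A1 A2 n s1 s2')
j = sp.I
p = (s1 - A1*sp.exp(j*(w1*n + p1)) - A2*sp.exp(j*(w2*n + p2))) \
    * (s2 - A1*sp.exp(-j*(w1*n + p1)) - A2*sp.exp(-j*(w2*n + p2)))
p11 = sp.simplify(sp.diff(p, w1, 2))  # counterpart of simple(diff(diff(p,w1),w1))
print(p11)
```

The remaining upper-triangular derivatives follow the same pattern, e.g. sp.diff(p, w1, w2) for p12.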

comp_diff.m

%Parameter vector consists of symbols:
%frequencies: w1 and w2;
%initial phases: p1 and p2;
%real-valued amplitudes: A1 and A2
%Other defined symbols are:
%r[n] and its conjugate: s1 and s2;
%time index: n;
%log PDF term: p
%pij stands for the (i,j) entry of the FIM (individual and without expectation)
%e.g., p11 corresponds to differentiation of p w.r.t. w1 twice

syms w1 w2 p1 p2 A1 A2 n s1 s2 p;
p=(s1-A1*exp(j*w1*n)*exp(j*p1)-A2*exp(j*w2*n)*exp(j*p2))*(s2-A1*exp(-j*w1*n)*exp(-j*p1)-A2*exp(-j*w2*n)*exp(-j*p2));
p11=simple(diff(diff(p,w1),w1))
p12=simple(diff(diff(p,w1),w2))
p13=simple(diff(diff(p,w1),p1))
p14=simple(diff(diff(p,w1),p2))
p15=simple(diff(diff(p,w1),A1))
p16=simple(diff(diff(p,w1),A2))
p22=simple(diff(diff(p,w2),w2))
p23=simple(diff(diff(p,w2),p1))
p24=simple(diff(diff(p,w2),p2))
p25=simple(diff(diff(p,w2),A1))
p26=simple(diff(diff(p,w2),A2))
p33=simple(diff(diff(p,p1),p1))
p34=simple(diff(diff(p,p1),p2))
p35=simple(diff(diff(p,p1),A1))
p36=simple(diff(diff(p,p1),A2))
p44=simple(diff(diff(p,p2),p2))
p45=simple(diff(diff(p,p2),A1))
p46=simple(diff(diff(p,p2),A2))
p55=simple(diff(diff(p,A1),A1))
p56=simple(diff(diff(p,A1),A2))
p66=simple(diff(diff(p,A2),A2))

Then we copy the resultant expressions to another MATLAB file for computation. Here, we assign:

s(n)=A1*exp(j*w1*n)*exp(j*p1)+A2*exp(j*w2*n)*exp(j*p2);

and replace s1 and s2 by s(n) and conj(s(n)), respectively; this substitution corresponds to taking the expectation, since the expected value of r(n) is s(n). We also need to assign numerical values to the parameters: in comp_fim.m, A1 = A2 = ω1 = φ1 = 1 and ω2 = φ2 = 2 are used (the two cisoids must have distinct frequencies, otherwise the FIM is singular). For details, please refer to comp_fim.m, available at:

http://www.ee.cityu.edu.hk/~hcso/comp_fim.m
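The expectation step can likewise be sketched in Python/SymPy (again an assumed analogue, not the original comp_fim.m). Substituting s1 -> s(n) and s2 -> conj(s(n)) into a second derivative yields the corresponding FIM summand; for the (1,1) entry the result reduces to 2·A1²·n², showing the familiar n² growth of frequency information:

```python
import sympy as sp

# Expectation step: since E[r(n)] = s(n), substituting s1 -> s(n) and
# s2 -> conj(s(n)) into d^2 p / dw1^2 gives the (1,1) FIM summand.
w1, w2, p1, p2, A1, A2, n = sp.symbols('w1 w2 p1 p2 A1 A2 n', real=True)
s1, s2 = sp.symbols('s1 s2')
p = (s1 - A1*sp.exp(sp.I*(w1*n + p1)) - A2*sp.exp(sp.I*(w2*n + p2))) \
    * (s2 - A1*sp.exp(-sp.I*(w1*n + p1)) - A2*sp.exp(-sp.I*(w2*n + p2)))
p11 = sp.diff(p, w1, 2)
s_n = A1*sp.exp(sp.I*(w1*n + p1)) + A2*sp.exp(sp.I*(w2*n + p2))
fim11 = sp.simplify(p11.subs({s1: s_n, s2: sp.conjugate(s_n)}))
print(fim11)  # mathematically equal to 2*A1**2*n**2 (before the 1/sigma^2 scaling)
```

Summing this over n = 1, ..., N and dividing by σ² reproduces the (1,1) entry accumulated in comp_fim.m.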

comp_fim.m

%Finding FIM for two cisoids
%As an illustration:
%A1=A2=w1=p1=1 and w2=p2=2
%Observation length is N=100
%Standard deviation of the zero-mean white noise is sigma=1

N=100; % data length
sigma=1; % noise standard deviation
fim=zeros(6,6); %initialize FIM
A1=1; A2=1; w1=1; w2=2; p1=1; p2=2;
for n=1:N
s(n)=A1*exp(j*w1*n)*exp(j*p1)+A2*exp(j*w2*n)*exp(j*p2);
fim(1,1)=fim(1,1)+A1*n^2*(-2*A2*cos(w1*n+p1-w2*n-p2)+exp(i*(w1*n+p1))*conj(s(n))+exp(-i*(w1*n+p1))*s(n));
fim(1,2)=fim(1,2)+2*A1*n^2*A2*cos(w1*n+p1-w2*n-p2);
fim(1,3)=fim(1,3)+A1*n*(-2*A2*cos(w1*n+p1-w2*n-p2)+exp(i*(w1*n+p1))*conj(s(n))+exp(-i*(w1*n+p1))*s(n));
fim(1,4)=fim(1,4)+2*A1*n*A2*cos(w1*n+p1-w2*n-p2);
fim(1,5)=fim(1,5)-n*(2*A2*sin(w1*n+p1-w2*n-p2)+i*conj(s(n))*exp(i*(w1*n+p1))-i*s(n)*exp(-i*(w1*n+p1)));
fim(1,6)=fim(1,6)-2*n*A1*sin(w1*n+p1-w2*n-p2);
fim(2,2)=fim(2,2)+A2*n^2*(-2*A1*cos(w1*n+p1-w2*n-p2)+exp(i*(w2*n+p2))*conj(s(n))+exp(-i*(w2*n+p2))*s(n));
fim(2,3)=fim(2,3)+2*A1*n*A2*cos(w1*n+p1-w2*n-p2);
fim(2,4)=fim(2,4)+A2*n*(-2*A1*cos(w1*n+p1-w2*n-p2)+exp(i*(w2*n+p2))*conj(s(n))+exp(-i*(w2*n+p2))*s(n));
fim(2,5)=fim(2,5)+2*A2*n*sin(w1*n+p1-w2*n-p2);
fim(2,6)=fim(2,6)-n*(-2*A1*sin(w1*n+p1-w2*n-p2)+i*conj(s(n))*exp(i*(w2*n+p2))-i*s(n)*exp(-i*(w2*n+p2)));
fim(3,3)=fim(3,3)+A1*(-2*A2*cos(w1*n+p1-w2*n-p2)+exp(i*(w1*n+p1))*conj(s(n))+exp(-i*(w1*n+p1))*s(n));
fim(3,4)=fim(3,4)+2*A1*A2*cos(w1*n+p1-w2*n-p2);
fim(3,5)=fim(3,5)-2*A2*sin(w1*n+p1-w2*n-p2)-i*conj(s(n))*exp(i*(w1*n+p1))+i*s(n)*exp(-i*(w1*n+p1));
fim(3,6)=fim(3,6)-2*A1*sin(w1*n+p1-w2*n-p2);

fim(4,4)=fim(4,4)+A2*(-2*A1*cos(w1*n+p1-w2*n-p2)+exp(i*(w2*n+p2))*conj(s(n))+exp(-i*(w2*n+p2))*s(n));
fim(4,5)=fim(4,5)+2*A2*sin(w1*n+p1-w2*n-p2);
fim(4,6)=fim(4,6)+2*A1*sin(w1*n+p1-w2*n-p2)-i*conj(s(n))*exp(i*(w2*n+p2))+i*s(n)*exp(-i*(w2*n+p2));
fim(5,5)=fim(5,5)+2;
fim(5,6)=fim(5,6)+2*cos(w1*n+p1-w2*n-p2);
fim(6,6)=fim(6,6)+2;
end
fim(2,1)=fim(1,2); fim(3,1)=fim(1,3); fim(3,2)=fim(2,3);
fim(4,1)=fim(1,4); fim(4,2)=fim(2,4); fim(4,3)=fim(3,4);
fim(5,1)=fim(1,5); fim(5,2)=fim(2,5); fim(5,3)=fim(3,5); fim(5,4)=fim(4,5);
fim(6,1)=fim(1,6); fim(6,2)=fim(2,6); fim(6,3)=fim(3,6); fim(6,4)=fim(4,6); fim(6,5)=fim(5,6);
fim=fim./sigma^2;
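As an independent cross-check of the assembled matrix, the same FIM follows from the standard complex-Gaussian identity FIM = (2/σ²)·Re(J^H·J), where J is the Jacobian of s(n) with respect to [ω1, ω2, φ1, φ2, A1, A2]. Below is a sketch in Python/NumPy using the same parameter values (an illustrative port, not part of the original files):

```python
import numpy as np

# FIM for two cisoids via the Gaussian identity FIM = (2/sigma^2) Re(J^H J),
# with J[n, i] = d s(n) / d theta_i and theta = [w1, w2, p1, p2, A1, A2].
N, sigma = 100, 1.0
A1, A2, w1, w2, p1, p2 = 1.0, 1.0, 1.0, 2.0, 1.0, 2.0
n = np.arange(1, N + 1)
e1 = np.exp(1j * (w1 * n + p1))          # first cisoid, unit amplitude
e2 = np.exp(1j * (w2 * n + p2))          # second cisoid, unit amplitude
J = np.column_stack([1j * n * A1 * e1,   # d s / d w1
                     1j * n * A2 * e2,   # d s / d w2
                     1j * A1 * e1,       # d s / d p1
                     1j * A2 * e2,       # d s / d p2
                     e1,                 # d s / d A1
                     e2])                # d s / d A2
fim = (2.0 / sigma**2) * np.real(J.conj().T @ J)
crlb = np.diag(np.linalg.inv(fim))       # CRLBs for [w1, w2, p1, p2, A1, A2]
print(crlb)
```

Inverting the FIM and reading off the diagonal gives the six CRLBs directly; the matrix should agree with the one accumulated by comp_fim.m.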