Linear models. $x = H\theta + w$, where $w \sim N(0, \sigma^2 I)$ and $H \in \mathbb{R}^{n \times p}$. The matrix $H$ is called the observation matrix or design matrix.


1 Linear models

As the first approach to estimator design, we consider the class of problems that can be represented by a linear model. In general, finding the MVUE is difficult, but when the linear model is valid, the task is straightforward. A model with parameters $\theta \in \mathbb{R}^{p \times 1}$ and data $x \in \mathbb{R}^{n \times 1}$ is linear if it is of the form
$$x = H\theta + w,$$
where $w \sim N(0, \sigma^2 I)$ and $H \in \mathbb{R}^{n \times p}$. The matrix $H$ is called the observation matrix or design matrix.

2 Linear models

For example, the "DC level in WGN" problem belongs to this class with
$$x = [x[0], x[1], \ldots, x[N-1]]^T, \quad w = [w[0], w[1], \ldots, w[N-1]]^T, \quad \theta = [A], \quad H = [\underbrace{1, 1, \ldots, 1}_{N \text{ times}}]^T.$$
With these definitions, $x[n] = A \cdot 1 + w[n]$ holds for all $n = 0, 1, \ldots, N-1$.
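For the DC level model the general MVUE formula reduces to the sample mean, since $(H^T H)^{-1} H^T x = \frac{1}{N}\sum_{n=0}^{N-1} x[n]$. A minimal Matlab sketch of this (the values of N, A and the noise level are illustrative assumptions):

% DC level in WGN: the linear-model MVUE equals the sample mean.
N = 100;
A = 3;                         % true DC level (assumed)
sigma = 2;                     % noise standard deviation (assumed)
x = A + sigma*randn(N,1);      % observed data x = H*theta + w
H = ones(N,1);                 % observation matrix of the DC model
Ahat = (H'*H)\(H'*x);          % MVUE: (H'H)^{-1} H'x
fprintf('Ahat = %.4f, mean(x) = %.4f\n', Ahat, mean(x));   % identical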

3 Linear models (cont.)

Fitting a straight line to a set of data also belongs to this class. In this case the model is
$$x[n] = A + Bn + w[n], \quad n = 0, 1, \ldots, N-1,$$
and the problem is to find the MVU estimators for $A$ and $B$, assuming $w[n] \sim N(0, \sigma^2)$.

4 Linear models (cont.)

In matrix form $x = H\theta + w$, or
$$\underbrace{\begin{pmatrix} x[0] \\ x[1] \\ \vdots \\ x[N-1] \end{pmatrix}}_{x} = \underbrace{\begin{pmatrix} 1 & 0 \\ 1 & 1 \\ 1 & 2 \\ \vdots & \vdots \\ 1 & N-1 \end{pmatrix}}_{H} \underbrace{\begin{pmatrix} A \\ B \end{pmatrix}}_{\theta} + \underbrace{\begin{pmatrix} w[0] \\ w[1] \\ \vdots \\ w[N-1] \end{pmatrix}}_{w}$$
The matrix $H$ is called the observation matrix.

5 Linear models: finding the MVUE

The nice thing about linear models is that the MVUE given by the CRLB theorem can always be found. More specifically, the factorization
$$\frac{\partial \ln p(x;\theta)}{\partial \theta} = I(\theta)(g(x) - \theta)$$
can always be done. According to the CRLB theorem for the vector parameter case, $g(x)$ is then the MVUE. To see what the factorization looks like, let's calculate the derivative of the log-likelihood function.

6 Linear models: finding the MVUE (cont.)

The likelihood function for each sample $x[n]$ is now
$$p(x[n]; \theta) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[-\frac{1}{2\sigma^2}\left(x[n] - [H\theta]_n\right)^2\right]$$
and the joint probability for all samples is
$$p(x; \theta) = \prod_{n=0}^{N-1} p(x[n]; \theta) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left[-\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} \left(x[n] - [H\theta]_n\right)^2\right]$$
or, in vector form,
$$p(x; \theta) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left[-\frac{1}{2\sigma^2}(x - H\theta)^T(x - H\theta)\right]$$

7 Linear models: finding the MVUE (cont.)

After taking the logarithm and differentiating, we get
$$\frac{\partial \ln p(x; \theta)}{\partial \theta} = \frac{\partial}{\partial \theta}\left[-\frac{N}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}(x - H\theta)^T(x - H\theta)\right] = -\frac{1}{2\sigma^2}\frac{\partial}{\partial \theta}\left[x^T x - 2x^T H\theta + \theta^T H^T H\theta\right]$$
It can be shown that for any vector $v$ and any symmetric matrix $M$ the following differentiation rules hold:
$$\frac{\partial}{\partial \theta} v^T\theta = v, \qquad \frac{\partial}{\partial \theta} \theta^T M\theta = 2M\theta.$$
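These identities are easy to sanity-check with finite differences; below is a small Matlab sketch (the dimension, vector, matrix and test point are arbitrary assumptions):

% Check d(v'*theta)/dtheta = v and d(theta'*M*theta)/dtheta = 2*M*theta numerically.
p = 3; h = 1e-6;
v = randn(p,1);
M = randn(p); M = (M + M')/2;    % symmetrize M
theta = randn(p,1);
g1 = zeros(p,1); g2 = zeros(p,1);
for k = 1:p
    e = zeros(p,1); e(k) = h;    % perturb the k-th component
    g1(k) = (v'*(theta+e) - v'*(theta-e)) / (2*h);
    g2(k) = ((theta+e)'*M*(theta+e) - (theta-e)'*M*(theta-e)) / (2*h);
end
disp([g1, v]);                   % the two columns agree
disp([g2, 2*M*theta]);           % the two columns agree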

8 Linear models: finding the MVUE (cont.)

Using these rules, we can evaluate the above formula:
$$\frac{\partial \ln p(x; \theta)}{\partial \theta} = -\frac{1}{2\sigma^2}\left[-2H^T x + 2H^T H\theta\right] = \frac{1}{\sigma^2}\left[H^T x - H^T H\theta\right]$$

9 Linear models: finding the MVUE

The MVUE $g(x)$ is given by the following factorization:
$$\frac{\partial \ln p(x; \theta)}{\partial \theta} = I(\theta)(g(x) - \theta).$$
If the square matrix $H^T H$ is invertible (it usually is; we will return to this issue later), we can cleverly multiply by the identity matrix $I = (H^T H)(H^T H)^{-1}$:
$$\frac{\partial \ln p(x; \theta)}{\partial \theta} = \frac{1}{\sigma^2}\left[H^T x - H^T H\theta\right] = \frac{H^T H}{\sigma^2}\left[(H^T H)^{-1} H^T x - \theta\right]$$

10 Linear models: finding the MVUE

Comparing this with the required factorization of the CRLB theorem,
$$\frac{\partial \ln p(x; \theta)}{\partial \theta} = I(\theta)(g(x) - \theta),$$
we can see immediately that the MVUE $g(x)$ exists and is given by
$$\hat{\theta} = g(x) = (H^T H)^{-1} H^T x.$$

11 Linear models: finding the MVUE (cont.)

Furthermore, the Fisher information matrix is
$$I(\theta) = \frac{H^T H}{\sigma^2},$$
which means that the covariance matrix of the estimator is its inverse:
$$C_{\hat{\theta}} = \sigma^2 (H^T H)^{-1}.$$

12 Linear models: theorem

MVU estimator for the linear model: If the observed data can be modeled as
$$x = H\theta + w \tag{1}$$
where $x$ is an $N \times 1$ vector of observations, $H$ is a known $N \times p$ observation matrix (with $N > p$) of rank $p$, $\theta$ is a $p \times 1$ vector of parameters to be estimated, and $w$ is an $N \times 1$ noise vector with pdf $N(0, \sigma^2 I)$, then the MVU estimator is
$$\hat{\theta} = (H^T H)^{-1} H^T x \tag{2}$$
and the covariance matrix of $\hat{\theta}$ is
$$C_{\hat{\theta}} = \sigma^2 (H^T H)^{-1} \tag{3}$$
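The theorem invites an empirical check: estimate $\theta$ over many noise realizations and compare the sample covariance of the estimates with (3). A Matlab sketch, with all sizes and parameter values assumed for illustration:

% Monte Carlo check of the MVUE covariance sigma^2*(H'H)^{-1}.
N = 50; M = 5000;                % data length and number of realizations (assumed)
sigma_sq = 4;
n = (0:N-1)';
H = [ones(N,1), n];              % line-fitting model as the example
theta = [1; 2];
est = zeros(M, 2);
for m = 1:M
    x = H*theta + sqrt(sigma_sq)*randn(N,1);
    est(m,:) = (H \ x)';         % MVUE for this realization
end
disp(cov(est));                  % sample covariance of the estimates
disp(sigma_sq * inv(H'*H));      % theoretical covariance (3)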

13 Linear models: theorem (cont.)

Moreover, the MVU estimator is efficient in that it attains the CRLB.

Proof. We have already proven everything except the fact that the estimator is unbiased. The unbiasedness is easily seen:
$$E[\hat{\theta}] = E[(H^T H)^{-1} H^T x] = (H^T H)^{-1} H^T E[x] = (H^T H)^{-1} H^T H\theta = \theta.$$
(Here we used the fact that $E[x] = H\theta + E[w] = H\theta$.)

14 Examples: Line fitting

In the line fitting case the equation was
$$\underbrace{\begin{pmatrix} x[0] \\ x[1] \\ \vdots \\ x[N-1] \end{pmatrix}}_{x} = \underbrace{\begin{pmatrix} 1 & 0 \\ 1 & 1 \\ 1 & 2 \\ \vdots & \vdots \\ 1 & N-1 \end{pmatrix}}_{H} \underbrace{\begin{pmatrix} A \\ B \end{pmatrix}}_{\theta} + \underbrace{\begin{pmatrix} w[0] \\ w[1] \\ \vdots \\ w[N-1] \end{pmatrix}}_{w}$$

15 Examples: Line fitting (cont.)

Once we observe the data $x$ and assume this model, the MVU estimator is
$$\hat{\theta} = (H^T H)^{-1} H^T x.$$
Writing the matrices open, we have
$$\begin{pmatrix} \hat{A} \\ \hat{B} \end{pmatrix} = (H^T H)^{-1} \begin{pmatrix} 1 & 1 & \cdots & 1 \\ 0 & 1 & \cdots & N-1 \end{pmatrix} \begin{pmatrix} x[0] \\ x[1] \\ \vdots \\ x[N-1] \end{pmatrix}$$

16 Examples: Line fitting (cont.)

Now,
$$H^T H = \begin{pmatrix} N & \sum_{n=0}^{N-1} n \\ \sum_{n=0}^{N-1} n & \sum_{n=0}^{N-1} n^2 \end{pmatrix} = \begin{pmatrix} N & \frac{N(N-1)}{2} \\ \frac{N(N-1)}{2} & \frac{N(N-1)(2N-1)}{6} \end{pmatrix}$$
and we can show that the inverse is
$$(H^T H)^{-1} = \begin{pmatrix} \frac{2(2N-1)}{N(N+1)} & -\frac{6}{N(N+1)} \\ -\frac{6}{N(N+1)} & \frac{12}{N(N^2-1)} \end{pmatrix}$$
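These closed-form entries are easy to spot-check numerically, for instance:

% Spot-check of the closed-form (H'H)^{-1} for the line-fitting model.
N = 10; n = (0:N-1)';
H = [ones(N,1), n];
inv_closed = [2*(2*N-1)/(N*(N+1)), -6/(N*(N+1));
              -6/(N*(N+1)),        12/(N*(N^2-1))];
disp(norm(inv(H'*H) - inv_closed));   % zero up to floating point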

17 Examples: Line fitting (cont.)

Finally,
$$\begin{pmatrix} \hat{A} \\ \hat{B} \end{pmatrix} = \begin{pmatrix} \frac{2(2N-1)}{N(N+1)} & -\frac{6}{N(N+1)} \\ -\frac{6}{N(N+1)} & \frac{12}{N(N^2-1)} \end{pmatrix} \begin{pmatrix} \sum_{n=0}^{N-1} x[n] \\ \sum_{n=0}^{N-1} n\,x[n] \end{pmatrix}$$
Below is the result of one test run, with $\sigma^2 = 1000$, $A = 1$ and $B = 2$. In this realization, the result was $\hat{A} = \ldots$ and $\hat{B} = \ldots$
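A test run like this can be reproduced in a few lines of Matlab; the sketch below uses the stated values $\sigma^2 = 1000$, $A = 1$, $B = 2$, while the data length N is an assumed value:

% One test run of the line-fitting MVUE (N is assumed).
N = 100;
sigma_sq = 1000;
A = 1; B = 2;
n = (0:N-1)';
H = [ones(N,1), n];
x = A + B*n + sqrt(sigma_sq)*randn(N,1);
thetahat = H \ x;                % same as inv(H'*H)*H'*x
fprintf('Ahat = %.4f, Bhat = %.4f\n', thetahat(1), thetahat(2));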

18 Examples: Line fitting (cont.)

The covariance matrix (the inverse of the Fisher information matrix) is
$$C_{\hat{\theta}} = \sigma^2 (H^T H)^{-1}.$$
Its numerical value in this test tells that the estimates $\hat{A}$ will have a much higher variance than the estimates $\hat{B}$.

19 Examples: Line fitting (cont.)

We can validate this by estimating the parameters from 1000 noise realizations. The histograms and the corresponding variances are plotted below.

[Figure: histograms of the estimates. Left: estimates for A, with theoretical and sample variance. Right: estimates for B, with theoretical and sample variance.]

20 Amplitude of a sinusoid

So far we have considered problems where the fitted function itself was linear (a straight line or a constant). The model also allows other cases, as long as the relationship between the parameters and the data is linear. These include, for example, estimation of the amplitude of a known sinusoid. Consider the model
$$x[n] = A_1 \cos(2\pi f_1 n + \phi_1) + A_2 \cos(2\pi f_2 n + \phi_2) + B + w[n],$$
for $n = 0, 1, \ldots, N-1$, where $f_1, f_2, \phi_1, \phi_2$ are known and $A_1$, $A_2$ and $B$ are the unknowns.

21 Amplitude of a sinusoid (cont.)

Then the linear model $x = H\theta + w$ is applicable, namely
$$\underbrace{\begin{pmatrix} x[0] \\ x[1] \\ x[2] \\ \vdots \\ x[N-1] \end{pmatrix}}_{x} = \underbrace{\begin{pmatrix} \cos(\phi_1) & \cos(\phi_2) & 1 \\ \cos(2\pi f_1 + \phi_1) & \cos(2\pi f_2 + \phi_2) & 1 \\ \cos(4\pi f_1 + \phi_1) & \cos(4\pi f_2 + \phi_2) & 1 \\ \vdots & \vdots & \vdots \\ \cos(2\pi(N-1) f_1 + \phi_1) & \cos(2\pi(N-1) f_2 + \phi_2) & 1 \end{pmatrix}}_{H} \underbrace{\begin{pmatrix} A_1 \\ A_2 \\ B \end{pmatrix}}_{\theta} + \underbrace{\begin{pmatrix} w[0] \\ w[1] \\ w[2] \\ \vdots \\ w[N-1] \end{pmatrix}}_{w}$$

22 Amplitude of a sinusoid (cont.)

Again, the MVU estimator is $\hat{\theta} = (H^T H)^{-1} H^T x$. The Matlab code for this is below. Note that now we're generating the simulated data exactly according to our model. It's interesting to see how deviations from the model affect the performance; try it. Try also other curves instead of the sinusoids and lines.

23 Code

% Let's generate a test case first:
N = 200;
n = (0:N-1)';
sigma_sq = 10;   % Variance of WGN
w = sqrt(sigma_sq)*randn(N,1);
A = 1;           % This is an unknown for the estimator
B = -2;          % This is an unknown for the estimator
C = 10;          % This is an unknown for the estimator
f1 = 0.05;       % This parameter the estimator knows
f2 = 0.02;       % This parameter the estimator knows
theta = [A; B; C];

24 Code (cont.)

H = [cos(2*pi*f1*n+pi/4), cos(2*pi*f2*n-pi/10), ones(N,1)];
x = H*theta + w;   % This is the observed data.

% Now let's try to estimate theta from the data x.
% Note: below is Matlab's preferred way to compute
% thest = inv(H'*H)*H'*x
thest = H \ x;

plot(n, H*theta, 'b-', 'LineWidth', 2);
hold on
plot(n, x, 'go', 'LineWidth', 2);
plot(n, H*thest, 'r-', 'LineWidth', 2);
hold off

25 Amplitude of a sinusoid, results

Below is the result of one example run.

[Figure: true model, noisy data, and estimated sinusoid.]

In this case $\hat{\theta} = [1.2128, \ldots, \ldots]^T$, while the true $\theta = [1, -2, 10]^T$.

26 Amplitude of a sinusoid, results (cont.)

The covariance matrix $C_{\hat{\theta}} = \sigma^2 (H^T H)^{-1}$ is diagonal in this case.

27 Linear models: other examples in Kay's book

Curve fitting: for example, the gravitational force can be modeled using a second order polynomial:
$$x(t_n) = \theta_1 + \theta_2 t_n + \theta_3 t_n^2 + w(t_n), \quad n = 0, \ldots, N-1.$$
In matrix form, this is given by $x = H\theta + w$, or

28 Linear models: other examples in Kay's book (cont.)

$$\begin{pmatrix} x(t_0) \\ x(t_1) \\ \vdots \\ x(t_{N-1}) \end{pmatrix} = \begin{pmatrix} 1 & t_0 & t_0^2 \\ 1 & t_1 & t_1^2 \\ \vdots & \vdots & \vdots \\ 1 & t_{N-1} & t_{N-1}^2 \end{pmatrix} \begin{pmatrix} \theta_1 \\ \theta_2 \\ \theta_3 \end{pmatrix} + \begin{pmatrix} w_0 \\ w_1 \\ \vdots \\ w_{N-1} \end{pmatrix}$$
Notice that for polynomial models the matrix $H$ has a special form and is called a Vandermonde matrix.
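As a concrete illustration, the polynomial model is fitted with the same backslash solve; the sample times and true coefficients below are assumptions made for the demo:

% Second-order polynomial (curve) fit via the linear model.
N = 100;
t = linspace(0, 5, N)';          % sample times (assumed)
theta = [2; -1; 0.5];            % true coefficients (assumed)
H = [ones(N,1), t, t.^2];        % Vandermonde-type observation matrix
x = H*theta + 0.5*randn(N,1);
thetahat = H \ x;                % MVUE of the polynomial coefficients
disp([theta, thetahat]);         % true vs. estimated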

29 Linear models: other examples in Kay's book (cont.)

The nice property of the linear model is that you can try inserting whatever functions you can imagine, and let the formula decide whether they are useful or not. As an example, below is a data set with two linear models fitted to it.

30 Linear models: other examples in Kay's book (cont.)

[Figure: two model fits to the same data (axes x(n), y(n)). Left panel: Model: y = 0.406*x + ..., with its MSE. Right panel: Model: y = 0.002*x^2 + ...*x + ..., with its MSE.]

Below are some additional models (although not very suitable ones).

31 Linear models: other examples in Kay's book (cont.)

[Figure: two more model fits (axes x(n), y(n)). Left panel: Model: y = 2.368*cos(2*pi*0.01*x) + ...*(1+x) + ..., with its MSE. Right panel: Model: y = ...*sqrt(x) + ...*log(1+x) + 0.000*x + ...*x^2, with its MSE.]

Note that the MSE is a good indicator of model suitability. We will discuss this later in the context of sequential least squares.

32 Linear models: other examples in Kay's book (cont.)

Fourier analysis:
$$x[n] = \sum_{k=1}^{M} a_k \cos\left(\frac{2\pi k n}{N}\right) + \sum_{k=1}^{M} b_k \sin\left(\frac{2\pi k n}{N}\right) + w[n],$$
with $n = 0, 1, \ldots, N-1$. Now
$$\theta = [a_1, a_2, \ldots, a_M, b_1, b_2, \ldots, b_M]^T$$
and

33 Linear models: other examples in Kay's book (cont.)

$$H = \begin{pmatrix} 1 & \cdots & 1 & 0 & \cdots & 0 \\ \cos\left(\frac{2\pi}{N}\right) & \cdots & \cos\left(\frac{2M\pi}{N}\right) & \sin\left(\frac{2\pi}{N}\right) & \cdots & \sin\left(\frac{2M\pi}{N}\right) \\ \cos\left(\frac{2\pi \cdot 2}{N}\right) & \cdots & \cos\left(\frac{2M\pi \cdot 2}{N}\right) & \sin\left(\frac{2\pi \cdot 2}{N}\right) & \cdots & \sin\left(\frac{2M\pi \cdot 2}{N}\right) \\ \vdots & & \vdots & \vdots & & \vdots \\ \cos\left(\frac{2\pi(N-1)}{N}\right) & \cdots & \cos\left(\frac{2M\pi(N-1)}{N}\right) & \sin\left(\frac{2\pi(N-1)}{N}\right) & \cdots & \sin\left(\frac{2M\pi(N-1)}{N}\right) \end{pmatrix}$$

The MVU estimator results in the usual DFT coefficients, as one could expect.
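The DFT connection can be demonstrated directly: the columns of $H$ are orthogonal with $H^T H = (N/2) I$, so the MVUE is $\hat{\theta} = (2/N) H^T x$, which matches the scaled DFT coefficients. A Matlab sketch (the sizes and true coefficients are assumed):

% Fourier-analysis linear model: the MVUE reproduces scaled DFT coefficients.
N = 64; M = 3;                   % assumed sizes, with M < N/2
n = (0:N-1)';
H = [cos(2*pi*n*(1:M)/N), sin(2*pi*n*(1:M)/N)];
theta = [1; 0.5; -2; 0; 3; 1];   % true a_k and b_k (assumed)
x = H*theta + 0.1*randn(N,1);
thetahat = H \ x;                % equals (2/N)*H'*x here
X = fft(x);
ab_dft = [2*real(X(2:M+1))/N; -2*imag(X(2:M+1))/N];   % a_k, b_k from the DFT
disp([thetahat, ab_dft]);        % columns agree up to floating point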

34 Linear models: other examples in Kay's book (cont.)

System identification: in principle, any linear system can be modeled using a FIR filter. In the system identification context, we measure the input and the output of an unknown system ("black box") and try to model its properties with a FIR filter. The problem is essentially estimating the FIR impulse response, and thus it's natural to formulate the problem as a linear model.

35 Linear models: other examples in Kay's book (cont.)

Denote the input by $u[n]$ and the output by $x[n]$, $n = 0, 1, \ldots, N-1$. Also denote the FIR impulse response by $h[k]$, $k = 0, 1, \ldots, p-1$. Then our model for the measured output data is
$$x[n] = \sum_{k=0}^{p-1} h[k]\, u[n-k] + w[n], \quad n = 0, 1, \ldots, N-1,$$
or in matrix form
$$x = \underbrace{\begin{pmatrix} u[0] & 0 & \cdots & 0 \\ u[1] & u[0] & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ u[N-1] & u[N-2] & \cdots & u[N-p] \end{pmatrix}}_{H} \underbrace{\begin{pmatrix} h[0] \\ h[1] \\ \vdots \\ h[p-1] \end{pmatrix}}_{\theta} + w.$$

36 Linear models: other examples in Kay's book (cont.)

Because this is in linear model form (assuming $w[n]$ is WGN), the minimum variance FIR coefficient vector is $\hat{\theta} = (H^T H)^{-1} H^T x$. Kay continues the discussion by asking: "What is the best selection for $u[n]$?" If we can select the input sequence, which one produces the smallest variance? Answer: any sequence whose covariance matrix is diagonal, that is, any (pseudo)random sequence.
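A minimal system identification sketch along these lines (the true impulse response and noise level are assumptions):

% FIR system identification via the linear model.
N = 500; p = 4;
u = randn(N,1);                        % (pseudo)random white input
h = [0.8; -0.4; 0.2; 0.1];             % unknown FIR taps (assumed)
H = toeplitz(u, [u(1), zeros(1,p-1)]); % convolution matrix: row n holds u[n],...,u[n-p+1]
x = H*h + 0.1*randn(N,1);              % measured output in WGN
hhat = H \ x;                          % minimum variance estimate of the taps
disp([h, hhat]);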

37 Automatic Bacteria Counting from Microscope Images

The next example considers automatic counting and measuring of DAPI-stained bacteria from microscope images. DAPI (4',6-diamidino-2-phenylindole) is a fluorescent stain molecule that binds strongly to DNA. When excited by ultraviolet light (wavelength near 358 nm), it emits longer wavelengths (near 461 nm, which is blue light). DAPI staining is widely used in biology and medicine for highlighting cells for counting, tracking and other purposes.

38 Automatic Bacteria Counting from Microscope Images

Traditionally (and even today) the number of cells is counted manually. However, there are numerous automatic solutions available. At our department, the software CellC was developed for this task (J. Selinummi, J. Seppälä, O. Yli-Harja, and J. Puhakka, "Software for quantification of labeled bacteria from digital microscope images by automated image analysis," BioTechniques, Vol. 39, No. 6, 2005). The code is freely available online.

39 CellC Operation

The software consists of the following stages:
- Normalization of the background for variations in illumination
- Extraction of cells by thresholding
- Separation of clustered cells by marker-controlled watershed segmentation
- Finally, discarding of too small or too large objects

The output is an Excel file of cell sizes and locations together with a binary image of the segmented cells.

40 Background Correction

Often the illumination is not homogeneous, but is brighter in the center. This can be corrected by fitting a two-dimensional quadratic surface and subtracting the result. Denote the image intensity at $(x_k, y_k)$ by $z_k$. Then the quadratic model for the intensities is $z = H\theta + w$, or
$$\begin{pmatrix} z_1 \\ z_2 \\ \vdots \\ z_N \end{pmatrix} = \begin{pmatrix} x_1^2 & y_1^2 & x_1 y_1 & x_1 & y_1 & 1 \\ x_2^2 & y_2^2 & x_2 y_2 & x_2 & y_2 & 1 \\ \vdots & & & & & \vdots \\ x_N^2 & y_N^2 & x_N y_N & x_N & y_N & 1 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_6 \end{pmatrix} + w$$
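A background-correction sketch of this kind on a synthetic image (the image size, surface shape and noise level are all assumed):

% Quadratic background surface fit and subtraction.
[X, Y] = meshgrid(1:256, 1:256);
bg = 50 - 0.001*(X-128).^2 - 0.001*(Y-128).^2;   % assumed uneven illumination
img = bg + 5*randn(256);                         % "measured" image
xk = X(:); yk = Y(:); z = img(:);
H = [xk.^2, yk.^2, xk.*yk, xk, yk, ones(numel(xk),1)];
c = H \ z;                                       % surface coefficients c_1..c_6
corrected = img - reshape(H*c, size(img));       % difference image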

41 Background Correction

[Figure. Left: blue channel with uneven illumination. Center: fitted quadratic surface. Right: difference image.]

The fitted surface has the form $z(x, y) = c_1 x^2 + c_2 y^2 + c_3 xy + c_4 x + c_5 y + c_6$.

42 Extension: 2D Measurements

In another project we were required to model displacements on a 2D grid (Manninen, T., Pekkanen, V., Rutanen, K., Ruusuvuori, P., Rönkkä, R. and Huttunen, H., "Alignment of individually adapted print patterns for ink jet printed electronics," Journal of Imaging Science and Technology, 54(5), Oct. 2010). The measurement data consisted of 2D vector displacement measurements. In other words, we know that the displacements at points $(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)$ are $(\Delta x_1, \Delta y_1), (\Delta x_2, \Delta y_2), \ldots, (\Delta x_N, \Delta y_N)$.

43 Extension: 2D Measurements

This case was also modeled using a 2nd order polynomial model:
$$\begin{pmatrix} \Delta x_1 & \Delta y_1 \\ \Delta x_2 & \Delta y_2 \\ \vdots & \vdots \\ \Delta x_N & \Delta y_N \end{pmatrix} = \begin{pmatrix} x_1^2 & y_1^2 & x_1 y_1 & x_1 & y_1 & 1 \\ x_2^2 & y_2^2 & x_2 y_2 & x_2 & y_2 & 1 \\ \vdots & & & & & \vdots \\ x_N^2 & y_N^2 & x_N y_N & x_N & y_N & 1 \end{pmatrix} \begin{pmatrix} a_1 & b_1 \\ a_2 & b_2 \\ \vdots & \vdots \\ a_6 & b_6 \end{pmatrix} + w$$
The familiar formula $\hat{\theta} = (H^T H)^{-1} H^T x$ applies also in this case. Note that it would have been equivalent to separate this into two linear models, one for $\Delta x_k$ and another for $\Delta y_k$.
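A sketch of the two-column solve (the grid points and true coefficients are assumptions):

% 2D displacement field fit: both columns estimated in one backslash solve.
N = 400;
xk = rand(N,1)*100; yk = rand(N,1)*100;          % measurement grid (assumed)
H = [xk.^2, yk.^2, xk.*yk, xk, yk, ones(N,1)];
AB = [1e-4, -2e-4; 3e-4, 1e-4; 0, 2e-4; 0.01, 0; 0, -0.02; 0.5, 1];  % [a, b] columns
D = H*AB + 0.05*randn(N,2);                      % measured [dx, dy]
ABhat = H \ D;                                   % equivalent to two separate fits
disp(norm(AB - ABhat));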

44 Results Below is an example of a resulting vector field.

45 Linear Models: Summary

If a linear model ($x = H\theta + w$) can be assumed, the MVUE attaining the CRLB can be found in closed form:
$$\hat{\theta} = (H^T H)^{-1} H^T x.$$
Matlab computes it as theta = H \ x, and Excel as LINEST. We will continue the discussion of this topic in Chapter 8: Least Squares (LS). It turns out that the linear LS estimator has exactly the above formula. The difference is that LS assumes nothing about the distribution, and thus has no guarantees of optimality or unbiasedness. Additionally, LS has numerous extensions, to be discussed later.
