Estimation Theory Fredrik Rusek. Chapters 6-7


Summary: a Venn-style map of all estimation problems, with regions for: MVU estimator exists; efficient estimator exists; CRLB exists; and the known region of a linear signal model in Gaussian noise. Today we will place the following piece on the map: a linear signal in non-Gaussian noise. Chapter 6 deals with non-Gaussian noise, although the book does not point this out very clearly.

Chapter 6: Best linear unbiased estimators (BLUE)

Definition of the BLUE: an estimator that is linear in the received data, θ̂ = Σ_n a_n x[n]. Many linear estimators are possible; the BLUE is the one that is unbiased and has the smallest possible variance over all choices of {a_n}. It is optimal when the MVU estimator is linear in the data, and suboptimal otherwise.

Examples: For a DC level in white noise, the MVU estimator (the sample mean) is linear, so MVU = BLUE. For the German tank problem, the MVU estimator is not linear, and the BLUE (this can be shown) differs from it: BLUE ≠ MVU. The MVU is often not linear, hence the name best linear. The classical formulas are reconstructed below.
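For concreteness, a reconstruction of the classical German tank formulas (the slide's expressions did not survive the transcription; assume N serial numbers observed out of 1, ..., θ):

$$\hat\theta_{MVU} = \Big(1 + \frac{1}{N}\Big)\max_n x[n] - 1, \qquad \hat\theta_{lin} = \frac{2}{N}\sum_n x[n] - 1.$$

The MVU estimator is a non-linear function of the data (it uses the maximum), while the unbiased linear alternative rescales the sample mean and has larger variance.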

Sometimes a linear estimator is very bad. Noise power estimation (x[n] = w[n]): the MVU estimator is based on the squared data, while a linear estimator θ̂ = Σ_n a_n x[n] has mean zero for zero-mean noise and cannot estimate the power at all. BLUE + cleverness: transform the data as y[n] = x²[n] and apply the BLUE to y[n] (a numerical sketch follows).
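A minimal numerical sketch of this trick, assuming white Gaussian noise (all names are illustrative, not from the slides):

```python
import numpy as np

# A linear estimator in x[n] has zero mean for zero-mean noise and cannot
# estimate the power; the BLUE applied to y[n] = x[n]^2 can, since
# E[y[n]] = sigma2. (Assumption: white Gaussian noise.)
rng = np.random.default_rng(0)
N, sigma2 = 1000, 2.0
x = rng.normal(0.0, np.sqrt(sigma2), N)   # x[n] = w[n]

y = x**2                     # transformed data, E[y[n]] = sigma2
sigma2_hat = y.mean()        # BLUE of sigma2 from y: equal weights, since
                             # the y[n] are i.i.d. with equal variances
print(sigma2_hat)            # close to 2.0
```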

Finding the BLUE (scalar parameter). Use vector notation and write the estimator as θ̂ = a^T x.

Constraint (unbiasedness): E(θ̂) = Σ_n a_n E(x[n]) = θ. To satisfy this for all θ we must have E(x[n]) = s[n]θ for a known sequence s[n], i.e. E(x) = sθ. The constraint then becomes a^T s = 1.

Now write x = sθ + w and define w = x − sθ, where w[n] is not Gaussian, but zero-mean. This is the linear model, but with non-Gaussian noise.

Variance: var(θ̂) = E[(a^T x − a^T E(x))²] = a^T C a, where C is the covariance matrix of the data.

Optimization problem: minimize a^T C a subject to a^T s = 1.

Lagrangian: J(a) = a^T C a + λ(a^T s − 1). Setting the gradient to zero gives 2Ca + λs = 0, so a = −(λ/2) C^(−1) s. To find λ, use the constraint function a^T s = 1 with this a: −(λ/2) s^T C^(−1) s = 1. Plug back:

a_opt = C^(−1) s / (s^T C^(−1) s), so the BLUE is θ̂ = s^T C^(−1) x / (s^T C^(−1) s).

Variance: var(θ̂) = a_opt^T C a_opt = 1 / (s^T C^(−1) s).

To compute the BLUE, we need to: know that E(x) = sθ; know s; know C. Connections to the MVU estimator for the linear model will soon be made.
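As a numerical sanity check of these three ingredients, a minimal sketch (assumed model x = sθ + w with known s and C; names are illustrative):

```python
import numpy as np

# Scalar BLUE sketch: theta_hat = s^T C^-1 x / (s^T C^-1 s), with
# variance 1 / (s^T C^-1 s). Assumed model: x = s*theta + w, w zero-mean
# with known covariance C (not necessarily Gaussian).
rng = np.random.default_rng(1)
N, theta = 100, 2.0
s = np.ones(N)                              # e.g. DC level: s[n] = 1
var_n = 1.0 + rng.random(N)                 # per-sample noise variances
C = np.diag(var_n)                          # known noise covariance
w = rng.uniform(-1.0, 1.0, N) * np.sqrt(3.0 * var_n)  # uniform noise matching C
x = s * theta + w

Ci_s = np.linalg.solve(C, s)                # C^-1 s
theta_hat = (Ci_s @ x) / (Ci_s @ s)
print(theta_hat, 1.0 / (Ci_s @ s))          # estimate and its variance
```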

BLUE for a vector parameter: θ̂ = Ax for a matrix A. Constraint (unbiasedness): E(θ̂) = A E(x) = θ for all θ, which with E(x) = Hθ becomes AH = I. Cost function to optimize: the variance of each component, var(θ̂_i) = a_i^T C a_i, subject to the constraint. Optimal solution (the Gauss-Markov theorem):

θ̂ = (H^T C^(−1) H)^(−1) H^T C^(−1) x, with covariance C_θ̂ = (H^T C^(−1) H)^(−1).

Summary: For a model y = Hθ + n, a linear estimator has the same form no matter the distribution of the noise (for a given noise covariance C). If n is Gaussian, this linear estimator is MVU. If n is not Gaussian, the BLUE is the best linear estimator, but not MVU. The performance of the BLUE in non-Gaussian noise is identical to its performance in Gaussian noise (same C).
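A minimal sketch of the vector BLUE illustrating this summary, with deliberately non-Gaussian (Laplacian) noise; the model and names are assumptions for the demo:

```python
import numpy as np

# Vector BLUE / Gauss-Markov sketch: theta_hat = (H^T C^-1 H)^-1 H^T C^-1 y.
# Assumed model: y = H theta + n with zero-mean Laplacian (non-Gaussian)
# noise of known covariance C.
rng = np.random.default_rng(2)
N = 200
H = np.column_stack([np.ones(N), np.arange(N) / N])  # assumed observation matrix
theta = np.array([1.0, -0.5])
n = rng.laplace(0.0, 1.0, N)       # Laplace(b=1): zero mean, variance 2b^2 = 2
C = 2.0 * np.eye(N)                # known noise covariance
y = H @ theta + n

Ci_H = np.linalg.solve(C, H)       # C^-1 H
theta_hat = np.linalg.solve(H.T @ Ci_H, Ci_H.T @ y)
cov_blue = np.linalg.inv(H.T @ Ci_H)  # (H^T C^-1 H)^-1, same as in Gaussian noise
print(theta_hat, np.diag(cov_blue))
```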

Compare an estimation problem with Gaussian noise to the same problem with non-Gaussian noise of the same covariance. The variance of the linear estimator is the same in both cases, but the estimator is optimal only in the Gaussian case; in the non-Gaussian case a (generally nonlinear) estimator can achieve a smaller variance. Therefore, Gaussian noise is the worst noise possible. Alternative proofs: 1. based on calculus of variations; 2. based on the link between Fisher information and the Kullback-Leibler divergence (information theory).
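One way to make the "worst noise" statement precise (a standard information-theoretic argument; the slides only hint at it): for estimating a location parameter θ in i.i.d. noise with density p and variance σ²,

$$I(\theta) = N \int_{-\infty}^{\infty} \frac{\big(p'(w)\big)^2}{p(w)}\, dw \;\ge\; \frac{N}{\sigma^2},$$

with equality if and only if p is Gaussian. The CRLB 1/I(θ) is therefore largest, σ²/N, exactly in the Gaussian case.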

Back to the summary map: where should we put "linear signal in non-Gaussian noise"? We do not have explicit information, so question marks appear all over the map. With uniform noise, the CRLB doesn't exist; for Laplacian noise it does. No efficient estimator seems to exist (see my "Italian proof"). Perhaps the MVU estimator does not always exist either.

Chapter 7: Maximum likelihood

The MLE is defined as θ̂_ML = arg max_θ p(x; θ).

Theorem: if an efficient estimator exists, then it is given by the MLE. Proof: reconstructed below.
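In symbols (a reconstruction of the slide's proof, using the CRLB equality condition): an efficient estimator exists if and only if the score factors as

$$\frac{\partial \ln p(\mathbf{x};\theta)}{\partial \theta} = I(\theta)\big(g(\mathbf{x}) - \theta\big),$$

in which case θ̂ = g(x) is the efficient estimator. The MLE sets the score to zero, so 0 = I(θ̂_ML)(g(x) − θ̂_ML), giving θ̂_ML = g(x): the efficient estimator is the MLE.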

Example 7.3: DC level in white Gaussian noise whose variance equals the DC level, x[n] = A + w[n] with w[n] ~ N(0, A), A > 0. Find the MLE.

Checkpoint 1: try the CRLB. The score cannot be put in the form I(A)(g(x) − A), so no efficient estimator drops out.

Checkpoint 2: try Neyman-Fisher factorization. This yields a sufficient statistic T(x); if T(x) is complete, then an unbiased function of T(x) is MVU, but it is not clear what function to choose.

No more strategies to find the MVU, so proceed to the MLE.
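For reference, the resulting MLE (a reconstruction matching the standard solution of this example; the slide's algebra did not survive): with $p(\mathbf{x};A) = (2\pi A)^{-N/2}\exp\!\big(-\tfrac{1}{2A}\sum_n (x[n]-A)^2\big)$, setting $\partial \ln p/\partial A = 0$ reduces to $A^2 + A - \tfrac{1}{N}\sum_n x^2[n] = 0$, so

$$\hat A = -\frac{1}{2} + \sqrt{\frac{1}{N}\sum_{n=0}^{N-1} x^2[n] + \frac{1}{4}}.$$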

Possible to show: as N → ∞, the MLE becomes unbiased, E(Â) → A, and its variance approaches the CRLB: the MLE is asymptotically efficient.

We saw this notation before, when we claimed that as N grows, we can estimate a transformation of a variable as the transformation of the estimate.

By the CLT, the sample average (1/N) Σ x²[n] is Gaussian as N grows. Asymptotically, one can show that the MLE is a linear transformation of this Gaussian variable; thus the estimator is Gaussian distributed. The linearity is not easy to see, but possible. And since the variance of the estimator is (asymptotically) given by the CRLB, Â is asymptotically N(A, CRLB).
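A Monte Carlo sketch of this asymptotic efficiency, under the reconstructed model of Example 7.3 (the CRLB A²/(N(A + 1/2)) follows from the Fisher information of that model):

```python
import numpy as np

# Assumed model: x[n] = A + w[n], w[n] ~ N(0, A).
# MLE: A_hat = -1/2 + sqrt(1/4 + mean(x^2)); CRLB = A^2 / (N (A + 1/2)).
rng = np.random.default_rng(3)
A, trials = 3.0, 5000
for N in (10, 100, 1000):
    x = A + rng.normal(0.0, np.sqrt(A), (trials, N))
    A_hat = -0.5 + np.sqrt(0.25 + (x**2).mean(axis=1))
    crlb = A**2 / (N * (A + 0.5))
    print(N, A_hat.var() / crlb)   # ratio -> 1 as N grows
```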

Effect of N: (figures illustrating the distribution of the MLE as N grows; not recoverable from the transcription).

Proof of the mean (consistency): let θ0 be the true value of θ. By the CLT (the law of large numbers is in fact enough), the normalized log-likelihood (1/N) Σ_n ln p(x[n]; θ) converges to its expectation under θ0, and this expectation is maximized for θ = θ0 (see the reconstruction below). As N grows, the MLE is therefore θ0.
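The reconstructed steps (the slide's formulas were lost in transcription):

$$\frac{1}{N}\sum_{n=0}^{N-1} \ln p(x[n];\theta) \;\longrightarrow\; E_{\theta_0}\big[\ln p(x[n];\theta)\big],$$

$$E_{\theta_0}\big[\ln p(x[n];\theta)\big] - E_{\theta_0}\big[\ln p(x[n];\theta_0)\big] = -D\big(p_{\theta_0}\,\|\,p_{\theta}\big) \;\le\; 0,$$

so the limiting objective is maximized at θ = θ0, and its maximizer (the MLE) converges there.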

Neyman-Fisher factorization and the MLE: if p(x; θ) = g(T(x), θ) h(x), then to find the MLE it is sufficient to optimize the function g(T(x), θ) over θ. Hence the MLE can be computed on the basis of the sufficient statistic(s) only.

Transformation of parameters. Let us tie things together.

The MLE of a transformed parameter is the transformed MLE of the parameter (the invariance property). The MLE is efficient if an efficient estimator exists.

From before: a non-linear transformation of an efficient estimate does not preserve efficiency (θ̂ efficient, g(θ̂) not efficient). That argument did not say that an efficient estimator for α = g(θ) cannot exist.

We can now deduce: a non-linear function of a parameter that can be efficiently estimated cannot itself be efficiently estimated. If an efficient estimator of α existed, it would have to be the MLE, which by invariance is the transformed MLE g(θ̂), and that estimator is not efficient.

Due to the previous linearization argument, the transformed estimate is asymptotically efficient. This could also have been realized from the observation that transformed estimate = MLE = asymptotically efficient.
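The linearization argument in symbols (reconstructed): for α = g(θ) and an asymptotically efficient θ̂,

$$g(\hat\theta) \approx g(\theta_0) + g'(\theta_0)\big(\hat\theta - \theta_0\big) \;\Rightarrow\; \mathrm{var}\big(g(\hat\theta)\big) \approx \frac{\big(g'(\theta_0)\big)^2}{I(\theta_0)},$$

which is exactly the CRLB for α obtained by the transformation-of-parameters rule.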

What if the transformation is not one-to-one? Consider a case where, for some value of α, there are two values of A that produce that α. The likelihood of α is then the larger of the two likelihoods, and the MLE of α maximizes this modified likelihood.
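Formally, for a non-invertible α = g(θ) one maximizes the modified likelihood (a standard construction):

$$\bar p_T(\mathbf{x};\alpha) = \max_{\theta:\, g(\theta) = \alpha} p(\mathbf{x};\theta), \qquad \hat\alpha = \arg\max_\alpha \bar p_T(\mathbf{x};\alpha) = g(\hat\theta_{ML}).$$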

Example: estimating a noise power in dB. The MLE on the linear scale is efficient (Var = 2σ⁴/N = CRLB). The MLE in dB follows due to invariance: transform the linear-scale MLE to dB.

Is the dB estimator efficient? No! Asymptotically: yes! Is it meaningful to compare with the CRLB? No, it is not even unbiased, but it meets the CRLB asymptotically.

Let us anyway check the CRLB for the dB case. Measured variance (Matlab):
N = 10: 4.17 (Var = 2.21 x CRLB)
N = 50: 0.77 (Var = 2.04 x CRLB)
N = 150: 0.25 (Var = 2 x CRLB)
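A Monte Carlo sketch of this experiment (reconstructed assumptions: x[n] ~ N(0, σ²) i.i.d., linear-scale MLE = mean of x²[n]; under these assumptions the simulated variances land close to the slide's measured values):

```python
import numpy as np

# Assumed model: x[n] ~ N(0, sigma2) i.i.d.; the dB MLE follows by invariance.
rng = np.random.default_rng(4)
sigma2, trials = 1.0, 50000
for N in (10, 50, 150):
    x = rng.normal(0.0, np.sqrt(sigma2), (trials, N))
    s2_mle = (x**2).mean(axis=1)           # efficient MLE, var = 2 sigma2^2 / N
    alpha_mle = 10.0 * np.log10(s2_mle)    # dB MLE, by invariance
    print(N, alpha_mle.var())              # compare with the slide's measured
                                           # variances (4.17, 0.77, 0.25)
```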

Extension to vector parameter: straightforward.

MLE for the linear model in Gaussian noise, with no derivations: how should we argue in order to establish the MLE? In this case a linear estimator was optimal (Chapter 4). This linear estimator was also efficient. An efficient estimator is always the MLE. Hence the MLE is the linear estimator from Chapter 4.
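Written out (the standard result for the linear model, consistent with Chapter 4):

$$\hat{\boldsymbol\theta}_{ML} = \big(\mathbf{H}^T \mathbf{C}^{-1} \mathbf{H}\big)^{-1} \mathbf{H}^T \mathbf{C}^{-1} \mathbf{x}, \qquad \hat{\boldsymbol\theta}_{ML} \sim \mathcal{N}\big(\boldsymbol\theta,\, (\mathbf{H}^T \mathbf{C}^{-1} \mathbf{H})^{-1}\big).$$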

Asymptotic MLE: some intuition. For a stationary Gaussian process, the log-likelihood asymptotically takes a frequency-domain form (reconstructed below). This appears strange, but is not if one knows one's linear algebra plus Szegö's theorem.

The covariance matrix C is Toeplitz; the elements of C are autocorrelation values of the process (of course). An asymptotically Toeplitz matrix admits an eigenvalue factorization C = QSQ* with Q the DFT matrix. The PSD is the Fourier transform of the autocorrelation sequence (e.g. from the Proakis course). Not exactly Szegö's theorem, but a consequence thereof: asymptotically, and very loosely speaking, the eigenvalues of C, i.e. S, converge to the Fourier transform of the autocorrelation sequence.

Consider now log det(C): det(C) is the product of the eigenvalues, so log det(C) is the sum of the logarithms of the eigenvalues. But eigenvalues = Fourier transform of the autocorrelation = PSD.

Consider now −x^T C^(−1) x: x^T C^(−1) x = x^T Q S^(−1) Q* x; Q*x is the Fourier transform of x; S^(−1) is one divided by the PSD; and x^T Q together with Q*x forms the periodogram of x.
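The asymptotic form referred to here is (reconstructed; this is the standard asymptotic, or Whittle, log-likelihood for a stationary Gaussian process with PSD P_xx(f; θ)):

$$\ln p(\mathbf{x};\boldsymbol\theta) \approx -\frac{N}{2}\ln 2\pi \;-\; \frac{N}{2}\int_{-1/2}^{1/2}\left[\ln P_{xx}(f;\boldsymbol\theta) + \frac{I(f)}{P_{xx}(f;\boldsymbol\theta)}\right] df,$$

where I(f) = (1/N) |Σ_n x[n] e^(−j2πfn)|² is the periodogram. The log det(C) term becomes the ∫ ln P_xx term and the quadratic form becomes the periodogram term, exactly the two observations above.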
