Outline: Motivation; CRLB and Regularity Conditions; CRLLB (A New Bound); Examples; Conclusions


A Survey of Some Recent Results on the CRLB for Parameter Estimation and its Extension

Yaakov Bar-Shalom
Electrical and Computer Engineering Department, University of Connecticut, Storrs, CT

Motivation

Is the Cramér-Rao Lower Bound (CRLB) valid under more relaxed regularity conditions than those generally cited in the literature? Yes. Specifically, the integrability of the first two derivatives of the log-likelihood function (LLF) is not necessary, and it is also not necessary for the support of the likelihood function (LF) to be independent of the parameter to be estimated.

Is there a bound for an LF with parameter-dependent support (i.e., measurement noise whose pdf is nonzero only on a finite support)? Yes: the Cramér-Rao-Leibniz Lower Bound (CRLLB) for likelihood functions with parameter-dependent support.

CRLB and Regularity Conditions

Table 1: Regularity Conditions for the CRLB/CRLLB

(i)   Existence and absolute integrability of the first two derivatives of the LF.
      Comment: commonly cited, but not sufficient for the CRLB (inverted parabola).
(i′)  Existence and absolute integrability of the first two derivatives λ₁(x; z) and λ₂(x; z) of the LLF.
      Comment: not necessary for the CRLB (raised cosine).
(i″)  Existence (finiteness) of the expected values of the square of λ₁(x; z) and of λ₂(x; z) (the FI).
      Comment: combined with (ii′), necessary and sufficient (N & S) for the CRLB.
(ii)  The support of the LF is independent of x.
      Comment: not necessary for the CRLB (raised cosine).
(ii′) Continuity of the LF at the boundary of its support (the pdf of the measurement noise is zero at the boundary of its support).
      Comment: combined with (i″), N & S for the CRLB.
(ii″) Discontinuity of the LF at the boundary of its support.
      Comment: existence of the FI E{λ₁(x; z)²} is N & S for the CRLLB.

Classical CRLB

The classical CRLB states that the variance of an unbiased estimator x̂(z) of a real-valued (non-random) parameter x has the following lower bound:

E[(x̂(z) − x)²] ≥ J(x)⁻¹ = J₁(x)⁻¹ = {E[λ₁(x; z)²]}⁻¹ = J₂(x)⁻¹ = {−E[λ₂(x; z)]}⁻¹   (1)

where the expectations are over z, J denotes the Fisher Information (FI), and

λ₁(x; z) = ∂ ln p(z|x)/∂x   (2)
λ₂(x; z) = ∂² ln p(z|x)/∂x²   (3)

are the first and second derivatives of the LLF.
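As a quick numerical sanity check of (1)-(3) (a sketch, not part of the talk, assuming a Gaussian LF, for which both FI forms equal 1/σ²), the two expectations can be evaluated by quadrature:

```python
# Sketch (assumed example): for a Gaussian LF p(z|x) = N(z; x, sigma^2),
# the two FI forms in (1) coincide: E[lambda_1^2] = -E[lambda_2] = 1/sigma^2.
import numpy as np
from scipy.integrate import quad

x, sigma = 0.0, 1.5

def p(z):                                    # Gaussian LF
    return np.exp(-(z - x)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

lam1 = lambda z: (z - x) / sigma**2          # first LLF derivative (2)
lam2 = lambda z: -1.0 / sigma**2             # second LLF derivative (3), constant here

J1, _ = quad(lambda z: lam1(z)**2 * p(z), -np.inf, np.inf)   # first form of the FI
J2, _ = quad(lambda z: -lam2(z) * p(z), -np.inf, np.inf)     # second form of the FI
print(J1, J2, 1 / sigma**2)                  # all ~0.4444
```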

Regularity Conditions

The generally held regularity conditions for the CRLB to hold are (i′) and (ii) from Table 1.

(i′) [integrability of the LLF derivatives] is too stringent and should be replaced by (i″): existence (finiteness) of the expected values of λ₁² and of λ₂ (the FI).

(ii) [parameter-independent LF support] is also too stringent and should be replaced by (ii′): continuity of the LF at the boundary of its support.

Both cases will be illustrated later through specific examples. Parameter-dependent support of the LF arises when an unknown parameter is observed in the presence of additive measurement noise whose pdf has finite support.

Continuity of the LF Regularity Condition

The CRLB derivation starts by expressing the unbiasedness property of the estimator as

E[x̂(z) − x] = ∫_{g(x)}^{h(x)} (x̂(z) − x) p(z|x) dz = 0   ∀x   (4)

where p(z|x) is the LF and the integral is over the support of the LF. Differentiating the above w.r.t. x with the Leibniz integral rule, necessitated by the dependence of the support on x, yields

D(x) = (d/dx) ∫_{g(x)}^{h(x)} [x̂(z) − x] p(z|x) dz = D_L(x) + D_I(x) = 0   ∀x   (5)

Continuity of the LF Regularity Condition (cont.)

The first term of (5), the extra Leibniz term, is

D_L(x) = D_L1(x) + D_L2(x)   (6)

where

D_L1(x) = (dh(x)/dx) [x̂(z) − x] p(z|x) |_{z=h(x)}   (7)
D_L2(x) = −(dg(x)/dx) [x̂(z) − x] p(z|x) |_{z=g(x)}   (8)

The second term of (5),

D_I(x) = −∫_{g(x)}^{h(x)} p(z|x) dz + ∫_{g(x)}^{h(x)} [x̂(z) − x] (∂p(z|x)/∂x) dz   (9)

comprises the terms resulting from the interchange of the differentiation and integration.

Interchangeability of Differentiation and Integration

If D_L(x) = 0, then D(x) = D_I(x) = 0, and the differentiation w.r.t. x and the integration w.r.t. z in (5) are interchangeable. The classical derivation of the CRLB then follows.

This interchangeability is equivalent to the extra Leibniz terms either being zero or summing to zero. For a general unbiased estimator, they do not sum to zero, since this would require:
- the product of the derivatives of the integration limits and the LF at the limits to be the same (which is possible), and
- the estimate to be the same at the upper and lower limits of z (which is not possible).

The only other possibility for the interchangeability to hold is if the LF is zero at the boundary of its support, i.e., if the LF is continuous (in z) at the boundary of its support.

CRLLB: The New Bound (2014)

If the extra Leibniz terms are not zero, i.e., D_L(x) ≠ 0, then, using (5) and (9), we have

∫_{g(x)}^{h(x)} [x̂(z) − x] (∂p(z|x)/∂x) dz = 1 − D_L(x)   (10)

Following the steps of the classical CRLB derivation yields the CRLLB

E[(x̂(z) − x)²] ≥ [1 − D_L(x)]² / J₁(x)   (11)

where the FI is

J₁(x) = E[λ₁(x; z)²] = E[(∂ ln p(z|x)/∂x)²]   (12)

Note that the two forms of the FI are no longer equal, so the CRLLB is expressed solely in terms of J₁.
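The bound (11)-(12) is easy to evaluate numerically. The helper below is an illustrative sketch (the names are mine, not from the talk) for the common case x̂(z) = z with support [x − a, x + a]; the demo instantiates it with the raised-cosine LF of a later example, whose LF vanishes at the boundary, so D_L = 0 and the bound reduces to the classical 1/J = 1.

```python
# Illustrative CRLLB evaluator for (11)-(12); assumes xhat(z) = z, support [x - a, x + a].
import numpy as np
from scipy.integrate import quad

def crllb(p, lam1, x, a):
    """p(z): LF value p(z|x); lam1(z): d ln p(z|x)/dx; returns [1 - D_L(x)]^2 / J_1(x)."""
    D_L = a * p(x + a) + a * p(x - a)      # Leibniz terms (6)-(8) with h' = g' = 1, xhat = z
    J1, _ = quad(lambda z: lam1(z)**2 * p(z), x - a, x + a)   # FI, first form (12)
    return (1.0 - D_L)**2 / J1

# Demo: raised-cosine LF with a = pi (see the example below); D_L = 0 here.
x, a = 0.0, np.pi
p    = lambda z: (1 + np.cos(z - x)) / (2 * np.pi)
lam1 = lambda z: np.sin(z - x) / (1 + np.cos(z - x))
print(crllb(p, lam1, x, a))                # ~1.0, the classical CRLB
```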

Remarks about the CRLLB

The achievability of this new bound (statistical efficiency of the estimator) is subject to the same collinearity condition as for the classical CRLB, i.e.,

∂ ln p(z|x)/∂x = J₁(x) [x̂(z) − x]   ∀x   (13)

This is equivalent to stating that the LF should belong to the exponential family (a necessary condition).

It should be noted that the CRLLB (11), unlike the CRLB for unbiased estimators, is not independent of the estimator: in view of (6)-(8), the CRLLB depends on the estimator's values at the boundary of the support of the LF. This seems to be unavoidable, similarly to the situation of the CRLB for biased estimators.

The Curious Case of the Inverted-Parabola LF Support

Figure 1: Measurement noise pdf p(w) = (3/(4a³))[a² − w²], a = 2 (limited in magnitude, with a centrality tendency).

The Curious Case of the Inverted-Parabola LF Support

Consider the measurement z = x + w, where the measurement noise has the inverted-parabola pdf with finite-interval support

p(w) = (3/(4a³))[a² − w²]   (14)
w ∈ [−a, +a]   (15)

The LF of x is in this case

p(z|x) = (3/(4a³))[a² − (z − x)²],   z ∈ [x − a, x + a]   (16)

centered at x, i.e., with parameter-dependent support. It is easy to verify that this LF satisfies (i) [integrability of the LF derivatives] and (ii′) [continuity of the LF at the boundary of its support], so one would expect the CRLB to hold.

Inverted-Parabola Example (cont.)

The first derivative of the LLF is

λ₁(x; z) = 2(z − x) / [a² − (z − x)²]   (17)

The first form of the FI is

J₁(x) = ∫_{x−a}^{x+a} {2(z − x)/[a² − (z − x)²]}² (3/(4a³))[a² − (z − x)²] dz   (18)

Rewriting the above, one obtains

J₁(x) = (3/a³) ∫_{−a}^{a} w²/(a² − w²) dw = ∞   (19)

Since this integral is infinite, the CRLB is zero, i.e., meaningless in this case.

Inverted-Parabola Example (cont.)

For the inverted-parabola case, (i) [the derivatives of the LF w.r.t. x are integrable w.r.t. z] holds, but (i″) [finite expected values of λ₁² and λ₂] does not, and there is no meaningful CRLB. The ML estimator in this case is

x̂_ML(z) = z   (20)

and its variance can easily be shown to be

var[x̂_ML(z)] = a²/5   (21)

The above discussion points to the fact that requirement (i) on the LF is not sufficient for the CRLB and should be replaced by (i″), as stated above.
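A numerical illustration (a sketch with assumed sample sizes, not from the talk): the truncated FI integral in (19) keeps growing as the integration limits approach ±a, while a Monte Carlo run confirms var[x̂_ML] = a²/5.

```python
# Sketch: divergence of the FI integral (19) and Monte Carlo check of (21), a = 2.
import numpy as np
from scipy.integrate import quad

a = 2.0
for eps in (1e-2, 1e-4, 1e-6):      # stop the integral short of the boundary
    J, _ = quad(lambda w: (3 / a**3) * w**2 / (a**2 - w**2),
                -a + eps, a - eps, limit=200)
    print(f"eps = {eps:.0e}: truncated FI = {J:.2f}")   # grows like log(1/eps)

rng = np.random.default_rng(0)      # rejection sampling from the inverted-parabola pdf
w = rng.uniform(-a, a, 500_000)
u = rng.uniform(0.0, 3 / (4 * a), 500_000)              # pdf peak is 3/(4a) at w = 0
samples = w[u < (3 / (4 * a**3)) * (a**2 - w**2)]
print(samples.var(), a**2 / 5)                          # both ~0.8
```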

Raised Cosine Likelihood Function

Figure 2: Measurement noise pdf p(w) = (1/(2a))[1 + cos(πw/a)], a = π (limited in magnitude, with a centrality tendency).

Raised Cosine Likelihood Function

In this case, the LF of x is

p(z|x) = (1/(2a))[1 + cos(π(z − x)/a)],   z ∈ [x − a, x + a]   (22)

This model does not satisfy
- (i′) [integrability of the LLF derivatives] (because of ln 0 at the boundary), or
- (ii) [parameter-independent LF support].

This model does satisfy
- (i″) [existence of the expected values of λ₁² and λ₂], and
- (ii′) [continuity of the LF at the boundary of its support].

This model, therefore, should have a valid CRLB. It can easily be shown that, for a single measurement, the ML estimator of x is unbiased and given by

x̂_ML(z) = z   (23)

Raised Cosine Likelihood Function: CRLB

The variance of (23) is (for a = π, for simplicity and with no loss of generality)

var[x̂_ML(z)] = (1/(2π)) ∫_{x−π}^{x+π} (z − x)² [1 + cos(z − x)] dz = (1/(2π))[2π³/3 − 4π] = π²/3 − 2 ≈ 1.2899   (24)

The FI for (22) is given by

J = (1/(2π)) ∫_{x−π}^{x+π} [sin(z − x)/(1 + cos(z − x))]² [1 + cos(z − x)] dz = (1/(2π)) ∫_{−π}^{π} [1 − cos w] dw = 1   (25)

The CRLB is therefore valid:

J⁻¹ = 1 < var[x̂_ML(z)] ≈ 1.2899   (26)
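A quick numerical confirmation of (24)-(26) (a sketch, not from the slides), integrating in the noise variable w = z − x:

```python
# Sketch: var[xhat_ML] = pi^2/3 - 2 ~ 1.2899 exceeds the CRLB 1/J = 1 (raised cosine, a = pi).
import numpy as np
from scipy.integrate import quad

var, _ = quad(lambda w: w**2 * (1 + np.cos(w)) / (2 * np.pi), -np.pi, np.pi)  # (24)
J, _   = quad(lambda w: (1 - np.cos(w)) / (2 * np.pi), -np.pi, np.pi)         # simplified (25)
print(var, np.pi**2 / 3 - 2)   # ~1.2899
print(1 / J, "<=", var)        # CRLB = 1 holds
```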

Uniform LF Centered at the Unknown Parameter

Figure 3: Measurement noise pdf p(w) = 1/(2a), a = 2.

Uniform LF Centered at the Unknown Parameter

Consider the measurement z = x + w with noise uniformly distributed over w ∈ [−a, a], which yields the uniform LF

p(z|x) = 1/(2a)   (27)

with parameter-dependent support

z ∈ [x − a, x + a]   (28)

The unbiased estimator x̂(z) = z has variance

var[x̂(z)] = (2a)²/12 = a²/3   (29)

The naive evaluation of the FI yields zero, so it appears that J = 0, i.e., the lower bound (1) is infinite. The reason for this apparent failure of the CRLB is that the LF (27) violates the continuity requirement (ii′) by being discontinuous at the boundary of its support.
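A minimal Monte Carlo sketch (sample size assumed) of the uniform case: the estimator variance is a²/3, while the LLF is flat in x wherever the LF is positive, so a naive FI evaluation returns zero.

```python
# Sketch: uniform noise, a = 2; xhat(z) = z has variance a^2/3, yet the naive FI is zero.
import numpy as np

a = 2.0
rng = np.random.default_rng(1)
z = rng.uniform(-a, a, 500_000)     # measurements with x = 0, w ~ U[-a, a]
print(z.var(), a**2 / 3)            # both ~1.333
# Inside the support, d ln p(z|x)/dx = d ln(1/(2a))/dx = 0, so E[lambda_1^2] = 0
# and the "bound" 1/J from (1) is infinite: the CRLB machinery fails here.
```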

The Raised Fractional Cosine LF

Figure 4: Measurement noise pdf p(w) = (1/(2a))[1 + β cos(πw/a)], a = π, β = 0.5.

The Raised Fractional Cosine LF

Consider a measurement whose noise follows the raised fractional cosine pdf. The LF is (with 0 < β < 1)

p(z|x) = (1/(2a))[1 + β cos(π(z − x)/a)]   (30)

with support

z ∈ [x − a, x + a]   (31)

Clearly, the LF (30) does not satisfy (ii′) because it is discontinuous at the boundary of its support. Thus the classical CRLB does not hold for this LF.

The unbiased ML estimator in this case is x̂_ML(z) = z. Its variance (setting again a = π for simplicity) is

var[x̂_ML(z)] = π²/3 − 2β   (32)

which yields, e.g., for β = 0.5,

var[x̂_ML(z)] = π²/3 − 1 ≈ 2.2899   (33)

Raised Fractional Cosine: CRLLB

The Leibniz terms (6) are, noting that the derivatives of the integration limits w.r.t. x are unity,

D_L(x) = (z − x) p(z|x)|_{z=x+a} − (z − x) p(z|x)|_{z=x−a} = 1 − β   (34)

The FI in the present case is

J = (1/(2π)) ∫_{−π}^{π} β² sin²w / (1 + β cos w) dw = 1 − √(1 − β²)   (35)

The CRLLB is therefore

E[(x̂(z) − x)²] ≥ [1 − D_L(x)]²/J = β² / [1 − √(1 − β²)]   (36)

For β = 0.5 the CRLLB is ≈ 1.8660, which yields the valid result

var[x̂_ML(z)] ≈ 2.2899 ≥ 1.8660   (37)
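Numerical check of (32) and (34)-(37) for β = 0.5 and a = π (a sketch, not from the slides):

```python
# Sketch: Leibniz term (34), FI (35), and CRLLB (36) vs. the ML variance (32), beta = 0.5.
import numpy as np
from scipy.integrate import quad

beta = 0.5
p = lambda w: (1 + beta * np.cos(w)) / (2 * np.pi)         # LF in w = z - x, a = pi
D_L = np.pi * p(np.pi) + np.pi * p(-np.pi)                 # (34): equals 1 - beta
J, _ = quad(lambda w: (beta * np.sin(w))**2 / (1 + beta * np.cos(w)) / (2 * np.pi),
            -np.pi, np.pi)                                 # (35): 1 - sqrt(1 - beta^2)
var, _ = quad(lambda w: w**2 * p(w), -np.pi, np.pi)        # (32): pi^2/3 - 2*beta
print(D_L, 1 - beta)                                       # 0.5  0.5
print((1 - D_L)**2 / J, "<=", var)                         # CRLLB ~1.866 <= var ~2.290
```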

The Truncated Gaussian LF

Figure 5: Measurement noise pdf p(w) = (1/c) e^{−w²/(2σ²)}, σ = 1, a = 2.

The Truncated Gaussian LF

Consider a measurement with errors from a truncated Gaussian pdf. The LF is

p(z|x) = (1/c) e^{−(z−x)²/(2σ²)},   z ∈ [x − a, x + a]   (38)

where the normalizing constant is

c = √(2π) σ [Φ(a/σ) − Φ(−a/σ)]   (39)

and Φ(·) is the standard Gaussian cdf. The unbiased ML estimator for a single measurement is x̂_ML(z) = z, and its variance is

E{(z − x)²} = (1/c) ∫_{x−a}^{x+a} (z − x)² e^{−(z−x)²/(2σ²)} dz = σ² [1 − (2a/c) e^{−a²/(2σ²)}]   (40)

Truncated Gaussian: CRLLB

The FI in this case is

J = E{(z − x)²}/σ⁴ = (1/σ²) [1 − (2a/c) e^{−a²/(2σ²)}]   (41)

The Leibniz term is

D_L(x) = (z − x) p(z|x)|_{z=x+a} − (z − x) p(z|x)|_{z=x−a} = (2a/c) e^{−a²/(2σ²)}   (42)

Using (42), the variance of the ML estimator can be written as

var[x̂_ML(z)] = E{(z − x)²} = (1 − D_L) σ²   (43)

Truncated Gaussian: CRLLB (cont.)

The standard CRLB is invalid, being larger than var[x̂_ML(z)]:

CRLB: var[x̂(z)] ≥ σ²/(1 − D_L) > var[x̂_ML(z)] = (1 − D_L) σ²   (44)

The CRLLB is met exactly:

CRLLB: var[x̂(z)] ≥ (1 − D_L) σ² = var[x̂_ML(z)]   (45)

The variance (43) of the estimator equals the CRLLB; therefore, this estimator is statistically efficient. It should also be noted that comparison to the CRLB would suggest an apparent super-efficiency of the ML estimator for this finite-support LF; this is due to the CRLB being invalid in this case. When compared to the CRLLB (which is lower than the CRLB in this case), the ML estimator is found to be efficient (because the LF is of the exponential type).
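The exact-efficiency claim in (43)-(45) is easy to verify numerically (a sketch assuming σ = 1, a = 2, using scipy's standard normal cdf):

```python
# Sketch: truncated Gaussian, sigma = 1, a = 2; the ML variance equals the CRLLB (45),
# while the classical CRLB sigma^2/(1 - D_L) in (44) lies above it and is invalid.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

sigma, a = 1.0, 2.0
c = np.sqrt(2 * np.pi) * sigma * (norm.cdf(a / sigma) - norm.cdf(-a / sigma))  # (39)
D_L = (2 * a / c) * np.exp(-a**2 / (2 * sigma**2))                             # (42)
var, _ = quad(lambda w: w**2 * np.exp(-w**2 / (2 * sigma**2)) / c, -a, a)      # (40)
print(var, (1 - D_L) * sigma**2)        # equal (~0.774): the CRLLB is met exactly
print(sigma**2 / (1 - D_L), ">", var)   # the CRLB (~1.29) exceeds the actual variance
```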

Uniform LF as a Limit of the Truncated Gaussian

Figure 6: Truncated Gaussian pdf for a = 2 and increasing σ (σ = 1, σ = 3, ...).

The Uniform LF as a Limit of the Truncated Gaussian

The truncated Gaussian pdf discussed in the previous example approaches the uniform pdf in the limit. The normalizing constant can be written as

c ≈ √(2π) σ [2a/(√(2π) σ) + HOT₃]   (46)

where HOT₃ denotes the higher-order terms (of order σ⁻³) of the Taylor series expansion. The limiting form of the normalizing constant is (since the HOT₃ contribution vanishes as σ → ∞)

lim_{σ→∞} c = lim_{σ→∞} √(2π) σ · 2a/(√(2π) σ) = 2a   (47)

and the pdf therefore becomes

lim_{σ→∞} p(z|x) = 1/(2a)   (48)

The Uniform CRLLB as a Limit of the Truncated Gaussian

The CRLLB for the uniform LF should then follow as the limit of (45) with (42):

var{x̂(z)} ≥ lim_{σ→∞} (1 − D_L) σ² = lim_{σ→∞} σ² [1 − (2a/c) e^{−a²/(2σ²)}] = (2a)²/12 = a²/3 = E{(z − x)²}   (49)

This resulting CRLLB matches the variance (29) of the ML estimator for the uniform pdf, demonstrating that the ML estimator is indeed efficient in this case.
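A numerical look at the limit (49) (a sketch with assumed σ values): the truncated-Gaussian CRLLB (1 − D_L)σ² indeed approaches a²/3 as σ grows.

```python
# Sketch: (1 - D_L) sigma^2 -> a^2/3 = 1.333... as sigma -> infinity, a = 2.
import numpy as np
from scipy.stats import norm

a = 2.0
for sigma in (1.0, 3.0, 10.0, 100.0):
    c = np.sqrt(2 * np.pi) * sigma * (norm.cdf(a / sigma) - norm.cdf(-a / sigma))
    D_L = (2 * a / c) * np.exp(-a**2 / (2 * sigma**2))
    print(f"sigma = {sigma:6.1f}: (1 - D_L) sigma^2 = {(1 - D_L) * sigma**2:.4f}")
```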

Conclusions

In summary, the classical CRLB holds for LFs with parameter-dependent support under the following conditions:

(i″) existence (finiteness) of the expected value of the square of the first derivative of the log-LF and of the expected value of the second derivative of the log-LF [the derivative has to exist only a.e., e.g., Laplace LF], and
(ii′) continuity of the LF at the boundary of its support.

For LFs that satisfy (i″) but not (ii′), the recently developed CRLLB provides the bound. The CRLLB solved the longstanding problem of the inapplicability of the CRLB to the uniformly distributed measurement noise case and shattered the myth of super-efficiency (for z ∼ U[0, x]; IEEE TAES, July 2014).

Extension to vector parameters: the Leibniz term is an integral, over the boundary of the LF support, of an interior product.
