Maximum Likelihood Diffusive Source Localization Based on Binary Observations


Yoav Levinbook and Tan F. Wong
Wireless Information Networking Group, University of Florida, Gainesville, Florida, USA

This work was supported in part by the Office of Naval Research and the National Science Foundation.

Abstract. In this paper, we construct the maximum likelihood (ML) estimator of a diffusive source location based on binary observations. We pursue two different estimation approaches: ML estimation based on all the observations (i.e., batch processing) and approximated ML estimation using only new observations and the previous estimate (i.e., real-time processing). The performance of these estimators is compared with theoretical bounds, and both are shown to achieve excellent performance.

I. Introduction

We investigate the problem of estimating the location of an instantaneous source of an arbitrary gas using a simple chemical sensor network. We assume that the gas spreads by diffusion [1] and that the sensors make binary decisions on the presence of the gas by measuring its concentration in their immediate vicinity. The sensor outputs are periodically relayed to a fusion center that uses this information to estimate the location of the source. It turns out from the analysis of the Cramer-Rao bound (CRB) [2] that if we take the physical transport mechanism of the gas into account, we may achieve excellent performance in estimating the diffusive source location despite the observations being binary. The only condition for this to hold is that the measurements prior to thresholding at the sensors are noisy. We construct the estimators assuming that none of the diffusion parameters are known, and we therefore perform joint estimation of the source location, time of release, mass of the gas, diffusion coefficient, and sensor noise variance. We show that these estimators indeed achieve excellent performance.

The rest of the paper is organized as follows. In Section II, we describe the system model and discuss the assumptions involved. The ML estimator and its calculation using Fisher's method of scoring are derived in Section III. A discussion of the performance of real-time estimation in general, and the derivation of a real-time approximated ML estimator (RAML) for the problem at hand, are given in Section IV. The performance of the estimators is presented in Section V. Conclusions are given in Section VI.

II. System Model

We consider a sensor network where $L$ sensors are located on the real line at positions $id$, where $i$ is an integer such that $l_i \le i \le l_f$, $L = l_f - l_i + 1$, and $d$ is the distance between consecutive sensors. Starting at time $t = 0$, each sensor measures the atmospheric concentration of the gas in its vicinity. We assume that the sensors sample the atmosphere every $T_s$ seconds in a synchronized manner. A sensor outputs a 1 if the concentration exceeds a threshold $\gamma$ and outputs a 0 otherwise. This thresholding, which is obviously information-lossy, is required by the need for simple, cheap sensors or by communication constraints. The outputs of all the sensors are available at the fusion center for processing. We assume that there is an error in the concentration measurement prior to thresholding, and we model this error over time and space as white Gaussian noise with zero mean and unknown variance $\sigma^2$. This noise accounts for the measurement error of the chemical sensors and the thermal noise generated in the electronic components of the sensors.
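As a concrete illustration of this measurement model, here is a minimal simulation sketch, assuming Python with NumPy; names such as `binary_snapshot` and the numerical values are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_snapshot(positions, mean_conc, sigma, gamma):
    """One synchronized sampling instant: every sensor adds white Gaussian
    measurement noise to the local mean concentration, thresholds at gamma,
    and reports a single bit."""
    noisy = mean_conc(positions) + sigma * rng.standard_normal(positions.shape)
    return (noisy > gamma).astype(np.uint8)

# Sensors at positions i*d for l_i <= i <= l_f (illustrative values, in cm).
d, l_i, l_f = 10.0, -10, 10          # 21 sensors
positions = d * np.arange(l_i, l_f + 1)
bits = binary_snapshot(positions, lambda x: np.zeros_like(x), 0.01, 1e-2)
```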
We suppose that an instantaneous source of gas of mass $M$ is released at some point $x_0$ on the real line at time $t_0 > 0$. We assume that the diffusion coefficient $D$ of the gas is unknown. Our goal is to estimate $\theta = [x_0, t_0, M, D, \sigma]^T$ based on the binary observations available at the fusion center. We consider an idealized situation in which the transport mechanism of the gas is diffusion. Subsequent to the release of the gas, the mean concentration of the gas at a point $x$ is given by $m(x - x_0, t - t_0, D, M)$, where

$$m(x, t, D, M) = \frac{M}{2\sqrt{\pi D t}}\, e^{-x^2/(4Dt)}. \tag{1}$$

The measured concentration prior to thresholding, $R(x, t)$, is normally distributed with mean $m(x - x_0, t - t_0, D, M)$ and variance $\sigma^2$. We assume that $\theta$ lies in some set $\Lambda$ such that the mean concentration $m(x - x_0, t - t_0, D, M)$ crosses the threshold $\gamma$ in at least two sensors during the observation interval.

III. The Maximum Likelihood Estimator

Let $y_{i,n} \in \{0, 1\}$ be the output of the sensor at position $id$ at sample time $nT_s$. The likelihood function for a single sensor as a function of the parameters is

$$p(y_{i,n}; \theta) = (1 - y_{i,n})\bigl(1 - Q(\Gamma_{i,n}(\theta))\bigr) + y_{i,n}\, Q(\Gamma_{i,n}(\theta)), \tag{2}$$

where

$$\Gamma_{i,n}(\theta) = \frac{\gamma - \eta_{i,n}}{\sigma}, \tag{3}$$

$$\eta_{i,n} = m(id - x_0,\, nT_s - t_0,\, D,\, M)\, u(nT_s - t_0), \tag{4}$$

and $u(\cdot)$ is the step function. Since we assume the noise is white Gaussian, the concentrations $R(id, nT_s)$ and $R(jd, mT_s)$ are independent unless $i = j$ and $n = m$.
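The pieces of this likelihood translate directly into code. A sketch under the same model (hypothetical function names; the Q-function is obtained from the complementary error function):

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * erfc(x / np.sqrt(2.0))

def mean_concentration(x, t, x0, t0, D, M):
    """Eq. (1), shifted to the release point and time: zero before t0."""
    tau = np.maximum(np.asarray(t, dtype=float) - t0, 1e-12)
    m = M / (2.0 * np.sqrt(np.pi * D * tau)) * np.exp(-(x - x0) ** 2 / (4.0 * D * tau))
    return np.where(np.asarray(t) > t0, m, 0.0)

def bit_likelihood(y, x, t, theta, gamma):
    """Eqs. (2)-(4): probability of the observed bit y at position x, time t."""
    x0, t0, M, D, sigma = theta
    Gamma = (gamma - mean_concentration(x, t, x0, t0, D, M)) / sigma
    return np.where(y == 1, Q(Gamma), 1.0 - Q(Gamma))
```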

The log-likelihood function of all the sensor outputs from time $T_1$ up to time $T_2$ is then given by

$$f_{T_1}^{T_2}(\theta) = \log p\bigl(\mathbf{y}_{T_1}^{T_2}; \theta\bigr) = \sum_{n=T_1}^{T_2} \sum_{i=l_i}^{l_f} \log p(y_{i,n}; \theta), \tag{5}$$

where $\mathbf{y}_{T_1}^{T_2} = [\mathbf{y}_{T_1}, \ldots, \mathbf{y}_{T_2}]$ and $\mathbf{y}_n = [y_{l_i,n}, \ldots, y_{l_f,n}]$. The ML estimator based on all the sensor outputs up to time $T$ is then simply

$$\hat{\theta}_{ML}(\mathbf{y}_0^T) = \arg\max_{\theta \in \Lambda} f_0^T(\theta). \tag{6}$$

It turns out that for certain observation vectors $\mathbf{y}_0^T$, $\hat{\theta}_{ML}$ is not unique, i.e., the log-likelihood has multiple global maxima (e.g., the all-zeros sequence and the all-ones sequence obviously do not give unique solutions). In general, for $\theta \in \Lambda$, the probability of those sequences with non-unique solutions is usually very small but not zero. Furthermore, the log-likelihood has in general multiple stationary points. These stationary points can be found from the gradient with respect to $\theta$. The gradient of (5) is given by

$$\nabla f_{T_1}^{T_2}(\theta) = \sum_{n=T_1}^{T_2} \sum_{i=l_i}^{l_f} \frac{(2y_{i,n} - 1)\,\alpha_{i,n}(\theta)\,\beta_{i,n}(\theta)}{p(y_{i,n}; \theta)}, \tag{7}$$

where

$$\alpha_{i,n}(\theta) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{\Gamma_{i,n}(\theta)^2}{2}\right) \tag{8}$$

and

$$\beta_{i,n}(\theta) = -\sigma\, \nabla_\theta\, \Gamma_{i,n}(\theta). \tag{9}$$

In particular,

$$\nabla f_0^T(\theta) = \sum_{n=0}^{T} \sum_{i=l_i}^{l_f} \frac{(2y_{i,n} - 1)\,\alpha_{i,n}(\theta)\,\beta_{i,n}(\theta)}{p(y_{i,n}; \theta)}. \tag{10}$$

The solutions of (6) form a subset of the set of roots of (10). Unfortunately, the equation $\nabla f_0^T(\theta) = 0$ is a complicated nonlinear equation with no closed-form solution. The problem of finding zeros of a specified function is well known in numerical analysis, and among the various techniques is Fisher's method of scoring (FS). Applying this technique, the update equation is

$$\hat{\theta}^{(n+1)} = \hat{\theta}^{(n)} + \mu\, I_0^T\bigl(\hat{\theta}^{(n)}\bigr)^{-1} \nabla f_0^T\bigl(\hat{\theta}^{(n)}\bigr), \tag{11}$$

where $I_{T_1}^{T_2}(\theta)$ is the Fisher information matrix (FIM) based on the observations $\mathbf{y}_{T_1}^{T_2}$. The FIM is given by

$$I_{T_1}^{T_2}(\theta) = E\bigl[\nabla f_{T_1}^{T_2}(\theta)\, \nabla f_{T_1}^{T_2}(\theta)^T\bigr] = \sum_{n=T_1}^{T_2} \sum_{i=l_i}^{l_f} \frac{\alpha_{i,n}(\theta)^2\, \beta_{i,n}(\theta)\, \beta_{i,n}(\theta)^T}{Q(\Gamma_{i,n})\bigl(1 - Q(\Gamma_{i,n})\bigr)}. \tag{12}$$

The estimator derived in this section estimates $\theta$ from $\mathbf{y}_0^T$ by performing several FS iterations. In each iteration, the FIM $I_0^T(\hat{\theta}^{(n)})$ and the gradient $\nabla f_0^T(\hat{\theta}^{(n)})$ are calculated, and the FIM is inverted. When $T$ is large, the complexity in terms of multiplications and additions is dominated by the calculation of the FIM and the gradient, which are composed of double sums of products of nonlinear functions. The complexity in terms of multiplications is $O(25LT)$ per FS iteration. The computational complexity of inverting the FIM, which is a $5 \times 5$ matrix, is negligible. If the fusion center receives a new observation vector $\mathbf{y}_T$, the estimator has to perform several iterations of (11) again. The computational complexity of each iteration thus grows linearly with $T$. Moreover, all the observations must be stored, so the storage requirement also grows linearly with $T$. The complexity, when the estimation is repeated for every new $\mathbf{y}_T$, is $O(25T^2 L)$ per FS iteration. Obviously, the complexity of this estimator is too large for real-time processing. This method is practical when we can afford to wait for all the observations and then perform batch processing.
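A direct, unoptimized rendering of one scoring iteration, reusing the helpers (and imports) from the previous sketch; names are hypothetical, and $\beta_{i,n}$ is approximated here by central finite differences rather than closed-form derivatives:

```python
def Gamma_fn(x, t, theta, gamma):
    x0, t0, M, D, sigma = theta
    return (gamma - mean_concentration(x, t, x0, t0, D, M)) / sigma  # Eq. (3)

def grad_and_fim(theta, Y, xs, ts, gamma, eps=1e-6):
    """Eqs. (7) and (12) accumulated over all sensors and samples.
    Y[i, n] is the bit from sensor i at sample time ts[n]."""
    grad, fim = np.zeros(5), np.zeros((5, 5))
    for i, x in enumerate(xs):
        for n, t in enumerate(ts):
            G = Gamma_fn(x, t, theta, gamma)
            alpha = np.exp(-G**2 / 2) / np.sqrt(2 * np.pi * theta[4]**2)   # Eq. (8)
            # Eq. (9): beta = -sigma * grad of Gamma, by central differences.
            beta = np.zeros(5)
            for k in range(5):
                tp, tm = theta.copy(), theta.copy()
                tp[k] += eps
                tm[k] -= eps
                beta[k] = -theta[4] * (Gamma_fn(x, t, tp, gamma)
                                       - Gamma_fn(x, t, tm, gamma)) / (2 * eps)
            p = Q(G) if Y[i, n] else 1.0 - Q(G)                             # Eq. (2)
            grad += (2 * Y[i, n] - 1) * alpha * beta / p                    # Eq. (7)
            fim += alpha**2 * np.outer(beta, beta) / (Q(G) * (1 - Q(G)))    # Eq. (12)
    return grad, fim

def fs_step(theta, Y, xs, ts, gamma, mu=0.5):
    """One Fisher-scoring update, Eq. (11); step size mu is illustrative."""
    grad, fim = grad_and_fim(theta, Y, xs, ts, gamma)
    return theta + mu * np.linalg.solve(fim, grad)
```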
IV. Real-Time Approximated Maximum Likelihood Estimation

We first look at the limit of the estimation performance when estimating $\theta$ from the observations $\mathbf{y}_{T+1}^{T+r}$ and $\hat{\theta}(\mathbf{y}_0^T)$, where $r \ge 1$ is the number of time samples between updates. The case $r = 1$ corresponds to an update at every sample. The estimator uses only the previous estimate and the $rL$ new observations; the observations $\mathbf{y}_0^T$ are not directly available. Since $\mathbf{y}_{T+1}^{T+r}$ and $\mathbf{y}_0^T$ are independent, $\mathbf{y}_{T+1}^{T+r}$ and $\hat{\theta}(\mathbf{y}_0^T)$ are also independent. The log-likelihood for this estimation problem is

$$\tilde{f}\bigl(\mathbf{y}_{T+1}^{T+r}, \hat{\theta}(\mathbf{y}_0^T); \theta\bigr) = \log p\bigl(\mathbf{y}_{T+1}^{T+r}, \hat{\theta}(\mathbf{y}_0^T); \theta\bigr) = \log p\bigl(\mathbf{y}_{T+1}^{T+r}; \theta\bigr) + \log p\bigl(\hat{\theta}(\mathbf{y}_0^T); \theta\bigr) = f_{T+1}^{T+r}(\theta) + \hat{f}\bigl(\hat{\theta}(\mathbf{y}_0^T); \theta\bigr). \tag{13}$$

It is easy to show that the FIM for this estimation problem is

$$\tilde{I}_{T+1}^{T+r}(\theta) = E\bigl[\nabla f_{T+1}^{T+r}(\theta)\, \nabla f_{T+1}^{T+r}(\theta)^T\bigr] + E\bigl[\nabla \hat{f}\bigl(\hat{\theta}(\mathbf{y}_0^T); \theta\bigr)\, \nabla \hat{f}\bigl(\hat{\theta}(\mathbf{y}_0^T); \theta\bigr)^T\bigr]. \tag{14}$$

The first term on the right-hand side of (14) is simply $I_{T+1}^{T+r}(\theta)$. The second term is the FIM for the estimation of $\theta$ based on $\hat{\theta}(\mathbf{y}_0^T)$ and is denoted by $\hat{I}(\theta)$. Equation (14) can then be rewritten as¹

$$\tilde{I}_{T+1}^{T+r}(\theta) = I_{T+1}^{T+r}(\theta) + \hat{I}(\theta). \tag{15}$$

¹ Note the difference between $I_{T+1}^{T+r}(\theta)$ and $\tilde{I}_{T+1}^{T+r}(\theta)$. In the latter, both $\hat{\theta}(\mathbf{y}_0^T)$ and $\mathbf{y}_{T+1}^{T+r}$ are used for estimation.

Note that since data manipulation cannot increase information, $\hat{I}(\theta) \preceq I_0^T(\theta)$. This implies that

$$\tilde{I}_{T+1}^{T+r}(\theta) \preceq I_0^{T+r}(\theta). \tag{16}$$

The bias of the estimator is

$$b_0^T(\theta) = E\bigl[\hat{\theta}(\mathbf{y}_0^T)\bigr] - \theta. \tag{17}$$

The covariance matrix of the estimator $\hat{\theta}(\mathbf{y}_0^T)$ is

$$V_0^T(\theta) = E\Bigl[\bigl(\hat{\theta}(\mathbf{y}_0^T) - \theta - b_0^T(\theta)\bigr)\bigl(\hat{\theta}(\mathbf{y}_0^T) - \theta - b_0^T(\theta)\bigr)^T\Bigr]. \tag{18}$$

From the CRB for biased estimators [3], [4], we get that the CRB matrix for the estimation of $\theta$ from $\hat{\theta}(\mathbf{y}_0^T)$ is given by

$$\hat{C}(\theta) = D_0^T(\theta)\, \hat{I}(\theta)^{-1}\, D_0^T(\theta)^T, \tag{19}$$

where $D_0^T(\theta) = \mathbf{1} + \nabla b_0^T(\theta)$ and $\mathbf{1}$ is the identity matrix. The covariance matrix of any estimator with bias $b_0^T(\theta)$ is greater than or equal to $\hat{C}(\theta)$. In particular, for the trivial estimator $\hat{\theta}(\hat{\theta}(\mathbf{y}_0^T)) = \hat{\theta}(\mathbf{y}_0^T)$, with covariance and bias given by (18) and (17), we get

$$V_0^T(\theta) \succeq D_0^T(\theta)\, \hat{I}(\theta)^{-1}\, D_0^T(\theta)^T. \tag{20}$$

We assume that $V_0^T(\theta)$ is invertible for all $\theta \in \Lambda$. Under this assumption, we get

$$J_0^T(\theta) \triangleq D_0^T(\theta)^T\, V_0^T(\theta)^{-1}\, D_0^T(\theta) \preceq \hat{I}(\theta) \preceq I_0^T(\theta). \tag{21}$$

From (21) and (15), we get

$$\tilde{I}_{T+1}^{T+r}(\theta) \succeq I_{T+1}^{T+r}(\theta) + J_0^T(\theta). \tag{22}$$

From the CRB for biased estimators, we also get that

$$C_0^T(\theta) = D_0^T(\theta)\, I_0^T(\theta)^{-1}\, D_0^T(\theta)^T, \tag{23}$$

where $C_0^T(\theta)$ is the CRB matrix for the estimation of $\theta$ from $\mathbf{y}_0^T$. If the estimator $\hat{\theta}(\mathbf{y}_0^T)$ achieves this CRB, then

$$V_0^T(\theta) = C_0^T(\theta) = D_0^T(\theta)\, I_0^T(\theta)^{-1}\, D_0^T(\theta)^T. \tag{24}$$

In this case,

$$\hat{I}(\theta) = I_0^T(\theta) \tag{25}$$

and

$$\tilde{I}_{T+1}^{T+r}(\theta) = I_{T+1}^{T+r}(\theta) + I_0^T(\theta) \succeq I_0^{T+r}(\theta). \tag{26}$$

From (16), we get that

$$\tilde{I}_{T+1}^{T+r}(\theta) = I_0^{T+r}(\theta). \tag{27}$$

Under this situation, the real-time approach that uses only the new observations and the last estimate has the same CRB as the approach that uses all the observations. If the estimator $\hat{\theta}(\mathbf{y}_0^T)$ does not achieve the CRB in (23), equation (22) indicates that $\hat{\theta}(\mathbf{y}_{T+1}^{T+r}, \hat{\theta}(\mathbf{y}_0^T))$ may still achieve the CRB. This is true, for example, when $\hat{\theta}(\mathbf{y}_0^T)$ is a sufficient statistic for the estimation of $\theta$ from the observations $\mathbf{y}_0^T$.

In order to calculate the estimator, we need to calculate (13). The term $f_{T+1}^{T+r}(\theta)$ is calculated using (5), so we only need to calculate the term $\hat{f}(\hat{\theta}(\mathbf{y}_0^T); \theta)$. Unfortunately, calculating this term exactly is practically impossible: it requires performing maximum likelihood estimation for all possible observation sequences instead of the unique observed sequence. Therefore, exact maximum likelihood estimation cannot be done in real time more efficiently than batch processing. To solve this problem, we approximate $p(\hat{\theta}(\mathbf{y}_0^T); \theta)$ as a Gaussian pdf with mean $\theta + b_0^T(\theta)$ and covariance $V_0^T(\theta)$. When more and more independent observations are used, this assumption becomes justified. This means that $\mathbf{y}_0^T$ should contain many observations taken after the release of the gas. In real-time processing, this assumption is obviously not valid in the first few iterations, where $T$ is small. A possible solution is to perform one batch processing step in the beginning, with $T$ large enough that there are sufficiently many observations after the time of release, and then continue updating with the real-time estimator. It will be argued later that this first block processing should be done anyway, because the parameters cannot be estimated reliably for a certain amount of time after the release. We therefore approximate $\hat{f}(\hat{\theta}(\mathbf{y}_0^T); \theta)$ by²

$$\hat{f}\bigl(\hat{\theta}(\mathbf{y}_0^T); \theta\bigr) \approx -\frac{1}{2}\bigl(\hat{\theta}(\mathbf{y}_0^T) - \theta - b_0^T(\theta)\bigr)^T V_0^T(\theta)^{-1} \bigl(\hat{\theta}(\mathbf{y}_0^T) - \theta - b_0^T(\theta)\bigr) - \frac{1}{2} \log \det V_0^T(\theta). \tag{28}$$

It is straightforward to show that, with this approximation, the FIM for the estimation of $\theta$ from $\hat{\theta}(\mathbf{y}_0^T)$ is

$$\hat{I}(\theta) = D_0^T(\theta)^T\, V_0^T(\theta)^{-1}\, D_0^T(\theta). \tag{29}$$

Therefore, under the Gaussian approximation, the inequality sign in (22) can be replaced by an equality sign:

$$\tilde{I}_{T+1}^{T+r}(\theta) = I_{T+1}^{T+r}(\theta) + D_0^T(\theta)^T\, V_0^T(\theta)^{-1}\, D_0^T(\theta). \tag{30}$$

Using (21), (30) can be rewritten as

$$\tilde{I}_{T+1}^{T+r}(\theta) = I_{T+1}^{T+r}(\theta) + J_0^T(\theta). \tag{31}$$

With the Gaussian approximation, $J_0^T(\theta)$ is the maximum available information from the estimate $\hat{\theta}(\mathbf{y}_0^T)$. If $J_0^T(\theta)$ is less than $I_0^T(\theta)$, the CRB for estimating $\theta$ from the observations $\mathbf{y}_0^{T+r}$ can never be achieved.

² A constant term that does not depend on $\theta$ is omitted.
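Under the simplifying assumptions adopted in the next section (zero bias and covariance equal to the inverse FIM), the approximated term (28) reduces to a Gaussian log-density that is cheap to evaluate. A minimal sketch with hypothetical names:

```python
def f_hat(theta, theta_prev, I_prev):
    """Eq. (28) with b = 0 and V = I_prev^{-1}: the log-density of the
    previous estimate, treated as a Gaussian measurement of theta."""
    diff = theta_prev - theta
    # log det V = -log det I_prev, so the sign flips on the logdet term.
    sign, logdet_I = np.linalg.slogdet(I_prev)
    return -0.5 * diff @ I_prev @ diff + 0.5 * logdet_I
```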
The overall performance of the real-time estimator depends only on the bias and covariance of the previous estimate and the FIM of the new observations. If the estimation in any step is bad, in the sense that the covariance is very large in comparison to the CRB, the whole estimation thereafter will be bad in that sense. Therefore, the estimator must try to achieve the best possible performance in each step.

We now make some assumptions regarding $b_0^T(\theta)$ and $V_0^T(\theta)$. We assume that $b_0^T(\theta) = 0$ and $V_0^T(\theta) = I_0^T(\theta)^{-1}$, i.e., that the covariance of the estimator equals the CRB for unbiased estimators. The zero-bias assumption is required because the bias term cannot be calculated. The assumption that the covariance achieves the unbiased CRB for every $T$ is likewise required because the covariance cannot be calculated during estimation, since $\theta$ is obviously unknown. Based on these assumptions, we now construct the approximated ML estimator. The approximated log-likelihood function is given by

$$\tilde{f}\bigl(\mathbf{y}_{T+1}^{T+r}, \hat{\theta}(\mathbf{y}_0^T); \theta\bigr) = f_{T+1}^{T+r}(\theta) + \hat{f}\bigl(\hat{\theta}(\mathbf{y}_0^T); \theta\bigr), \tag{32}$$

where $f_{T+1}^{T+r}(\theta)$ is given by (5) with $T_1 = T + 1$ and $T_2 = T + r$, and $\hat{f}(\hat{\theta}(\mathbf{y}_0^T); \theta)$ is given by (28). The approximated gradient is

$$\nabla \tilde{f}\bigl(\mathbf{y}_{T+1}^{T+r}, \hat{\theta}(\mathbf{y}_0^T); \theta\bigr) = \nabla f_{T+1}^{T+r}(\theta) + \nabla \hat{f}\bigl(\hat{\theta}(\mathbf{y}_0^T); \theta\bigr), \tag{33}$$

where $\nabla f_{T+1}^{T+r}(\theta)$ is given by (7) with $T_1 = T + 1$ and $T_2 = T + r$. It is straightforward to show that

$$\nabla \hat{f}\bigl(\hat{\theta}(\mathbf{y}_0^T); \theta\bigr) = I_0^T(\theta)\bigl(\hat{\theta}(\mathbf{y}_0^T) - \theta\bigr). \tag{34}$$

We set (33) equal to zero in order to find the stationary points:

$$\nabla f_{T+1}^{T+r}(\theta) + I_0^T(\theta)\bigl(\hat{\theta}(\mathbf{y}_0^T) - \theta\bigr) = 0. \tag{35}$$

Equation (35) does not have a closed-form solution and can only be solved numerically. The exact FIM required by (35) is not available since $\theta$ is not known; the quantity $I_0^T(\hat{\theta}(\mathbf{y}_0^T))$ is used instead. We then get the following set of update equations:

$$G_T\bigl(\hat{\theta}_T, \hat{\theta}^{(n)}\bigr) = \nabla f_{T+1}^{T+r}\bigl(\hat{\theta}^{(n)}\bigr) + I_0^T(\hat{\theta}_T)\bigl(\hat{\theta}_T - \hat{\theta}^{(n)}\bigr),$$
$$\hat{\theta}^{(n+1)} = \hat{\theta}^{(n)} + \Bigl[I_{T+1}^{T+r}\bigl(\hat{\theta}^{(n)}\bigr) + I_0^T(\hat{\theta}_T)\Bigr]^{-1} G_T\bigl(\hat{\theta}_T, \hat{\theta}^{(n)}\bigr),$$
$$\hat{\theta}^{(0)} = \hat{\theta}_T, \tag{36}$$

where $\hat{\theta}^{(n+1)}$ is the $(n+1)$th iteration of the FS algorithm for calculating the estimator of $\theta$ from the observations $\mathbf{y}_{T+1}^{T+r}$ and the previous estimate $\hat{\theta}_T = \hat{\theta}(\mathbf{y}_0^T)$. The previous estimate $\hat{\theta}_T$ is the value to which the previous set of FS iterations converged. In practice, a few iterations are enough, as long as the last iteration gets close enough to the desired value (i.e., the global maximum).

The proposed estimator has a complexity in terms of multiplications of $O(25rL)$ per FS iteration. This complexity does not depend on the time index. A constant storage of size $rL$ is needed to save the new observations. The overall complexity of this method up to time $T$ is $O(25mLT)$, where $m$ is the number of FS iterations done on each update. If $r$ is small, the change in the estimate at every update will not be large, since the information contained in the new observations is relatively small. Therefore, $m$ can be chosen smaller than in the batch algorithm.
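A compact rendering of one RAML update, reusing `grad_and_fim` and the parameter ordering from the earlier sketches; the names are hypothetical, and the accumulated FIM stands in for $I_0^T(\hat{\theta}_T)$:

```python
def raml_update(theta_prev, I_prev, Y_new, xs, ts_new, gamma, iters=30):
    """Sketch of the real-time approximated ML update, Eq. (36): Fisher
    scoring on the new block of rL bits plus the Gaussian prior contributed
    by the previous estimate."""
    theta = theta_prev.copy()
    for _ in range(iters):
        g_new, I_new = grad_and_fim(theta, Y_new, xs, ts_new, gamma)  # Eqs. (7), (12)
        G = g_new + I_prev @ (theta_prev - theta)                      # Eq. (35)
        theta = theta + np.linalg.solve(I_new + I_prev, G)             # Eq. (36)
    # Information carried to the next update: previous plus new block's FIM.
    _, I_new = grad_and_fim(theta, Y_new, xs, ts_new, gamma)
    return theta, I_prev + I_new
```

Each update touches only the $rL$ new bits, which is where the $O(25rL)$ per-iteration cost and the constant storage requirement come from.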
V. Numerical Results

A. ML Estimator

We start with the ML estimator for the case where the distance between the sensors is $d = 10$ cm, the sampling time is $T_s = 10$ s, the total number of sensors is $L = 21$, and the threshold is $\gamma = 10^{-2}$ g/cm. The true values of the parameters include $x_0 = 30$ cm, $D = 10$ cm$^2$/s, and $M = 1$ g. The initial values of the FS iterations include $\hat{x}_0^{(0)} = 2$ m, $\hat{t}_0^{(0)} = 92$ s, $\hat{D}^{(0)} = 15$ cm$^2$/s, and $\hat{M}^{(0)} = 5$ g. The estimator estimates $\theta$ from the observations $\mathbf{y}_0^T$. Although the initial values are not in the neighborhood of the true values, the estimator converges after 570 iterations to estimates close to the true parameter values.

In order to quantify the performance of the estimator for a specific set of parameters, we define the normalized errors $e_{x_0} = (\hat{x}_0^{(n)} - x_0)/d$, $e_{t_0} = (\hat{t}_0^{(n)} - t_0)/T_s$, $e_D = (\hat{D}^{(n)} - D)/D$, $e_M = (\hat{M}^{(n)} - M)/M$, and $e_\sigma = (\hat{\sigma}^{(n)} - \sigma)/\sigma$. We denote the vector of normalized errors by $\mathbf{e}$. The normalized error measures the resolution of the estimate. For $x_0$ and $t_0$, the resolution is relative to the spatial and temporal sampling intervals. For $D$, $M$, and $\sigma$, which are always positive, the errors are simply normalized by the respective true values.

The overall performance of the estimator for a specific $\theta$ is given by $E[e_{x_0}^2]$, $E[e_{t_0}^2]$, $E[e_D^2]$, $E[e_M^2]$, and $E[e_\sigma^2]$. We are also interested in the quantities $E[e_{x_0}]$, $E[e_{t_0}]$, $E[e_D]$, $E[e_M]$, and $E[e_\sigma]$. We approximate these quantities by averaging over different noise realizations. We would like to compare them to the CRB for biased estimators, but the term $D_0^T(\theta)$ does not have a closed-form expression, and approximating it numerically is difficult. We therefore simply compare the performance to the CRB for unbiased estimators. Asymptotically, as the spatial and temporal sampling become denser and the observation interval increases, it is reasonable to assume that the bias becomes smaller and smaller. If the CRB for unbiased estimators is attained, then at least we can be assured that the performance is very good, although it may not be the best.
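The Monte Carlo evaluation described above amounts to the following sketch (hypothetical names; `run_estimator` stands for either the batch FS procedure or the RAML procedure):

```python
def normalized_errors(theta_hat, theta_true, d, Ts):
    """Normalized error vector e: position and release time are scaled by
    the sensor spacing d and the sampling interval Ts; the positive
    parameters M, D, sigma by their respective true values."""
    e = (theta_hat - theta_true) / theta_true
    e[0] = (theta_hat[0] - theta_true[0]) / d
    e[1] = (theta_hat[1] - theta_true[1]) / Ts
    return e

def monte_carlo(run_estimator, theta_true, d, Ts, trials=100):
    """Approximate E[e] (bias) and E[e^2] (MSE) over noise realizations."""
    E = np.array([normalized_errors(run_estimator(), theta_true, d, Ts)
                  for _ in range(trials)])
    return E.mean(axis=0), (E ** 2).mean(axis=0)
```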

The results are summarized in Table I.

TABLE I. Numerical results for the ML estimator using the FS method: true value, $E[e^2]$ (dB), $E[e]$ (dB), and normalized CRB (dB) for each of $x_0$, $t_0$, $D$, $M$, $\sigma$.

The mean-squared-error performance of the estimation of all the parameters outperforms the CRB for unbiased estimators. The CRBs shown in the table are in fact normalized CRBs: they are the lower bounds on the normalized mean squared error of any unbiased estimator. The estimator achieves this by trading variance for bias, and it is clear from the results that the estimator is biased. We emphasize that although the comparison is not totally fair, there is no argument that the performance of the estimator in terms of localizing the source is excellent. The source can be localized, in this particular case, with a resolution of less than 1 cm. Thus the estimator achieves super-resolution with respect to the spatial separation between the sensors. The estimation of $D$, $M$, and $\sigma$ is also very good. Repeating the same process for other parameter values shows similar results.

B. RAML Estimator

We proceed to study the performance of the RAML estimator for the same system, the same parameter values, and the same initial values. From the calculation of the CRB, it is apparent that no reliable estimation can be done from the observations $\mathbf{y}_0^T$ when $T$ is below a certain value: in that regime, the mean squared error (MSE) in the estimation of all the parameters is large and the algorithm becomes unstable. Therefore, we start with initial block processing on the observations $\mathbf{y}_0^T$. This initial block processing should run enough iterations to converge to $\hat{\theta}_{ML}(\mathbf{y}_0^T)$; if it does not, we know that we will lose information and the performance will be degraded on average. We use 2000 FS iterations for this initial block processing. The complexity of the initial block processing is constant and does not increase with the number of observations. We then continue with the real-time update using $r = 10$ and 30 iterations of (36) for each block. By the time $T = 20000$, the estimator has converged to its final estimates.

The overall performance of the estimator for a specific $\theta$ is given by the mean-squared-error terms and the bias terms, which we again approximate by averaging over different noise realizations. The results after the 20000th iteration are summarized in Table II. We obtain similar results for different values of $\theta$.

TABLE II. Numerical results for the real-time approximated ML estimator: true value, $E[e^2]$ (dB), and $E[e]$ (dB) for each of $x_0$, $t_0$, $D$, $M$, $\sigma$, for both the RAML and ML estimators.

Comparing Tables I and II, we see that the MSE of the RAML estimator is close to the MSE of the ML estimator. There is about a 1 dB loss in the estimation of $x_0$; for the other parameters, the degradation is not more than 2.1 dB. The bias terms, however, increase significantly (except in the case of $t_0$). This can be explained by the zero-mean approximation taken in the development of the RAML algorithm. Despite this drawback, the proposed real-time estimator has excellent performance in the MSE sense: it outperforms the unbiased CRB for all the parameters except $x_0$ and approaches the performance of the ML estimator. The large reduction in complexity may well be worth the loss in performance. To demonstrate the evolution of the RAML algorithm, Fig. 1 shows the plot of $E[e_{x_0}^2]$ as the number of observations $T$ increases.

Fig. 1. The normalized error $e_{x_0}$ as the number of observations $T$ increases.

VI. Conclusions

We have derived two estimators for the diffusive source localization problem: the ML and RAML algorithms, both obtained numerically by the FS method. Both estimators exhibit excellent performance in the estimation of the source location, as well as the other parameters, using only binary observations. Both estimators are biased and outperform the CRB for unbiased estimators by trading variance for bias. The RAML estimator achieves performance close to that of the ML estimator as the number of samples $T$ increases, with complexity $O(T)$ instead of $O(T^2)$.
This reduction in complexity makes the RAML algorithm practical for real-time signal processing.

References

[1] R. Ghez, A Primer of Diffusion Problems. New York: Wiley, 1988.
[2] S. Vijayakumaran, Y. Levinbook, and T. F. Wong, "On diffusive source localization using dumb sensors," in Proceedings of the IEEE International Symposium on Information Theory (ISIT 2004), Chicago, IL, June 2004.
[3] H. V. Poor, An Introduction to Signal Detection and Estimation. New York: Springer-Verlag, 1994.
[4] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Upper Saddle River, NJ: Prentice Hall, 1993.
