Extended Kalman Filter Modifications Based on an Optimization View Point
18th International Conference on Information Fusion, Washington, DC, July 6-9, 2015

Extended Kalman Filter Modifications Based on an Optimization View Point

Martin A. Skoglund, Gustaf Hendeby, Daniel Axehill
Division of Automatic Control, Linköping University, Linköping, Sweden, {ms,hendeby,daniel}@isy.liu.se
Dept. of Sensor & EW Systems, Swedish Defence Research Agency (FOI), Linköping, Sweden

Abstract—The extended Kalman filter (EKF) has been an important tool for state estimation of nonlinear systems since its introduction. However, the EKF does not possess the same optimality properties as the Kalman filter, and may perform poorly. By viewing the EKF as an optimization problem it is possible to, in many cases, improve its performance and robustness. This paper derives three variations of the EKF by applying different optimisation algorithms to the EKF cost function and relates these to the iterated EKF. The derived filters are evaluated in two simulation studies which exemplify the presented filters.

I. INTRODUCTION

The Kalman filter (KF) [1] is a minimum-variance estimator if the involved models are linear and the noise is additive Gaussian. A common situation, however, is that the dynamic model and/or the measurement model are nonlinear. The extended Kalman filter (EKF) [2] approximates these nonlinearities using a first-order Taylor expansion around the current estimate, after which the time and measurement update formulas of the KF can be used. Higher-order terms can be included in the Taylor expansions if the degree of nonlinearity is high and thus not well approximated using only first-order information. Many suggestions have been made to improve the EKF. In the modified EKF [3] higher-order moments are included, [4, 5] use sigma-point approximations, which is related to including second-order moments [6], while [7] proposes a robust EKF using H-infinity theory.
Another viewpoint is that the EKF is a special case of Gauss-Newton (GN) optimization without iterations, see, e.g., [8, 9] for longer discussions. Thus, iterations may improve the performance with just a little added effort. The idea explored here is that it is not only linearisation errors that cause filter inconsistencies, but rather the non-iterative way of including new information, as it may not be sufficient for staying close to the true state. There is of course nothing that prevents the inclusion of higher-order terms when relinearising in the iterations. Notice that the Fisher scoring algorithm is also known as Gauss-Newton when applied to nonlinear least squares, see, e.g., [10]. An iterative version of the EKF is the iterated extended Kalman filter (IEKF) [11], which re-linearises the measurement Jacobian at the updated state and applies the measurement update repeatedly until some stopping criterion is met. It should be noted that the iterations might not reduce the state error or reduce the associated cost function. It is actually not even guaranteed that a single update in the EKF improves the estimate, since the full step may be too long. In [12] a bistatic ranging example illustrates how the IEKF converges to the true state as measurements become more accurate, whereas the EKF is biased even though the state error covariance converges. Yet the EKF often works satisfactorily, which can be understood in the light of it being a GN method, in which the first correction often gives a large improvement compared to the following, often smaller, corrections. Furthermore, the cost is often decreased for the full step. Hence, both the IEKF and the EKF could easily be improved using step control, e.g., backtracking line search. That is, with step control, or at least checking for cost decrease, unnecessary divergent behaviour of the EKF can be avoided. This paper shows that line search can be used in the EKF and IEKF to improve the overall performance.
Furthermore, a quasi-Newton approximation for the EKF update with a Hessian correction term is derived. This matrix can be successively built during the iterations using a symmetric rank-2 update. A slight modification of the Levenberg-Marquardt IEKF (LM-IEKF) [12] is made to include a diagonal damping matrix, which could further speed up convergence. The paper is organized as follows. In Sec. II the cost function for the measurement update of the EKF is derived. Sec. III describes the EKF as an optimization problem via Gauss-Newton and introduces line search as a way to improve the convergence rate. The IEKF is further modified to include an explicit step size. A quasi-Newton EKF is then introduced and a slight modification of the LM-IEKF is then derived. Sec. IV contains two simulations exemplifying the derived filters, and the paper ends with conclusions and future directions in Sec. V.

II. THE MEASUREMENT UPDATE AND ITS COST FUNCTION

Consider systems having linear dynamics and nonlinear measurement models according to

x_{t+1} = F x_t + w_t, (1a)
y_t = h(x_t) + e_t, (1b)

where x_t is the state, y_t the measurement, w_t ~ N(0, Q_t) and e_t ~ N(0, R_t), all at time t. In a Bayesian setting, the
nonlinear Gaussian filtering problem amounts to propagating the following densities:

p(x_{t-1} | Y_{t-1}) ≈ N(x_{t-1}; x̂_{t-1|t-1}, P_{t-1|t-1}), (2a)
p(x_t | Y_{t-1}) ≈ N(x_t; x̂_{t|t-1}, P_{t|t-1}), (2b)
p(x_t | Y_t) ≈ N(x_t; x̂_{t|t}, P_{t|t}), (2c)

where p(x_{t-1} | Y_{t-1}) is the initial prior density, p(x_t | Y_{t-1}) is the prediction density, p(x_t | Y_t) is the posterior density, and Y_t = {y_i}_{i=1}^t are the measurements. The dynamic model (1a) is linear, which means that the prediction density is exactly propagated by the KF time update equations as given in Algorithm 1.

Algorithm 1 (Kalman Filter). Available measurements are Y = {y_1, ..., y_N}. Require an initial state, x̂_{0|0}, and an initial state covariance, P_{0|0}; Q_t and R_t are the covariance matrices of w_t and e_t, respectively.
Time Update:
x̂_{t|t-1} = F x̂_{t-1|t-1}, (3a)
P_{t|t-1} = F P_{t-1|t-1} F^T + Q_t. (3b)
Measurement Update:
K = P_{t|t-1} H^T (H P_{t|t-1} H^T + R_t)^{-1}, (4a)
x̂_{t|t} = x̂_{t|t-1} + K (y_t - H x̂_{t|t-1}), (4b)
P_{t|t} = (I - K H) P_{t|t-1}. (4c)

The measurement update is however nonlinear and is given as the maximum a posteriori (MAP) estimate according to

x̂_{t|t} = arg max_{x_t} p(x_t | Y_t). (5)

The posterior is proportional to the product of the likelihood and the prior,

p(x_t | Y_t) ∝ p(y_t | x_t) p(x_t | Y_{t-1}) ∝ exp(-(1/2)[(y_t - h(x_t))^T R_t^{-1} (y_t - h(x_t)) + (x̂_{t|t-1} - x_t)^T P_{t|t-1}^{-1} (x̂_{t|t-1} - x_t)]),

where terms not depending on x_t have been dropped. Since the exponential function is monotone and increasing, we can minimise the negative log instead, which gives

x̂_{t|t} = arg min_x (1/2)[(y_t - h(x))^T R_t^{-1} (y_t - h(x)) + (x̂_{t|t-1} - x)^T P_{t|t-1}^{-1} (x̂_{t|t-1} - x)]
        = arg min_x (1/2) r(x)^T r(x) = arg min_x V(x), (6)

where

r(x) = [ R_t^{-1/2} (y - h(x)) ; P_{t|t-1}^{-1/2} (x̂_{t|t-1} - x) ], (7)

and we have dropped the time dependence on the variable x. The first term in the objective function corresponds to the unknown sensor noise and the second term corresponds to the importance of the prior to the estimate. Furthermore, the optimization problem should return a covariance approximation P̂_{t|t}. We simply base it on the assumption that x̂_{t|t} is optimal, resulting in

P̂_{t|t} = (I - K_i H_i) P̂_{t|t-1}, (8)

since it should reflect the state uncertainty when (5) has converged.
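The cost function (6) is straightforward to evaluate numerically. As a minimal sketch (our own illustration, not code from the paper; the function and variable names are chosen here):

```python
import numpy as np

def measurement_cost(x, y, h, x_pred, R, P_pred):
    """V(x) = 0.5*(y - h(x))' R^{-1} (y - h(x)) + 0.5*(x_pred - x)' P^{-1} (x_pred - x), cf. (6)."""
    ry = y - h(x)        # measurement residual
    rx = x_pred - x      # deviation from the predicted (prior) state
    return 0.5 * (ry @ np.linalg.solve(R, ry) + rx @ np.linalg.solve(P_pred, rx))
```

For instance, with the scalar example used later in Sec. IV (h(x) = x^2, x̂ = 0.1, y = 1, R = 0.1, P = 1), this evaluates to V ≈ 4.90 at x = x̂, matching the value quoted there.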
A general formulation for the MAP estimation problem is then

{x̂_{t|t}, P̂_{t|t}} = arg min_x V(x), (9)

where P̂_{t|t} is computed using (8), while K_i and H_i depend on the optimization method used.

III. THE EKF VIEWED AS AN OPTIMIZATION PROBLEM

There is no general method for solving (6), and the nonlinear nature implies that iterative methods are needed to obtain an approximate estimate [14]. The Newton step p is the solution to the equation

V''(x) p = -V'(x). (10)

Starting from an initial guess x^0, Newton's method iterates the following equations,

x^{i+1} = x^i - [V''(x^i)]^{-1} V'(x^i), (11)

where V''(x^i) and V'(x^i) are the Hessian and the gradient of the cost function, respectively. Note that if V is quadratic then (11) gives the minimum in a single step. Gauss-Newton can be applied when the minimisation problem is on nonlinear least-squares form as in (6). Then the gradient has the simple form

V'(x) = J(x)^T r(x), where J(x) = ∂r(s)/∂s |_{s=x}, (12)

and the Hessian is approximated as

V''(x) ≈ J(x)^T J(x), (13)

avoiding the need for second-order derivatives. The GN step then becomes

x^{i+1} = x^i - (J_i^T J_i)^{-1} J_i^T r_i, (14)

where J_i = J(x^i) and r_i = r(x^i) are introduced to ease notation. Furthermore, the Jacobian evaluates to

J_i = -[ R_t^{-1/2} H_i ; P_{t|t-1}^{-1/2} ], where H_i = ∂h(s)/∂s |_{s=x^i} (15)

is the measurement Jacobian.
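One GN step (14) on the stacked residual and Jacobian of (7) and (15) can be sketched as follows (our own helper names; any square-root factor S with S^T S = R^{-1} serves as R^{-1/2}):

```python
import numpy as np

def gn_step(x, y, h, H_jac, x_pred, R, P_pred):
    """One Gauss-Newton step x - (J'J)^{-1} J'r for the cost (6)."""
    Ri = np.linalg.cholesky(np.linalg.inv(R)).T       # a valid R^{-1/2}
    Pi = np.linalg.cholesky(np.linalg.inv(P_pred)).T  # a valid P^{-1/2}
    r = np.concatenate([Ri @ (y - h(x)), Pi @ (x_pred - x)])  # stacked residual (7)
    J = -np.vstack([Ri @ H_jac(x), Pi])                       # stacked Jacobian (15)
    return x - np.linalg.solve(J.T @ J, J.T @ r)
```

Starting from x = x̂_{t|t-1}, one full step of this form reproduces the EKF measurement update, which is the equivalence the text builds on.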
A. Line Search

The EKF measurement update should ideally give a cost decrease,

V(x^{i+1}) < V(x^i), (16)

at each iteration. This will not always be the case, and this is why a full step in the EKF, and consequently in the IEKF, may cause problems. A remedy for this behaviour is to introduce a line search such that the estimate actually results in a cost decrease. Without an explicit step-size rule the EKF should be expected to have sub-linear convergence [13] even though it should be linear [9]. A step-size condition, 0 ≤ α_t^i ≤ c/t with c > 0, was proposed in [8] for smoothing passes over the data batch, where α_t^i is initially small but should approach unity. There are many strategies for defining a sufficient decrease for the iteration procedure. The general iteration for Gauss-Newton with step-size parameter α is

x^{i+1} = x^i - α^i (J_i^T J_i)^{-1} J_i^T r_i, (17)

where each α^i is chosen so as to satisfy some criterion, e.g., the cost decrease condition (16). For the EKF update, the GN step with step size (17) evaluates to

x̂_{t|t} = x̂_{t|t-1} + α K (y_t - h(x̂_{t|t-1})) (18)
        = x̂_{t|t-1} + α p. (19)

The cost decrease condition is then

V(x̂_{t|t-1} + α p) < V(x̂_{t|t-1}), (20)

and it is only needed to find a suitable α over 0 < α ≤ 1, starting with α = 1. Furthermore, an exact line search,

α = arg min_{0 < s ≤ 1} V(x̂_{t|t-1} + s p), (21)

is rather cheap since the search direction p is fixed. A simple line search strategy is to multiply the current step length by some factor less than unity until sufficient decrease is obtained. More advanced methods can of course also be used, such as the Wolfe conditions, see, e.g., [14],

V(x^i + α^i p^i) ≤ V(x^i) + c_1 α^i V'(x^i)^T p^i, (22a)
|V'(x^i + α^i p^i)^T p^i| ≤ c_2 |V'(x^i)^T p^i|, (22b)

with 0 < c_1 < c_2 < 1, which also require that the slope has been reduced sufficiently by α^i. The backtracking line search, see, e.g., [15], is another effective inexact line search method which simply reduces the step by a factor α := αβ until

V(x + α p) < V(x) + α γ V'(x)^T p, (23)

is satisfied, starting with α = 1, with 0 < γ < 0.5 and 0 < β < 1.
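The backtracking rule (23) is only a few lines of code. This is a minimal sketch of ours (names and default values are illustrative, not taken from the paper), written for a scalar state for brevity:

```python
def backtracking(V, grad_V, x, p, gamma=0.3, beta=0.5, alpha_min=1e-8):
    """Shrink alpha until V(x + alpha*p) < V(x) + alpha*gamma*grad_V(x)'p, cf. (23)."""
    alpha, v0, slope = 1.0, V(x), grad_V(x) * p  # use a dot product for vector states
    while V(x + alpha * p) >= v0 + alpha * gamma * slope:
        alpha *= beta
        if alpha < alpha_min:  # give up: no sufficient decrease along this direction
            break
    return alpha
```

On the scalar example of Sec. IV (h(x) = x^2, x̂ = 0.1, y = 1, R = 0.1, P = 1), the full EKF step is rejected and alpha = 0.5 is returned with these defaults, i.e., exactly the halved step discussed there.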
Algorithm 2 (Iterated Extended Kalman Measurement Update). Require the predicted state, x̂_{t|t-1}, and the predicted state covariance, P̂_{t|t-1}; initialize x^0 = x̂_{t|t-1}.
1. Measurement update iterations:
H_i = ∂h(s)/∂s |_{s=x^i}, (24a)
K_i = P̂_{t|t-1} H_i^T (H_i P̂_{t|t-1} H_i^T + R_t)^{-1}, (24b)
x^{i+1} = x̂_{t|t-1} + K_i (y_t - h(x^i) - H_i (x̂_{t|t-1} - x^i)). (24c)
2. Update the state and the covariance:
x̂_{t|t} = x^{i+1}, (25a)
P̂_{t|t} = (I - K_i H_i) P̂_{t|t-1}. (25b)

B. Iterated EKF

The IEKF was derived in [9, 11, 16] and is summarised in Algorithm 2. The iterated update for the IEKF from (24c) is

x^{i+1} = x̂ + K_i (y_t - h(x^i) - H_i (x̂ - x^i))
        = x^i + (x̂ - x^i) + K_i (y_t - h(x^i) - H_i (x̂ - x^i)), (26)

where x̂ = x̂_{t|t-1} and the iterations are initialized with x^0 = x̂. With (17) the step parameter α can be introduced. The modified measurement update of the IEKF is then on the form

x^{i+1} = x^i + α^i [(x̂ - x^i) + K_i (y_t - h(x^i) - H_i (x̂ - x^i))] = x^i + α^i p^i, (27)

where the step length 0 < α^i ≤ 1 and the search direction p^i are chosen such that (16), or a more sophisticated convergence criterion, is satisfied in each step. Note that for α^i = 1, the first iteration is exactly the same as for the EKF. This step size should always be attempted first since, if it is allowed, it will result in faster convergence. We will refer to this method as IEKF-L. As a special case, the standard EKF is obtained in the first iteration of the measurement update (24). Note that the state covariance is updated using the last Kalman gain and measurement Jacobian when the iterations have finished (25), as discussed for (8).

C. Quasi-Newton EKF

Quasi-Newton type methods try to avoid exact Hessian and/or Jacobian computation for efficiency. The Hessian approximation described in (13) may be bad far from the optimum, resulting in, e.g., slow convergence. A quite successful approach to avoid this is to introduce a correction,

V''(x^i) ≈ J_i^T J_i + T_i, (28)

where T_i should mimic the dropped second-order terms. The step p with Hessian correction is the solution to the problem

(J^T J + T) p = -J^T r, (29)
which is iteratively solved by

x^{i+1} = x^i - (J_i^T J_i + T_i)^{-1} J_i^T r_i. (30)

The state update for the quasi-Newton EKF (QN-EKF) becomes

x^{i+1} = x̂ + K_i^q (y_t - h_i - H_i x̃^i) - S_i^q T_i x̃^i, (31a)
S_i^q = (H_i^T R^{-1} H_i + P^{-1} + T_i)^{-1}, (31b)
K_i^q = S_i^q H_i^T R^{-1}, (31c)

where P = P_{t|t-1}, h_i = h(x^i) and x̃^i = x̂ - x^i have been introduced. The derivation can be found in Appendix A. The matrix T_i can be built during the iterations [17] using

T_{i+1} = T_i + (w_i v_i^T + v_i w_i^T) / (v_i^T s_i) - ((w_i^T s_i) / (v_i^T s_i)^2) v_i v_i^T, (32)

where

v_i = J_{i+1}^T r_{i+1} - J_i^T r_i = -H_{i+1}^T R^{-1} (y - h_{i+1}) + H_i^T R^{-1} (y - h_i) + P^{-1} s_i, (33a)
z_i = (J_{i+1} - J_i)^T r_{i+1} = -(H_{i+1} - H_i)^T R^{-1} (y - h_{i+1}), (33b)
s_i = x^{i+1} - x^i, (33c)
w_i = z_i - T_i s_i. (33d)

Furthermore, the quasi-Newton relation T_{i+1} s_i = z_i is satisfied by construction. The iterations are initialized with T_0 = 0, and T_i should be replaced with τ_i T_i before the update (32), where

τ_i = min(1, |s_i^T z_i| / |s_i^T T_i s_i|) (34)

is a heuristic to ensure that T_i converges to zero when the residual vanishes. The covariance with this approach can be updated using (8) for the last iterate,

P̂_{t|t} = (I - K_i^q H_i) P̂_{t|t-1}. (35)

An explicit line-search parameter can further be introduced,

x^{i+1} = x^i + α^i [x̃^i + K_i^q (y_t - h_i - H_i x̃^i) - S_i^q T_i x̃^i], (36)

using the same argumentation as in Section III-B.

D. Levenberg-Marquardt Iterated EKF

Trust-region methods for GN are similar to the Hessian correction above in the way that the Newton step is computed, the main difference being that only a single parameter is adapted at runtime. The well-known Levenberg-Marquardt method, introduced by [18, 19], is such a method, and it is obtained by adding a damping parameter µ to the GN problem,

(J^T J + µ I) p = -J^T r. (37)

Applying this to (24) results in

x^{i+1} = x̂ + K_i (y_t - h_i - H_i x̃^i) - µ_i (I - K_i H_i) P̃ x̃^i, (38a)
P̃ = (I - P (P + (1/µ_i) I)^{-1}) P, (38b)
K_i = P̃ H_i^T (H_i P̃ H_i^T + R)^{-1}, (38c)

which is referred to as the Levenberg-Marquardt IEKF (LM-IEKF). The damping parameter works as an interpolation between GN and steepest descent when µ is small and large, respectively. The parameter also affects the step size, where large µ means shorter steps, reflecting the size of the trust region, and it should be adapted at each iteration.
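The damped step (37), with Marquardt's diagonal scaling and a simple increase/decrease rule for µ, can be sketched for a generic nonlinear least-squares problem as follows (our own naming and adaptation constants, not the paper's exact implementation):

```python
import numpy as np

def lm_step(x, r_fun, J_fun, mu, v=2.0):
    """One Levenberg-Marquardt step solving (J'J + mu*diag(J'J)) p = -J'r.
    Accept and decrease mu if the cost drops; otherwise reject and increase mu."""
    r, J = r_fun(x), J_fun(x)
    A = J.T @ J
    p = np.linalg.solve(A + mu * np.diag(np.diag(A)), -J.T @ r)
    rn = r_fun(x + p)
    if rn @ rn < r @ r:          # cost 0.5*||r||^2 decreased
        return x + p, mu / v     # accept the step, trust the model more
    return x, mu * v             # reject the step, shorten the next one
```

Repeating the step drives µ down near the solution, so the iteration behaves like GN close to the optimum and like a short steepest-descent step far from it, which is the interpolation described above.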
The relations in (38) were derived in [12] but, just as in [20], it was never indicated how µ should be selected; in fact, they never seem to perform any iterations. Therefore their solution rather works as Tikhonov regularization of the linearized problem with fixed µ. In [20, Example 1] the damping has to be selected as µ = cond(J^T J) · 10^{-6} in order to obtain roughly the same update. The update, counterintuitively, has a larger cost, V, than the EKF update but a lower RMSE. This example is extremely ill-conditioned and the Hessian approximation is thus not very useful. Furthermore, the condition number can serve as a simple heuristic for selecting the damping automatically, resulting in cond(J^T J + µ I). Another option, due to [19], is to start with some initial µ = µ_0 (typically µ_0 = 0.01) and a factor v > 1. The factor is used to either increase or decrease the damping if the cost is increased or decreased, respectively. The LM-IEKF can be improved by replacing the diagonal damping with µ diag(J^T J) in (37), as suggested by [19]. By doing so, larger steps are made in directions where the gradient is small, further speeding up convergence. The resulting equations for the problem (6) then become

x^{i+1} = x̂ + K_i (y_t - h_i - H_i x̃^i) - µ_i (I - K_i H_i) P̃ B_i x̃^i, (39a)
P̃ = (I - P (P + (1/µ_i) B_i^{-1})^{-1}) P, (39b)
K_i = P̃ H_i^T (H_i P̃ H_i^T + R)^{-1}, (39c)
B_i = diag(J_i^T J_i) = diag(H_i^T R^{-1} H_i + P^{-1}), (39d)

which is shown in Appendix B.

IV. RESULTS

The filters are evaluated in two simulations highlighting the effect of line search and the overall better performance obtained with the derived optimisation-based methods in a bearings-only tracking problem.

A. Line Search

To illustrate the effect of using line search in the EKF and IEKF, consider the measurement function

h(x) = x^2. (40)

This results in a cost function with two minima in ±√y and a local maximum at x = 0, which makes the problem difficult. Fig. 1 shows the estimates obtained with the EKF and IEKF initialized with x^0 = 0.1, R = 0.1 and P = 1, where V(x^0) = 4.90, while the true state is x = 1, and a perfect measurement y = 1 is given.
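The iterated update (27) on such an example can be sketched as follows (our own minimal implementation with simple step halving as the line search; the names are ours):

```python
import numpy as np

def iekf_update(x_pred, P_pred, y, h, H_jac, R, n_iter=5):
    """Iterated EKF measurement update (Algorithm 2) with step halving (IEKF-L sketch)."""
    def V(x):  # cost (6)
        ry, rx = y - h(x), x_pred - x
        return 0.5 * (ry @ np.linalg.solve(R, ry) + rx @ np.linalg.solve(P_pred, rx))
    x = x_pred.copy()
    for _ in range(n_iter):
        H = H_jac(x)                                            # (24a)
        K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)  # (24b)
        p = x_pred + K @ (y - h(x) - H @ (x_pred - x)) - x      # direction towards (24c)
        alpha = 1.0
        while V(x + alpha * p) >= V(x) and alpha > 1e-6:        # crude line search, cf. (16)
            alpha *= 0.5
        x = x + alpha * p
    return x, (np.eye(len(x)) - K @ H) @ P_pred                 # covariance as in (25b)/(8)
```

On the example above (x̂ = 0.1, P = 1, R = 0.1, y = 1) this sketch takes the halved first step to 0.807 and the subsequent iterations approach the MAP estimate near 0.978.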
In the IEKF the step length is reduced to α = 0.5 for the first iteration, resulting in x^1_IEKF = 0.807 with a clearly reduced cost, while the EKF, equivalent to the IEKF
without step-length adaption, yields x^1_EKF = 1.51 with a clearly increased cost. Another two iterations in the IEKF result in x^3_IEKF = 0.978.

Fig. 1. (Curves: EKF; IEKF - 1 iteration; IEKF - 3 iterations.) The EKF with a full step increases the cost, while half the step size results in a reduced cost. Remember that the EKF and IEKF are identical in the first step. Three IEKF iterations with line search significantly reduce the cost and further improve the estimate.

Two things can be noted from this simple example. First, the adaptive step length improves the EKF noticeably at a relatively low cost. Second, the iterations in the IEKF further improve the estimate. Furthermore, with a smaller measurement covariance, e.g., R = 0.01, the EKF performance degrades even more, giving x^1_EKF = 4.0 (not shown in Fig. 1), while the IEKF performs well. This illustrates that the EKF struggles with large residuals, as the linearisation point then differs much from its optimal value.

B. Bearings-Only Tracking

The next example, a bearings-only tracking problem involving several sensors, which is also studied in [21, Ch. 5] and [22, Ch. 4], is used to illustrate the IEKF-L, QN-EKF and LM-IEKF. In a first stage the target, with state x = [X, Y]^T, is stationary, i.e., w_t = 0, at the true position x = [1.5, 1.5]^T. The bearing measurement from the j-th sensor S^j at time t is

y_t^j = h^j(x_t) + e_t = arctan2(Y_t - S_Y^j, X_t - S_X^j) + e_t, (41)

where arctan2 is the two-argument arctangent function, and S_Y^j and S_X^j denote the Y and X coordinates of sensor j, respectively. The noise is e_t ~ N(0, R_t). The Jacobians of the dynamics and measurements are

F = I, H^j = [ -(Y - S_Y^j), X - S_X^j ] / ((X - S_X^j)^2 + (Y - S_Y^j)^2). (42)

The two sensors have positions S^1 = [0, 0]^T and S^2 = [1.5, 0]^T. The filters are initialised with x̂_{0|0} = [0.5, 0.1]^T, P_{0|0} = 0.1 I, R = π · 10^{-5} I and Q = 0.1 I. Fig. 2 shows the four filters' posteriors with 1σ covariances after a single
Fig. 2. (Curves: EKF; IEKF-L; LM-IEKF; QN-EKF.) All filters except for the EKF are iterated 5 times with line search and come closer to the true state, with a covariance that captures the uncertainty.

TABLE I. Monte Carlo simulation results (mean RMSE) for the four filters: EKF, IEKF-L, LM-IEKF and QN-EKF.

perfect measurement and 5 iterations. All filters, except for the EKF, result in acceptable estimates of the true target state and its uncertainty. After 10 iterations the IEKF-L, LM-IEKF and QN-EKF produce nearly identical results. For the final example 10 iterations are used to ensure that all filters have converged (less would have sufficed), and the result is clearly improved estimates compared to the EKF. A small Monte Carlo study is done with 10 trajectories generated by w_t ~ N(0, 0.1 I) and e_t ~ N(0, π · 10^{-5} I) for 10 time steps. The filters are initialized with the true position, x̂_{0|0} = [1.5, 1.5]^T, with P_{0|0} = 0.1 I. The results are summarized in Table I, where it can be seen again that all filters perform better than the EKF, with the IEKF-L and LM-IEKF being slightly better than the QN-EKF.

V. CONCLUSIONS

The cost function used in the extended Kalman filter (EKF) has been studied from an optimisation point of view. Applying optimisation techniques to the cost function motivates using an adaptive step length in the EKF, and three iterated EKF variations are derived. In simulations it is shown that these filters outperform the standard EKF, providing more accurate and consistent results. It remains to investigate the convergence rates obtained this way, as well as how to incorporate other EKF improvements such as the higher-order EKF, square-root
implementations for improved numerical stability, and how to deal with colored noise.

ACKNOWLEDGMENT

This work was funded by the Vinnova Industry Excellence Center LINK-SIC together with the Swedish Research Council through the grant Scalable Kalman Filters, and the Swedish strategic research center Security Link.

APPENDIX A
DERIVATION OF THE QUASI-NEWTON EKF

Using (7) and (15) gives, in simplified notation,

r_i = [ R^{-1/2} (y - h_i) ; P^{-1/2} (x̂ - x^i) ], (43)
J_i = -[ R^{-1/2} H_i ; P^{-1/2} ]. (44)

The right-hand side of (30) evaluates to

x^i - (J_i^T J_i + T_i)^{-1} J_i^T r_i
= x^i + (H_i^T R^{-1} H_i + P^{-1} + T_i)^{-1} (H_i^T R^{-1} (y - h_i) + P^{-1} (x̂ - x^i))
= x̂ + (H_i^T R^{-1} H_i + P^{-1} + T_i)^{-1} (H_i^T R^{-1} (y - h_i - H_i (x̂ - x^i)) - T_i (x̂ - x^i))
= x̂ + S_i^q H_i^T R^{-1} (y - h_i - H_i x̃^i) - S_i^q T_i x̃^i
= x̂ + K_i^q (y_t - h_i - H_i x̃^i) - S_i^q T_i x̃^i, (45)

which is the expression in (31) with x̃^i = x̂ - x^i.

APPENDIX B
DERIVATION OF THE MODIFIED LM-IEKF

The LM-IEKF with the modified step is evaluated using (37), (24), (43) and (44) as

x^i - (J_i^T J_i + µ_i B_i)^{-1} J_i^T r_i
= x^i + (H_i^T R^{-1} H_i + P^{-1} + µ_i B_i)^{-1} (H_i^T R^{-1} (y - h_i) + P^{-1} (x̂ - x^i))
= x̂ + (H_i^T R^{-1} H_i + P^{-1} + µ_i B_i)^{-1} (H_i^T R^{-1} (y - h_i - H_i x̃^i) - µ_i B_i x̃^i)
= x̂ + K_i (y_t - h_i - H_i x̃^i) - µ_i (I - K_i H_i) P̃ B_i x̃^i, (46)

which is the expression in (39), where we have used the relations

K_i = (H_i^T R^{-1} H_i + P̃^{-1})^{-1} H_i^T R^{-1} = P̃ H_i^T (H_i P̃ H_i^T + R)^{-1}, (47)
(H_i^T R^{-1} H_i + P̃^{-1})^{-1} = (I - K_i H_i) P̃, (48)
P̃ = (P^{-1} + µ_i B_i)^{-1} = (I - P (P + (1/µ_i) B_i^{-1})^{-1}) P. (49)

REFERENCES

[1] R. E. Kalman, "A new approach to linear filtering and prediction problems," Transactions of the ASME, Journal of Basic Engineering, vol. 82, no. Series D, pp. 35-45, 1960.
[2] A. H. Jazwinski, Stochastic Processes and Filtering Theory, ser. Mathematics in Science and Engineering. Academic Press, Inc., 1970, vol. 64.
[3] N. U. Ahmed and S. M. Radaideh, "Modified extended Kalman filtering," IEEE Transactions on Automatic Control, vol. 39, no. 6, pp. 1322-1326, Jun. 1994.
[4] S. Julier, J. Uhlmann, and H. F. Durrant-Whyte, "A new method for the nonlinear transformation of means and covariances in filters and estimators," IEEE Transactions on Automatic Control, vol. 45, no. 3, pp. 477-482, Mar. 2000.
[5] S. J. Julier and J. K. Uhlmann, "Unscented filtering and nonlinear estimation," Proceedings of the IEEE, vol. 92, no. 3, pp. 401-422, Mar. 2004.
[6] F. Gustafsson and G. Hendeby, "Some relations between extended and unscented Kalman filters," IEEE Transactions on Signal Processing, vol. 60, no. 2, pp. 545-555, Feb. 2012.
[7] G. A. Einicke and L. B. White, "Robust extended Kalman filtering," IEEE Transactions on Signal Processing, vol. 47, no. 9, pp. 2596-2599, Sep. 1999.
[8] D. Bertsekas, "Incremental least squares methods and the extended Kalman filter," in Proceedings of the 33rd IEEE Conference on Decision and Control, vol. 1, Lake Buena Vista, Florida, USA, Dec. 1994.
[9] P. J. G. Teunissen, "On the local convergence of the iterated extended Kalman filter," in Proceedings of the XX General Assembly of the IUGG, IAG Section IV. Vienna: IAG, Aug. 1991.
[10] G. K. Smyth, Encyclopedia of Environmetrics. John Wiley & Sons, Ltd, 2006.
[11] B. M. Bell and F. W. Cathey, "The iterated Kalman filter update as a Gauss-Newton method," IEEE Transactions on Automatic Control, vol. 38, no. 2, pp. 294-297, Feb. 1993.
[12] R. L. Bellaire, E. W. Kamen, and S. M. Zabin, "New nonlinear iterated filter with applications to target tracking," in SPIE Signal and Data Processing of Small Targets, vol. 2561, San Diego, CA, USA, Sep. 1995.
[13] H. Moriyama, N. Yamashita, and M. Fukushima, "The incremental Gauss-Newton algorithm with adaptive stepsize rule," Computational Optimization and Applications, vol. 26, no. 2, pp. 107-141, 2003.
[14] J. Nocedal and S. J. Wright, Numerical Optimization, 2nd ed. New York: Springer, 2006.
[15] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
[16] Y. Bar-Shalom and X.-R. Li, Estimation and Tracking: Principles, Techniques and Software. Boston: Artech House, 1993.
[17] J. E. Dennis, D. M. Gay, and R. E. Welsch, "Algorithm 573: NL2SOL, an adaptive nonlinear least-squares algorithm," ACM Transactions on Mathematical Software, vol. 7, pp. 369-383, 1981.
[18] K. Levenberg, "A method for the solution of certain non-linear problems in least squares," Quarterly Journal of Applied Mathematics, vol. II, no. 2, pp. 164-168, 1944.
[19] D. W. Marquardt, "An algorithm for least-squares estimation of nonlinear parameters," SIAM Journal on Applied Mathematics, vol. 11, no. 2, pp. 431-441, 1963.
[20] M. Fatemi, L. Svensson, L. Hammarstrand, and M. Morelande, "A study of MAP estimation techniques for nonlinear filtering," in Proceedings of the International Conference on Information Fusion (FUSION), 2013.
[21] G. Hendeby, "Performance and implementation aspects of nonlinear filtering," Dissertations No. 1161, Linköping Studies in Science and Technology, Mar. 2008.
[22] G. Hendeby, "Fundamental estimation and detection limits in linear non-Gaussian systems," Licentiate Thesis No. 1199, Department of Electrical Engineering, Linköpings universitet, Sweden, Nov. 2005.
Selected Topics in Optimization Some slides borrowed from http://www.stat.cmu.edu/~ryantibs/convexopt/ Overview Optimization problems are almost everywhere in statistics and machine learning. Input Model
More informationA comparison of estimation accuracy by the use of KF, EKF & UKF filters
Computational Methods and Eperimental Measurements XIII 779 A comparison of estimation accurac b the use of KF EKF & UKF filters S. Konatowski & A. T. Pieniężn Department of Electronics Militar Universit
More informationState Space and Hidden Markov Models
State Space and Hidden Markov Models Kunsch H.R. State Space and Hidden Markov Models. ETH- Zurich Zurich; Aliaksandr Hubin Oslo 2014 Contents 1. Introduction 2. Markov Chains 3. Hidden Markov and State
More informationTime Series Prediction by Kalman Smoother with Cross-Validated Noise Density
Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density Simo Särkkä E-mail: simo.sarkka@hut.fi Aki Vehtari E-mail: aki.vehtari@hut.fi Jouko Lampinen E-mail: jouko.lampinen@hut.fi Abstract
More informationAutomated Tuning of the Nonlinear Complementary Filter for an Attitude Heading Reference Observer
Automated Tuning of the Nonlinear Complementary Filter for an Attitude Heading Reference Observer Oscar De Silva, George K.I. Mann and Raymond G. Gosine Faculty of Engineering and Applied Sciences, Memorial
More informationLecture 2: From Linear Regression to Kalman Filter and Beyond
Lecture 2: From Linear Regression to Kalman Filter and Beyond January 18, 2017 Contents 1 Batch and Recursive Estimation 2 Towards Bayesian Filtering 3 Kalman Filter and Bayesian Filtering and Smoothing
More information2 Nonlinear least squares algorithms
1 Introduction Notes for 2017-05-01 We briefly discussed nonlinear least squares problems in a previous lecture, when we described the historical path leading to trust region methods starting from the
More informationBregman Divergence and Mirror Descent
Bregman Divergence and Mirror Descent Bregman Divergence Motivation Generalize squared Euclidean distance to a class of distances that all share similar properties Lots of applications in machine learning,
More informationParticle Methods as Message Passing
Particle Methods as Message Passing Justin Dauwels RIKEN Brain Science Institute Hirosawa,2-1,Wako-shi,Saitama,Japan Email: justin@dauwels.com Sascha Korl Phonak AG CH-8712 Staefa, Switzerland Email: sascha.korl@phonak.ch
More informationEKF, UKF. Pieter Abbeel UC Berkeley EECS. Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics
EKF, UKF Pieter Abbeel UC Berkeley EECS Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics Kalman Filter Kalman Filter = special case of a Bayes filter with dynamics model and sensory
More informationEKF, UKF. Pieter Abbeel UC Berkeley EECS. Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics
EKF, UKF Pieter Abbeel UC Berkeley EECS Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics Kalman Filter Kalman Filter = special case of a Bayes filter with dynamics model and sensory
More informationRiccati difference equations to non linear extended Kalman filter constraints
International Journal of Scientific & Engineering Research Volume 3, Issue 12, December-2012 1 Riccati difference equations to non linear extended Kalman filter constraints Abstract Elizabeth.S 1 & Jothilakshmi.R
More informationTopic 8c Multi Variable Optimization
Course Instructor Dr. Raymond C. Rumpf Office: A 337 Phone: (915) 747 6958 E Mail: rcrumpf@utep.edu Topic 8c Multi Variable Optimization EE 4386/5301 Computational Methods in EE Outline Mathematical Preliminaries
More informationGLOBALLY CONVERGENT GAUSS-NEWTON METHODS
GLOBALLY CONVERGENT GAUSS-NEWTON METHODS by C. Fraley TECHNICAL REPORT No. 200 March 1991 Department of Statistics, GN-22 University of Washington Seattle, Washington 98195 USA Globally Convergent Gauss-Newton
More informationReading Group on Deep Learning Session 1
Reading Group on Deep Learning Session 1 Stephane Lathuiliere & Pablo Mesejo 2 June 2016 1/31 Contents Introduction to Artificial Neural Networks to understand, and to be able to efficiently use, the popular
More informationState Estimation for Nonlinear Systems using Restricted Genetic Optimization
State Estimation for Nonlinear Systems using Restricted Genetic Optimization Santiago Garrido, Luis Moreno, and Carlos Balaguer Universidad Carlos III de Madrid, Leganés 28911, Madrid (Spain) Abstract.
More informationUsing the Kalman Filter to Estimate the State of a Maneuvering Aircraft
1 Using the Kalman Filter to Estimate the State of a Maneuvering Aircraft K. Meier and A. Desai Abstract Using sensors that only measure the bearing angle and range of an aircraft, a Kalman filter is implemented
More informationEngineering Part IIB: Module 4F10 Statistical Pattern Processing Lecture 5: Single Layer Perceptrons & Estimating Linear Classifiers
Engineering Part IIB: Module 4F0 Statistical Pattern Processing Lecture 5: Single Layer Perceptrons & Estimating Linear Classifiers Phil Woodland: pcw@eng.cam.ac.uk Michaelmas 202 Engineering Part IIB:
More informationAdaptive Unscented Kalman Filter with Multiple Fading Factors for Pico Satellite Attitude Estimation
Adaptive Unscented Kalman Filter with Multiple Fading Factors for Pico Satellite Attitude Estimation Halil Ersin Söken and Chingiz Hajiyev Aeronautics and Astronautics Faculty Istanbul Technical University
More informationL06. LINEAR KALMAN FILTERS. NA568 Mobile Robotics: Methods & Algorithms
L06. LINEAR KALMAN FILTERS NA568 Mobile Robotics: Methods & Algorithms 2 PS2 is out! Landmark-based Localization: EKF, UKF, PF Today s Lecture Minimum Mean Square Error (MMSE) Linear Kalman Filter Gaussian
More informationIterative Filtering and Smoothing In Non-Linear and Non-Gaussian Systems Using Conditional Moments
Iterative Filtering and Smoothing In Non-Linear and Non-Gaussian Systems Using Conditional Moments Filip Tronarp, Ángel F. García-Fernández, and Simo Särkkä This is a post-print of a paper published in
More informationSparse Levenberg-Marquardt algorithm.
Sparse Levenberg-Marquardt algorithm. R. I. Hartley and A. Zisserman: Multiple View Geometry in Computer Vision. Cambridge University Press, second edition, 2004. Appendix 6 was used in part. The Levenberg-Marquardt
More informationj=1 r 1 x 1 x n. r m r j (x) r j r j (x) r j (x). r j x k
Maria Cameron Nonlinear Least Squares Problem The nonlinear least squares problem arises when one needs to find optimal set of parameters for a nonlinear model given a large set of data The variables x,,
More informationComparison of Kalman Filter Estimation Approaches for State Space Models with Nonlinear Measurements
Comparison of Kalman Filter Estimation Approaches for State Space Models with Nonlinear Measurements Fredri Orderud Sem Sælands vei 7-9, NO-7491 Trondheim Abstract The Etended Kalman Filter (EKF) has long
More informationNON-NEGATIVE MATRIX FACTORIZATION FOR PARAMETER ESTIMATION IN HIDDEN MARKOV MODELS. Balaji Lakshminarayanan and Raviv Raich
NON-NEGATIVE MATRIX FACTORIZATION FOR PARAMETER ESTIMATION IN HIDDEN MARKOV MODELS Balaji Lakshminarayanan and Raviv Raich School of EECS, Oregon State University, Corvallis, OR 97331-551 {lakshmba,raich@eecs.oregonstate.edu}
More informationOptimization 2. CS5240 Theoretical Foundations in Multimedia. Leow Wee Kheng
Optimization 2 CS5240 Theoretical Foundations in Multimedia Leow Wee Kheng Department of Computer Science School of Computing National University of Singapore Leow Wee Kheng (NUS) Optimization 2 1 / 38
More informationVisual SLAM Tutorial: Bundle Adjustment
Visual SLAM Tutorial: Bundle Adjustment Frank Dellaert June 27, 2014 1 Minimizing Re-projection Error in Two Views In a two-view setting, we are interested in finding the most likely camera poses T1 w
More informationGaussian Process Approximations of Stochastic Differential Equations
Gaussian Process Approximations of Stochastic Differential Equations Cédric Archambeau Dan Cawford Manfred Opper John Shawe-Taylor May, 2006 1 Introduction Some of the most complex models routinely run
More informationIPAM Summer School Optimization methods for machine learning. Jorge Nocedal
IPAM Summer School 2012 Tutorial on Optimization methods for machine learning Jorge Nocedal Northwestern University Overview 1. We discuss some characteristics of optimization problems arising in deep
More informationIntroduction to Unscented Kalman Filter
Introduction to Unscented Kalman Filter 1 Introdution In many scientific fields, we use certain models to describe the dynamics of system, such as mobile robot, vision tracking and so on. The word dynamics
More informationA Systematic Approach for Kalman-type Filtering with non-gaussian Noises
Tampere University of Technology A Systematic Approach for Kalman-type Filtering with non-gaussian Noises Citation Raitoharju, M., Piche, R., & Nurminen, H. (26). A Systematic Approach for Kalman-type
More informationA Comparison of Particle Filters for Personal Positioning
VI Hotine-Marussi Symposium of Theoretical and Computational Geodesy May 9-June 6. A Comparison of Particle Filters for Personal Positioning D. Petrovich and R. Piché Institute of Mathematics Tampere University
More informationStable Adaptive Momentum for Rapid Online Learning in Nonlinear Systems
Stable Adaptive Momentum for Rapid Online Learning in Nonlinear Systems Thore Graepel and Nicol N. Schraudolph Institute of Computational Science ETH Zürich, Switzerland {graepel,schraudo}@inf.ethz.ch
More informationNON-LINEAR NOISE ADAPTIVE KALMAN FILTERING VIA VARIATIONAL BAYES
2013 IEEE INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING NON-LINEAR NOISE ADAPTIVE KALMAN FILTERING VIA VARIATIONAL BAYES Simo Särä Aalto University, 02150 Espoo, Finland Jouni Hartiainen
More informationMachine Learning Basics III
Machine Learning Basics III Benjamin Roth CIS LMU München Benjamin Roth (CIS LMU München) Machine Learning Basics III 1 / 62 Outline 1 Classification Logistic Regression 2 Gradient Based Optimization Gradient
More informationCombined Particle and Smooth Variable Structure Filtering for Nonlinear Estimation Problems
14th International Conference on Information Fusion Chicago, Illinois, USA, July 5-8, 2011 Combined Particle and Smooth Variable Structure Filtering for Nonlinear Estimation Problems S. Andrew Gadsden
More informationNumerical Methods. Root Finding
Numerical Methods Solving Non Linear 1-Dimensional Equations Root Finding Given a real valued function f of one variable (say ), the idea is to find an such that: f() 0 1 Root Finding Eamples Find real
More informationAutonomous Mobile Robot Design
Autonomous Mobile Robot Design Topic: Extended Kalman Filter Dr. Kostas Alexis (CSE) These slides relied on the lectures from C. Stachniss, J. Sturm and the book Probabilistic Robotics from Thurn et al.
More informationA Gaussian Mixture Motion Model and Contact Fusion Applied to the Metron Data Set
1th International Conference on Information Fusion Chicago, Illinois, USA, July 5-8, 211 A Gaussian Mixture Motion Model and Contact Fusion Applied to the Metron Data Set Kathrin Wilkens 1,2 1 Institute
More information9 Multi-Model State Estimation
Technion Israel Institute of Technology, Department of Electrical Engineering Estimation and Identification in Dynamical Systems (048825) Lecture Notes, Fall 2009, Prof. N. Shimkin 9 Multi-Model State
More informationData assimilation with and without a model
Data assimilation with and without a model Tim Sauer George Mason University Parameter estimation and UQ U. Pittsburgh Mar. 5, 2017 Partially supported by NSF Most of this work is due to: Tyrus Berry,
More informationThe K-FAC method for neural network optimization
The K-FAC method for neural network optimization James Martens Thanks to my various collaborators on K-FAC research and engineering: Roger Grosse, Jimmy Ba, Vikram Tankasali, Matthew Johnson, Daniel Duckworth,
More informationDesign of Adaptive Filtering Algorithm for Relative Navigation
Design of Adaptive Filtering Algorithm for Relative Navigation Je Young Lee, Hee Sung Kim, Kwang Ho Choi, Joonhoo Lim, Sung Jin Kang, Sebum Chun, and Hyung Keun Lee Abstract Recently, relative navigation
More informationLecture 14: Newton s Method
10-725/36-725: Conve Optimization Fall 2016 Lecturer: Javier Pena Lecture 14: Newton s ethod Scribes: Varun Joshi, Xuan Li Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer: These notes
More informationROBUST CONSTRAINED ESTIMATION VIA UNSCENTED TRANSFORMATION. Pramod Vachhani a, Shankar Narasimhan b and Raghunathan Rengaswamy a 1
ROUST CONSTRINED ESTIMTION VI UNSCENTED TRNSFORMTION Pramod Vachhani a, Shankar Narasimhan b and Raghunathan Rengaswamy a a Department of Chemical Engineering, Clarkson University, Potsdam, NY -3699, US.
More informationState Estimation using Moving Horizon Estimation and Particle Filtering
State Estimation using Moving Horizon Estimation and Particle Filtering James B. Rawlings Department of Chemical and Biological Engineering UW Math Probability Seminar Spring 2009 Rawlings MHE & PF 1 /
More information2 Statistical Estimation: Basic Concepts
Technion Israel Institute of Technology, Department of Electrical Engineering Estimation and Identification in Dynamical Systems (048825) Lecture Notes, Fall 2009, Prof. N. Shimkin 2 Statistical Estimation:
More informationMini-Course 07 Kalman Particle Filters. Henrique Massard da Fonseca Cesar Cunha Pacheco Wellington Bettencurte Julio Dutra
Mini-Course 07 Kalman Particle Filters Henrique Massard da Fonseca Cesar Cunha Pacheco Wellington Bettencurte Julio Dutra Agenda State Estimation Problems & Kalman Filter Henrique Massard Steady State
More informationL20: MLPs, RBFs and SPR Bayes discriminants and MLPs The role of MLP hidden units Bayes discriminants and RBFs Comparison between MLPs and RBFs
L0: MLPs, RBFs and SPR Bayes discriminants and MLPs The role of MLP hidden units Bayes discriminants and RBFs Comparison between MLPs and RBFs CSCE 666 Pattern Analysis Ricardo Gutierrez-Osuna CSE@TAMU
More informationNonlinear and/or Non-normal Filtering. Jesús Fernández-Villaverde University of Pennsylvania
Nonlinear and/or Non-normal Filtering Jesús Fernández-Villaverde University of Pennsylvania 1 Motivation Nonlinear and/or non-gaussian filtering, smoothing, and forecasting (NLGF) problems are pervasive
More informationVasil Khalidov & Miles Hansard. C.M. Bishop s PRML: Chapter 5; Neural Networks
C.M. Bishop s PRML: Chapter 5; Neural Networks Introduction The aim is, as before, to find useful decompositions of the target variable; t(x) = y(x, w) + ɛ(x) (3.7) t(x n ) and x n are the observations,
More informationProgramming, numerics and optimization
Programming, numerics and optimization Lecture C-3: Unconstrained optimization II Łukasz Jankowski ljank@ippt.pan.pl Institute of Fundamental Technological Research Room 4.32, Phone +22.8261281 ext. 428
More informationArtificial Neural Networks 2
CSC2515 Machine Learning Sam Roweis Artificial Neural s 2 We saw neural nets for classification. Same idea for regression. ANNs are just adaptive basis regression machines of the form: y k = j w kj σ(b
More informationLast updated: Oct 22, 2012 LINEAR CLASSIFIERS. J. Elder CSE 4404/5327 Introduction to Machine Learning and Pattern Recognition
Last updated: Oct 22, 2012 LINEAR CLASSIFIERS Problems 2 Please do Problem 8.3 in the textbook. We will discuss this in class. Classification: Problem Statement 3 In regression, we are modeling the relationship
More informationKalman Filters with Uncompensated Biases
Kalman Filters with Uncompensated Biases Renato Zanetti he Charles Stark Draper Laboratory, Houston, exas, 77058 Robert H. Bishop Marquette University, Milwaukee, WI 53201 I. INRODUCION An underlying assumption
More informationLecture 3: Latent Variables Models and Learning with the EM Algorithm. Sam Roweis. Tuesday July25, 2006 Machine Learning Summer School, Taiwan
Lecture 3: Latent Variables Models and Learning with the EM Algorithm Sam Roweis Tuesday July25, 2006 Machine Learning Summer School, Taiwan Latent Variable Models What to do when a variable z is always
More informationMaximum Likelihood Estimation
Maximum Likelihood Estimation Prof. C. F. Jeff Wu ISyE 8813 Section 1 Motivation What is parameter estimation? A modeler proposes a model M(θ) for explaining some observed phenomenon θ are the parameters
More informationStatistical Geometry Processing Winter Semester 2011/2012
Statistical Geometry Processing Winter Semester 2011/2012 Linear Algebra, Function Spaces & Inverse Problems Vector and Function Spaces 3 Vectors vectors are arrows in space classically: 2 or 3 dim. Euclidian
More information17 Solution of Nonlinear Systems
17 Solution of Nonlinear Systems We now discuss the solution of systems of nonlinear equations. An important ingredient will be the multivariate Taylor theorem. Theorem 17.1 Let D = {x 1, x 2,..., x m
More informationCh 4. Linear Models for Classification
Ch 4. Linear Models for Classification Pattern Recognition and Machine Learning, C. M. Bishop, 2006. Department of Computer Science and Engineering Pohang University of Science and echnology 77 Cheongam-ro,
More informationGRADIENT DESCENT. CSE 559A: Computer Vision GRADIENT DESCENT GRADIENT DESCENT [0, 1] Pr(y = 1) w T x. 1 f (x; θ) = 1 f (x; θ) = exp( w T x)
0 x x x CSE 559A: Computer Vision For Binary Classification: [0, ] f (x; ) = σ( x) = exp( x) + exp( x) Output is interpreted as probability Pr(y = ) x are the log-odds. Fall 207: -R: :30-pm @ Lopata 0
More informationMinimization of Static! Cost Functions!
Minimization of Static Cost Functions Robert Stengel Optimal Control and Estimation, MAE 546, Princeton University, 2017 J = Static cost function with constant control parameter vector, u Conditions for
More informationLecture 3: Pattern Classification. Pattern classification
EE E68: Speech & Audio Processing & Recognition Lecture 3: Pattern Classification 3 4 5 The problem of classification Linear and nonlinear classifiers Probabilistic classification Gaussians, mitures and
More informationA Statistical Input Pruning Method for Artificial Neural Networks Used in Environmental Modelling
A Statistical Input Pruning Method for Artificial Neural Networks Used in Environmental Modelling G. B. Kingston, H. R. Maier and M. F. Lambert Centre for Applied Modelling in Water Engineering, School
More informationLeast Squares with Examples in Signal Processing 1. 2 Overdetermined equations. 1 Notation. The sum of squares of x is denoted by x 2 2, i.e.
Least Squares with Eamples in Signal Processing Ivan Selesnick March 7, 3 NYU-Poly These notes address (approimate) solutions to linear equations by least squares We deal with the easy case wherein the
More information