RESEARCH ARTICLE. Online quantization in nonlinear filtering
Journal of Statistical Computation & Simulation, Vol. 00, No. 00, Month 200x

A. Feuer (Department of EE, Technion, Haifa, Israel; feuer@ee.technion.ac.il) and G. C. Goodwin (School of Electrical Engineering and Computer Science, University of Newcastle, Newcastle, Australia; graham.goodwin@newcastle.edu.au)

(Received 00 Month 200x; in final form 00 Month 200x)

We propose a novel approach to nonlinear filtering utilizing on-line quantization. We develop performance bounds for the algorithm. We also present an example which illustrates the advantages of the method relative to related schemes which utilize off-line quantization.

Keywords: Nonlinear filter; Vector quantization; Kalman filter

AMS Subject Classification: 93E11; 93E10; 93E25; 93C10

1. Introduction

Sequential Bayesian filtering arises in many practical problems. The complexity of this problem depends very much on the underlying mathematical model. When a linear Gaussian model is assumed, the well-known Kalman filter provides the desired optimal solution. However, in many situations these assumptions do not hold, even approximately.

We consider here a somewhat simplified version of the general setup. Namely, we have the following state-space model

$$X_k = F(X_{k-1}) + W_k, \qquad Y_k = G(X_k) + V_k, \qquad (1)$$

where $X_k \in \mathbb{R}^d$ and $Y_k \in \mathbb{R}^m$ are the system state and measurements respectively, and $W_k \in \mathbb{R}^d$ and $V_k \in \mathbb{R}^m$ are process and measurement noises, typically assumed to be i.i.d. sequences, independent of each other, with Gaussian probability density functions (pdfs) $p_w(w) = N(0,Q)$ and $p_v(v) = N(0,R)$. It is also assumed that the initial state is a random vector with pdf $p_0(x_0) = N(0,Q_0)$ and that the functions $F$ and $G$ are known.

One would like to be able to estimate, at time $t > 0$, a function $f:\mathbb{R}^d \to \mathbb{R}$ of the state, $f(X_t)$, given the measurements $Y_t = \{y_k : k = 1,\dots,t\}$. This estimate, which is the filtering operation, is given by

$$\Pi_{y,t} f = E\{f(X_t) \mid Y_t\} = \int f(x_t)\,p_t(x_t \mid Y_t)\,dx_t. \qquad (2)$$
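The model (1) is straightforward to simulate. The following minimal Python sketch does so in the scalar case ($d = m = 1$); the particular maps F and G, the noise variances and the horizon are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def F(x):
    return 0.9 * x        # illustrative state map

def G(x):
    return x              # illustrative output map

Q, R, Q0 = 0.1, 0.1, 1.0  # process, measurement and initial variances
T = 100                   # horizon

x = rng.normal(0.0, np.sqrt(Q0))             # X_0 ~ N(0, Q0)
xs_true, ys = [], []
for _ in range(T):
    x = F(x) + rng.normal(0.0, np.sqrt(Q))   # X_k = F(X_{k-1}) + W_k
    y = G(x) + rng.normal(0.0, np.sqrt(R))   # Y_k = G(X_k) + V_k
    xs_true.append(x)
    ys.append(y)
```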
Clearly, to be able to calculate this estimate one needs to know the pdf $p_t(x_t \mid Y_t)$. To this end one could iteratively use the Chapman-Kolmogorov equation (see e.g. [1] or [2]): suppose that at time $k-1$ the pdf $p_{k-1}(x_{k-1} \mid Y_{k-1})$ is available. Using the model equation (1) we can derive the prior pdf

$$p_k(x_k \mid Y_{k-1}) = \int p(x_k \mid x_{k-1})\,p_{k-1}(x_{k-1} \mid Y_{k-1})\,dx_{k-1}, \qquad (3)$$

where (see [3])

$$p(x_k \mid x_{k-1}) = p_w\big(x_k - F(x_{k-1})\big). \qquad (4)$$

At time step $k$ the new measurement $y_k$ becomes available, so that it can be used to update the pdf

$$p_k(x_k \mid Y_k) = \frac{p_k(y_k \mid x_k)\,p_k(x_k \mid Y_{k-1})}{p_k(y_k \mid Y_{k-1})}, \qquad (5)$$

where

$$p_k(y_k \mid x_k) = p_v\big(y_k - G(x_k)\big) \qquad (6)$$

and

$$p_k(y_k \mid Y_{k-1}) = \int p_k(y_k \mid x_k)\,p_k(x_k \mid Y_{k-1})\,dx_k. \qquad (7)$$

While the above expressions provide a theoretical framework, only a very limited number of special cases can be directly solved this way. A prime example is the linear Gaussian case, for which closed-form expressions can be derived, resulting in the well-known Kalman filter. However, for the general nonlinear filtering problem no exact solutions can be derived, hence the need for numerical approximation methods. There are many such methods in the literature; we refer the reader to the comprehensive surveys and discussion in [1], [2] and the references therein.

Common to all these approximation methods is replacing the integral in (2) with a finite sum of the form

$$\Pi_{y,t} f = E\{f(X_t) \mid Y_t\} \approx \sum_{i=1}^{N_t} P_{t,i}\,f\big(x_t^i\big). \qquad (8)$$

These methods differ in the way the values $\{x_t^i\}_{i=1}^{N_t} \subset \mathbb{R}^d$ and $\{P_{t,i}\}_{i=1}^{N_t}$ are generated. One class is based on a Monte Carlo random approach, commonly referred to as particle filtering, while another class uses deterministic considerations.
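Continuing the sketch above, the recursion (3)-(7) can be carried out numerically on a fixed fine grid in the scalar case; this brute-force discretization is also the kind of "very fine gridding" used as ground truth in Section 4. The grid range and resolution below are illustrative.

```python
from scipy.stats import norm

grid = np.linspace(-5.0, 5.0, 1000)
dx = grid[1] - grid[0]

def bayes_step(p, y):
    # Prediction (3)-(4): p(x_k | Y_{k-1}) = integral of p_w(x_k - F(x)) p(x | Y_{k-1}) dx
    trans = norm.pdf(grid[:, None] - F(grid[None, :]), 0.0, np.sqrt(Q))
    prior = trans @ p * dx
    # Update (5)-(7): multiply by the likelihood p_v(y - G(x_k)) and normalize
    post = norm.pdf(y - G(grid), 0.0, np.sqrt(R)) * prior
    return post / (post.sum() * dx)

p = norm.pdf(grid, 0.0, np.sqrt(Q0))  # p_0(x_0)
for y in ys:
    p = bayes_step(p, y)              # p_k(x_k | Y_k) on the fine grid
```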
Let us consider, for a moment, the estimation error

$$\left|E\{f(X_t)\mid Y_t\} - \sum_{i=1}^{N_t} P_{t,i}\,f(x_t^i)\right| = \left|\int f(x_t)\,p_t(x_t\mid Y_t)\,dx_t - \sum_{i=1}^{N_t} P_{t,i}\,f(x_t^i)\right|.$$

Since, for every choice of $\{P_{t,i}\}_{i=1}^{N_t}$, there exists a tiling $\{A_{t,i}\}_{i=1}^{N_t}$ such that $P_{t,i} = \int_{A_{t,i}} p_t(x_t \mid Y_t)\,dx_t$ (by tiling we mean that $A_{t,i} \cap A_{t,j} = \emptyset$ for every $i \ne j$ and $\bigcup_{i=1}^{N_t} A_{t,i} = \mathbb{R}^d$), we can write

$$\left|E\{f(X_t)\mid Y_t\} - \sum_{i=1}^{N_t} P_{t,i}\,f(x_t^i)\right| = \left|\sum_{i=1}^{N_t}\int_{A_{t,i}}\big(f(x_t) - f(x_t^i)\big)\,p_t(x_t\mid Y_t)\,dx_t\right| \le \sum_{i=1}^{N_t}\int_{A_{t,i}}\big|f(x_t) - f(x_t^i)\big|\,p_t(x_t\mid Y_t)\,dx_t,$$

and, assuming $f$ to be Lipschitz, namely $|f(x_t) - f(x_t^i)| \le [f]_{\mathrm{Lip}}\,\|x_t - x_t^i\|$, we obtain the bound

$$\left|E\{f(X_t)\mid Y_t\} - \sum_{i=1}^{N_t} P_{t,i}\,f(x_t^i)\right| \le [f]_{\mathrm{Lip}}\sum_{i=1}^{N_t}\int_{A_{t,i}}\|x_t - x_t^i\|\,p_t(x_t\mid Y_t)\,dx_t. \qquad (9)$$

The bound derived above for the estimation error is readily recognized as the distortion in a quantization problem for the random variable $X_t$ with pdf $p_t(x_t\mid Y_t)$, encoder $\{A_{t,i}\}_{i=1}^{N_t}$ and decoder (or codebook) $\{x_t^i\}_{i=1}^{N_t}$. This clearly provides a motivation for applying optimal quantization as a means of obtaining the desired approximation (8).

In light of the above it is hardly surprising that an approach in the deterministic class, using vector quantization, has been suggested in [4]. We will describe this approach in some detail in the next section, quote some of the results and point to a potential weakness. This provides the basis for the novel method we propose here, which is also based on vector quantization but, as we argue later, does not suffer from the same potential weakness. In [5] the quantization methods described in [4] are compared to particle filtering. The comparison indicates an advantage of the quantization approach over particle filtering with the same grid size, especially for small grid sizes. Given this comparison, we focus our attention on quantization methods and demonstrate the advantage of our approach over the ones presented in [4]. In the sequel we assume that the reader has some familiarity with optimal vector quantization and suggest [7] as a reference on the subject.

2. Quantization in nonlinear filtering

Before we describe the quantization approaches it will be helpful to use (3)-(7) to rewrite (2) as follows:

$$\Pi_{y,t} f = \frac{\pi_{y,t} f}{\pi_{y,t}\mathbf{1}} = \frac{E\{f(X_t)\,L(X_t,Y_t)\}}{E\{L(X_t,Y_t)\}}, \qquad (10)$$

where we recall that $Y_t = \{y_k : k = 1,\dots,t\}$ while $X_t = \{X_k : k = 1,\dots,t\}$ is the set of random states, and

$$L(X_t,Y_t) = \prod_{k=1}^{t} p_k(y_k \mid X_k). \qquad (11)$$

We further denote

$$\pi_{y,t}\mathbf{1} = E\{L(X_t,Y_t)\} = \varphi_t(Y_t). \qquad (12)$$

Note that $\pi_{y,t} f$ is commonly referred to as the unnormalized filter.
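Before describing the individual schemes, the following small sketch (continuing the examples above) illustrates the quantization objects behind the bound (9): a Lloyd-type algorithm, here SciPy's k-means, produces a codebook whose Voronoi cells play the role of the tiling, and the mean distance to the nearest code point is the distortion appearing in (9) for the empirical density. Sample and codebook sizes are arbitrary illustrative values, and k-means minimizes the squared rather than the first-order distortion; for illustration the distinction is immaterial.

```python
from scipy.cluster.vq import kmeans2

samples = rng.normal(0.0, 1.0, size=(10000, 1))  # stand-in for p_t(x_t | Y_t)
codebook, labels = kmeans2(samples, k=25, minit='++', seed=0)
distortion = np.mean(np.linalg.norm(samples - codebook[labels], axis=1))
```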
2.1. Marginal quantization

Here we briefly describe the marginal quantization filtering algorithm of [4]. The sequence of pdfs $p_k(x_k)$ for the Markov chain $\{X_k\}_{k=0}^{t}$ can be calculated off-line and optimally quantized, resulting in the set of grids $\{\Gamma_k = \{x_{k,1},\dots,x_{k,N}\}\}_{k=0}^{t}$ and the corresponding Voronoi cells $\{V_{k,1},\dots,V_{k,N}\}_{k=0}^{t}$ (see [7]). One could choose different grid sizes for different times; however, for simplicity, we choose the same size for all times. These grids are now used to define the sequence

$$\hat{X}_k = \mathrm{Proj}_{\Gamma_k}(X_k) \quad \text{for } k = 0,\dots,t, \qquad (13)$$

where we note that $\hat{p}_k(\hat{x}_k \mid Y_k)$ has the form $\sum_{j=1}^{N} P_{k,j}\,\delta(\hat{x}_k - x_{k,j})$. Let us define the matrix

$$H_k^{ij} = p_k\big(y_k \mid \hat{X}_k = x_{k,j}\big)\,\mathrm{Prob}\big(\hat{X}_k = x_{k,j} \mid \hat{X}_{k-1} = x_{k,i}\big) = p_k\big(y_k \mid \hat{X}_k = x_{k,j}\big)\int_{V_{k,j}} p(x \mid x_{k-1,i})\,dx = p_k\big(y_k \mid \hat{X}_k = x_{k,j}\big)\int_{V_{k,j}} p_w\big(x - F(x_{k-1,i})\big)\,dx, \qquad (14)$$

and denote $P_k = [P_{k,1},\dots,P_{k,N}]$ and $f(\hat{X}_k) = [f(x_{k,1}),\dots,f(x_{k,N})]^T$. Then, using Bayes rule, we obtain as the estimate of the unnormalized filter

$$\hat{\pi}_{y,t} f = P_{t-1} H_t\,f(\hat{X}_t) = P_0^T H_1 \cdots H_t\,f(\hat{X}_t), \qquad (15)$$

where $P_0 = [P_{0,1},\dots,P_{0,N}]^T$ and $P_{0,i} = \int_{V_{0,i}} p_0(x_0)\,dx_0$. The estimate of the filter is then given by

$$\hat{\Pi}_{y,t} f = \hat{E}\big\{f(\hat{X}_t) \mid Y_t\big\} = \frac{\hat{\pi}_{y,t} f}{\hat{\pi}_{y,t}\mathbf{1}}, \qquad (16)$$

where

$$\hat{\pi}_{y,t}\mathbf{1} = \mathbf{1}^T\!\left(\prod_{k=1}^{t} H_k\right)^{\!T}\! P_0 = \hat{\varphi}_t(Y_t). \qquad (17)$$
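A minimal sketch of the recursion (14)-(17) in the scalar case is given below. The grids `Gamma[0..t]` (equal-size, sorted 1-D arrays) are assumed to have been produced off-line by quantizing the marginals $p_k(x_k)$; the Voronoi integral in (14) is approximated by the transition density at the grid point times an approximate cell width, a cruder stand-in for the exact integral. The helper name `marginal_filter` is ours, not the paper's.

```python
from scipy.stats import norm

def marginal_filter(Gamma, P0, ys, f):
    P = P0.copy()                                # unnormalized weights P_k
    for k, y in enumerate(ys, start=1):
        g, gp = Gamma[k], Gamma[k - 1]
        w = np.gradient(g)                       # approximate Voronoi cell widths
        # H[i, j] ~ p(y_k | x_{k,j}) * p_w(x_{k,j} - F(x_{k-1,i})) * width_j, cf. (14)
        trans = norm.pdf(g[None, :] - F(gp[:, None]), 0.0, np.sqrt(Q)) * w
        H = norm.pdf(y - G(g), 0.0, np.sqrt(R))[None, :] * trans
        P = P @ H                                # P_k = P_{k-1} H_k, cf. (15)
    return (P @ f(Gamma[-1])) / P.sum()          # normalized filter, cf. (16)-(17)
```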
2.2. Markovian quantization

An alternative approach described in [4] is obtained by defining the Markovian process, with $\hat{X}_0 = \tilde{X}_0 = X_0$,

$$\hat{X}_k = \mathrm{Proj}_{\Gamma_k}\big(\tilde{X}_k\big), \qquad (18)$$

$$\tilde{X}_k = F\big(\hat{X}_{k-1}\big) + W_k, \qquad \hat{Y}_k = G\big(\hat{X}_k\big) + V_k. \qquad (19)$$

The grids $\Gamma_k$ here are generated by optimal quantization of the pdfs $p_k(\tilde{x}_k)$, which are, again, calculated off-line. With this setup we can imitate (10) and define

$$\bar{L}\big(\hat{X}_t, Y_t\big) = \prod_{k=1}^{t} p_k\big(y_k \mid \hat{X}_k\big), \qquad (20)$$

$$\bar{\pi}_{y,t} f = E\big\{f\big(\hat{X}_t\big)\,\bar{L}\big(\hat{X}_t, Y_t\big)\big\} \qquad (21)$$

and

$$\bar{\Pi}_{y,t} f = \bar{E}\big\{f\big(\hat{X}_t\big) \mid Y_t\big\} = \frac{\bar{\pi}_{y,t} f}{\bar{\pi}_{y,t}\mathbf{1}} \qquad (22)$$

with

$$\bar{\pi}_{y,t}\mathbf{1} = E\big\{\bar{L}\big(\hat{X}_t, Y_t\big)\big\} = \bar{\varphi}_t(Y_t). \qquad (23)$$
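The Markovian scheme admits a similar sketch. Here the companion transition matrices $\mathrm{Prob}(\hat{X}_k = x_{k,j} \mid \hat{X}_{k-1} = x_{k,i})$ are estimated off-line by Monte Carlo from $\tilde{X}_k = F(\hat{X}_{k-1}) + W_k$; the grids are again assumed precomputed, the helper names are ours, and the sample size `n_mc` is an arbitrary illustrative choice.

```python
from scipy.stats import norm

def transition_matrix(gp, g, n_mc, rng):
    # Simulate X~_k = F(x_{k-1,i}) + W_k and count nearest-grid-point hits
    sims = F(gp[:, None]) + rng.normal(0.0, np.sqrt(Q), size=(gp.size, n_mc))
    cells = np.abs(sims[:, :, None] - g[None, None, :]).argmin(axis=2)
    return np.stack([np.bincount(c, minlength=g.size) for c in cells]) / n_mc

def markovian_filter(Gamma, P0, ys, f, rng, n_mc=2000):
    P = P0.copy()
    for k, y in enumerate(ys, start=1):
        Tk = transition_matrix(Gamma[k - 1], Gamma[k], n_mc, rng)
        P = (P @ Tk) * norm.pdf(y - G(Gamma[k]), 0.0, np.sqrt(R))  # cf. (20)-(21)
    return (P @ f(Gamma[-1])) / P.sum()                            # cf. (22)-(23)
```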
2.3. On-line quantization

We start with the observation that if our goal is to use optimal quantization to provide an estimate of the form $\sum_{j=1}^{N} P_{t,j}\,\delta(x_t - x_{t,j})$ of the pdf $p_t(x_t \mid Y_t)$, or equivalently, an estimate of the filter (8), then this is the pdf we need to quantize. Namely, both the grid and the respective weights should be data dependent and not calculated off-line. This, we feel, is the potential weakness inherent in both methods described above. On the other hand, any such attempt needs to be computationally feasible. The method we propose yields, we believe, an appropriate compromise for certain cases of practical importance.

We consider the Markovian process, with $\tilde{X}_0 = X_0$,

$$\tilde{X}_k = F\big(\hat{X}_{k-1}\big) + \hat{W}_k, \qquad \tilde{Y}_k = G\big(\tilde{X}_k\big) + V_k, \qquad (24)$$

$$\hat{X}_k = \mathrm{Proj}_{\Gamma_k}\big(\tilde{X}_k \mid Y_k\big), \qquad (25)$$

where $\hat{W}_k = \mathrm{Proj}_{\Gamma_w}(W_k)$; this projection is constructed off-line as an optimal quantization of $p_w(w)$, resulting in a grid of size $N_w$. Hence,

$$p_{\hat{w}}(w) = \sum_{s=1}^{N_w} P_{w,s}\,\delta(w - w_s). \qquad (26)$$

At each time $k$ we have $\hat{p}_{k-1}(\hat{x}_{k-1} \mid Y_{k-1}) = \sum_{i=1}^{N} P_{k-1,i}\,\delta(\hat{x}_{k-1} - x_{k-1,i})$; then

$$\tilde{p}_k(\tilde{x}_k \mid Y_{k-1}) = \sum_{i=1}^{N}\sum_{s=1}^{N_w} P_{k-1,i}\,P_{w,s}\,\delta\big(\tilde{x}_k - F(x_{k-1,i}) - w_s\big) = \sum_{r=1}^{N N_w}\tilde{P}_{k,r}\,\delta(\tilde{x}_k - \tilde{x}_{k,r}), \qquad (27)$$

and

$$\tilde{p}_k(\tilde{x}_k \mid Y_k) = \frac{\sum_{r=1}^{N N_w} p_k\big(y_k \mid \tilde{X}_k = \tilde{x}_{k,r}\big)\,\tilde{P}_{k,r}\,\delta(\tilde{x}_k - \tilde{x}_{k,r})}{\sum_{r=1}^{N N_w} p_k\big(y_k \mid \tilde{X}_k = \tilde{x}_{k,r}\big)\,\tilde{P}_{k,r}} = \sum_{r=1}^{N N_w}\bar{P}_{k,r}\,\delta(\tilde{x}_k - \tilde{x}_{k,r}). \qquad (28)$$

Then the approximate filter is

$$\tilde{\Pi}_{y,t} f = \tilde{E}\big\{f\big(\tilde{X}_t\big) \mid Y_t\big\} = \sum_{r=1}^{N N_w}\bar{P}_{t,r}\,f(\tilde{x}_{t,r}). \qquad (29)$$

We note that $\tilde{p}_k(\tilde{x}_k \mid Y_k)$, which is defined on an $N N_w$ grid, can be calculated on-line at each time $k$ and optimally quantized to provide us with the grid $\Gamma_k$ to be used to generate $\hat{X}_k$ in (25). The intuitive benefit of the proposed approximation is clear. We will, in the sequel, attempt to support this intuition with analysis.

Remark 1: The optimal quantization of $\tilde{X}_k$, which is a discrete-valued random vector, is in fact a clustering problem, for which one could apply one of many available fast algorithms (see e.g. [6]).
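One step of the proposed scheme can be sketched as follows (scalar case, continuing the examples above). `(w_s, Pw)` is the off-line $N_w$-point quantization (26) of $p_w$, and `(xs, Ps)` the current $N$-point approximation of $\hat{p}_{k-1}$. The reduction from the $N N_w$-point law (28) back to $N$ points is the clustering step of Remark 1; weighted k-means via resampling is used here as an illustrative stand-in for a dedicated fast clustering algorithm.

```python
from scipy.stats import norm
from scipy.cluster.vq import kmeans2

def online_step(xs, Ps, w_s, Pw, y, N, rng):
    # (27): propagate every (state, noise) pair -> N*N_w support points
    xt = (F(xs)[:, None] + w_s[None, :]).ravel()
    Pt = (Ps[:, None] * Pw[None, :]).ravel()
    # (28): Bayes update of the weights with the new measurement y
    Pt *= norm.pdf(y - G(xt), 0.0, np.sqrt(R))
    Pt /= Pt.sum()
    # Remark 1: quantize the discrete N*N_w-point law down to N points
    idx = rng.choice(xt.size, size=5000, p=Pt)
    centers, _ = kmeans2(xt[idx][:, None], k=N, minit='++', seed=0)
    new_grid = centers.ravel()
    # New weights: total probability captured by each Voronoi cell
    nearest = np.abs(xt[:, None] - new_grid[None, :]).argmin(axis=1)
    return new_grid, np.bincount(nearest, weights=Pt, minlength=N)
```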
3. Error analysis

For our analysis we make the following assumption.

Assumption 1: The functions $F:\mathbb{R}^d \to \mathbb{R}^d$ and $G:\mathbb{R}^d \to \mathbb{R}^m$ are Lipschitz with constants $[F]_{\mathrm{Lip}}$, $[G]_{\mathrm{Lip}}$ respectively, and the function $f:\mathbb{R}^d \to \mathbb{R}$ is both bounded and Lipschitz, with bound $\|f\|_\infty$ and constant $[f]_{\mathrm{Lip}}$.

Remark 2: For example, we note that the Gaussian pdf $N(0,Q)$ in $\mathbb{R}^d$ is Lipschitz with ratio $\big(e\,\lambda_{\min}(Q)\big)^{-1/2}\big/\big((2\pi)^{d/2}|Q|^{1/2}\big)$. Hence, since $\|x\|_2 \le \|x\|$ and $p_k(y_k \mid X_k = x) = N(G(x),R)$ evaluated at $y_k$, by Assumption 1 we have

$$\big|p_k(y_k \mid X_k = x) - p_k(y_k \mid X_k = x')\big| \le \frac{\big(e\,\lambda_{\min}(R)\big)^{-1/2}}{(2\pi)^{m/2}|R|^{1/2}}\,\big\|G(x) - G(x')\big\|_2 \le \frac{\big(e\,\lambda_{\min}(R)\big)^{-1/2}[G]_{\mathrm{Lip}}}{(2\pi)^{m/2}|R|^{1/2}}\,\|x - x'\| = [p]_{\mathrm{Lip}}\,\|x - x'\|. \qquad (30)$$

Furthermore, we also note that

$$p_k(y_k \mid X_k = x) \le \frac{1}{(2\pi)^{m/2}|R|^{1/2}} = K. \qquad (31)$$

We can now write the approximate filter (29) as

$$\tilde{\Pi}_{y,t} f = \frac{\tilde{\pi}_{y,t} f}{\tilde{\pi}_{y,t}\mathbf{1}} = \frac{E\big\{f\big(\tilde{X}_t\big)\,\tilde{L}\big(\tilde{X}_t, Y_t\big)\big\}}{E\big\{\tilde{L}\big(\tilde{X}_t, Y_t\big)\big\}}, \qquad (32)$$

where $\tilde{X}_t = \{\tilde{X}_k : k = 1,\dots,t\}$,

$$\tilde{L}\big(\tilde{X}_t, Y_t\big) = \prod_{k=1}^{t} p_k\big(y_k \mid \tilde{X}_k\big), \qquad (33)$$

and

$$\tilde{\pi}_{y,t}\mathbf{1} = E\big\{\tilde{L}\big(\tilde{X}_t, Y_t\big) \mid Y_t\big\} = \tilde{\varphi}_t(Y_t). \qquad (34)$$
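A quick numeric sanity check of Remark 2 and (31) in the scalar case ($m = 1$, with $[G]_{\mathrm{Lip}} = 1$ as for the illustrative $G(x) = x$ above): the maximum of the $N(0,R)$ density equals $K$, and its steepest slope equals the stated Lipschitz ratio.

```python
import numpy as np

R = 0.1                                        # same illustrative R as above
K = 1.0 / np.sqrt(2.0 * np.pi * R)             # (31) with m = 1
p_lip = 1.0 / (np.sqrt(np.e * R) * np.sqrt(2.0 * np.pi * R))  # Remark 2 ratio, [G]_Lip = 1

x = np.linspace(-5.0, 5.0, 100001)
dens = np.exp(-x**2 / (2.0 * R)) / np.sqrt(2.0 * np.pi * R)
assert dens.max() <= K + 1e-12                                   # likelihood bound (31)
assert np.abs(np.diff(dens) / np.diff(x)).max() <= p_lip + 1e-6  # slope bound (30)
```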
Our derivation closely imitates the one presented in [4]. We first quote the following Lemma 3.1 from [4]:

Lemma 3.1: Let $\mu_y$ and $\nu_y$ be two families of finite positive measures on a measurable space $(E,\mathcal{E})$. Assume that there exist two symmetric functions $R$ and $S$ defined on $(\mu_y,\nu_y)$ such that for every bounded Lipschitz function $f$,

$$\left|\int f\,d\mu_y - \int f\,d\nu_y\right| \le \|f\|_\infty\,R(\mu_y,\nu_y) + [f]_{\mathrm{Lip}}\,S(\mu_y,\nu_y), \qquad (35)$$

then

$$\left|\frac{\int f\,d\mu_y}{\mu_y(E)} - \frac{\int f\,d\nu_y}{\nu_y(E)}\right| \le \frac{1}{\max\big(\mu_y(E),\,\nu_y(E)\big)}\Big[2\|f\|_\infty\,R(\mu_y,\nu_y) + [f]_{\mathrm{Lip}}\,S(\mu_y,\nu_y)\Big].$$

We are now ready to state our result on the error bound.

Theorem 3.2: Let Assumption 1 hold. Then, for every bounded Lipschitz continuous function $f:\mathbb{R}^d \to \mathbb{R}$ with constant $[f]_{\mathrm{Lip}}$ and sequence of measured data $Y_t = \{y_1,\dots,y_t\}$ generated by eqn. (1), the error of the estimator (29) satisfies

$$\big|\Pi_{y,t} f - \tilde{\Pi}_{y,t} f\big| \le \frac{K^t}{\max\big(\varphi_t(Y_t),\,\tilde{\varphi}_t(Y_t)\big)} \qquad (36)$$

$$\times\left[D_t\,E\{\|\varepsilon_w\|\} + \sum_{k=0}^{t-1} C_k^t(f,Y_t)\,E\{\|\varepsilon_k\|\}\right], \qquad (37)$$

where

$$C_k^t(f,Y_t) = [f]_{\mathrm{Lip}}\,[F]_{\mathrm{Lip}}^{\,t-k} + 2\|f\|_\infty K^{-1}[p]_{\mathrm{Lip}}\,[F]_{\mathrm{Lip}}\,\frac{[F]_{\mathrm{Lip}}^{\,t-k} - 1}{[F]_{\mathrm{Lip}} - 1}, \qquad (38)$$

$$D_t = [f]_{\mathrm{Lip}}\,\frac{[F]_{\mathrm{Lip}}^{\,t} - 1}{[F]_{\mathrm{Lip}} - 1} + 2\|f\|_\infty K^{-1}[p]_{\mathrm{Lip}}\sum_{k=1}^{t}\frac{[F]_{\mathrm{Lip}}^{\,t-k+1} - 1}{[F]_{\mathrm{Lip}} - 1}, \qquad (39)$$

and

$$\varepsilon_w = W - \hat{W} = W - \mathrm{Proj}_{\Gamma_w}(W), \qquad \varepsilon_k = \tilde{X}_k - \mathrm{Proj}_{\Gamma_k}\big(\tilde{X}_k \mid Y_k\big) = \tilde{X}_k - \hat{X}_k. \qquad (40)$$

Note that $E\{\|\varepsilon_w\|\}$ and $E\{\|\varepsilon_k\|\}$ are the respective quantization distortions.

Proof: We begin by applying (10) and (32):

$$\big|\pi_{y,t} f - \tilde{\pi}_{y,t} f\big| = \Big|E\{f(X_t)\,L(X_t,Y_t)\} - E\big\{f\big(\tilde{X}_t\big)\,\tilde{L}\big(\tilde{X}_t,Y_t\big)\big\}\Big|$$

$$\le E\Big\{\big|L(X_t,Y_t) - \tilde{L}(\tilde{X}_t,Y_t)\big|\,\big|f(X_t)\big|\Big\} + E\Big\{\big|f(X_t) - f(\tilde{X}_t)\big|\,\tilde{L}(\tilde{X}_t,Y_t)\Big\}$$

$$\le \|f\|_\infty\,E\Big\{\big|L(X_t,Y_t) - \tilde{L}(\tilde{X}_t,Y_t)\big|\Big\} + [f]_{\mathrm{Lip}}\,K^t\,E\Big\{\big\|\tilde{X}_t - X_t\big\|\Big\}, \qquad (41)$$
where we have used Remark 2 and (33) to conclude that $\tilde{L}(\tilde{X}_t,Y_t) \le K^t$. Applying Lemma 3.1 we obtain

$$\big|\Pi_{y,t} f - \tilde{\Pi}_{y,t} f\big| \le \frac{1}{\max\big(\varphi_t(Y_t),\,\tilde{\varphi}_t(Y_t)\big)}\Big[2\|f\|_\infty\,E\big\{\big|L(X_t,Y_t) - \tilde{L}(\tilde{X}_t,Y_t)\big|\big\} + [f]_{\mathrm{Lip}}\,K^t\,E\big\{\big\|\tilde{X}_t - X_t\big\|\big\}\Big]. \qquad (42)$$

Using (1) and (33) we obtain

$$L(X_k,Y_k) - \tilde{L}(\tilde{X}_k,Y_k) = p_k(y_k \mid X_k)\,L(X_{k-1},Y_{k-1}) - p_k(y_k \mid \tilde{X}_k)\,\tilde{L}(\tilde{X}_{k-1},Y_{k-1})$$

$$= \big(p_k(y_k \mid X_k) - p_k(y_k \mid \tilde{X}_k)\big)\,L(X_{k-1},Y_{k-1}) + p_k(y_k \mid \tilde{X}_k)\big(L(X_{k-1},Y_{k-1}) - \tilde{L}(\tilde{X}_{k-1},Y_{k-1})\big),$$

so that

$$\big|L(X_k,Y_k) - \tilde{L}(\tilde{X}_k,Y_k)\big| \le K^{k-1}[p]_{\mathrm{Lip}}\,\big\|X_k - \tilde{X}_k\big\| + K\,\big|L(X_{k-1},Y_{k-1}) - \tilde{L}(\tilde{X}_{k-1},Y_{k-1})\big|.$$

Since $L(X_0,Y_0) = \tilde{L}(\tilde{X}_0,Y_0)$, we obtain from the above that

$$\big|L(X_k,Y_k) - \tilde{L}(\tilde{X}_k,Y_k)\big| \le K^{k-1}[p]_{\mathrm{Lip}}\sum_{j=1}^{k}\big\|X_j - \tilde{X}_j\big\|. \qquad (43)$$

From (1), (24), (25) and (40) we have

$$\big\|X_k - \tilde{X}_k\big\| = \big\|F(X_{k-1}) - F(\hat{X}_{k-1}) + W_k - \hat{W}_k\big\| \le [F]_{\mathrm{Lip}}\,\big\|X_{k-1} - \hat{X}_{k-1}\big\| + \|\varepsilon_w^k\| \le [F]_{\mathrm{Lip}}\big(\big\|X_{k-1} - \tilde{X}_{k-1}\big\| + \|\varepsilon_{k-1}\|\big) + \|\varepsilon_w^k\|.$$

Noting that $X_0 = \tilde{X}_0$, we conclude that

$$\big\|X_k - \tilde{X}_k\big\| \le \sum_{j=1}^{k}[F]_{\mathrm{Lip}}^{\,k-j}\big([F]_{\mathrm{Lip}}\,\|\varepsilon_{j-1}\| + \|\varepsilon_w^j\|\big). \qquad (44)$$
Substituting (44) into (43) we obtain

$$\big|L(X_k,Y_k) - \tilde{L}(\tilde{X}_k,Y_k)\big| \le K^{k-1}[p]_{\mathrm{Lip}}\sum_{j=1}^{k}\sum_{r=1}^{j}[F]_{\mathrm{Lip}}^{\,j-r}\big([F]_{\mathrm{Lip}}\,\|\varepsilon_{r-1}\| + \|\varepsilon_w^r\|\big), \qquad (45)$$

and substituting both (44) and (45) into (42), using the fact that $E\{\|\varepsilon_w^r\|\} = E\{\|\varepsilon_w\|\}$ does not depend on time, concludes the proof of the theorem.

3.1. Discussion

We recall that the purpose of all approaches aimed at the current problem is to generate an approximation of the form $p_t(x_t \mid Y_t) \approx \sum_{i=1}^{N} P_{t,i}\,\delta(x_t - x_{t,i})$. As argued in (9), the optimal quantization of $p_t(x_t \mid Y_t)$ would guarantee a minimal estimation error bound. Clearly, when the grid is chosen off-line, the best one can hope for is to have the weights as close as possible to $\int_{V_i} p_t(x_t \mid Y_t)\,dx_t$, as for a fixed grid these are the optimal weights (see [7]). The result may, however, be quite far from a good approximation. Since the on-line approach uses the on-line data, it will result in a choice of grid which is closer to the optimal one for $p_t(x_t \mid Y_t)$; hence it is likely to result in a better approximation. On the other hand, since it involves more computation at each time step, it is bound to be more computationally demanding. Indeed, all grid-type optimizations become more and more difficult in high dimensions since they rely, amongst other things, on nearest-neighbour searches; see, for example, [8]. The only caveat we would place on this is the case of fast sampling, since then the optimal quantizer for one sample acts as a good initial condition for the next sample; see [9] for further discussion.

The above intuitive discussion of relative performance can be made somewhat more rigorous by viewing the bounds derived in [4] for the two off-line algorithms. Both have the form

$$\big|\Pi_{y,t} f - \hat{\Pi}_{y,t} f\big| \le \frac{K^t}{\max\big(\varphi_t(Y_t),\,\hat{\varphi}_t(Y_t)\big)}\sum_{k=0}^{t} A_k^t\,E\{\|\varepsilon_k\|\}$$

and differ in the expressions for the coefficients $A_k^t$ and in the quantization errors $\varepsilon_k$. However, when applying the bound, since $\varphi_t(Y_t)$ cannot be calculated, the denominator will in fact be $\hat{\varphi}_t(Y_t)$. This value, by its definition (see (14)-(17) and (20)-(23)), depends on products of the terms $p_k(y_k \mid \hat{X}_k = x_{k,j})$. Clearly, as the grid points $x_{k,j}$ are chosen off-line, the conditional pdf $p_k(y_k \mid x_k)$ evaluated at these points may be very small, resulting in a very small value for $\hat{\varphi}_t(Y_t)$ and a large value for the performance bound. As our experiments indicate, this results not only in a poor performance bound, but also in poor performance in practice. In comparison, since the on-line quantization depends upon the data, the grid points necessarily result in larger values of $p_k(y_k \mid \tilde{X}_k = \tilde{x}_{k,r})$ (see the comment following eqn. (29)) and a smaller bound.
[Figure 1. The true conditional distribution and its two approximations (on-line quantization and off-line marginal quantization) at time t.]

4. Numerical example

We present here a very simple numerical example for which we could calculate, to high accuracy (via the use of very fine gridding), the true pdfs, and use them to demonstrate the points we made earlier regarding the relative accuracy of the off-line and on-line approaches. Consider the process defined by

$$X_k = 0.9X_{k-1} + W_k, \qquad Y_k = X_k + V_k. \qquad (46)$$

We chose $N$, $N_w = 25$ and calculated the true pdf $p_t(x_t \mid Y_t)$, the marginal approximation $\hat{p}_t(x_t \mid Y_t)$ and our on-line approximation. Note that, in our method, at each time we have the pdf $\tilde{p}_t(\tilde{x}_t \mid Y_t)$ on an $N N_w$-size grid and, after its on-line quantization, $\hat{p}_t(\hat{x}_t \mid Y_t)$ on an $N$-size grid. In Figures 1-3 we present a comparison of the resulting pdfs at different times: the real $p_t(x_t \mid Y_t)$ and the two estimated ones, one using the marginal off-line quantization and the other the on-line quantization. In fact, in the figures we present the respective distributions, as they are easier to compare. We clearly observe how very close the on-line approximation is to the real one, while the off-line one provides a very poor approximation.

Next, to demonstrate the filtering properties, we chose the function $f(x) = e^{-|x|}$ and calculated the errors $\big|E\{f(X_t) \mid Y_t\} - \hat{E}\{f(\hat{X}_t) \mid Y_t\}\big|$ (the off-line marginal quantization estimates) and $\big|E\{f(X_t) \mid Y_t\} - \tilde{E}\{f(\tilde{X}_t) \mid Y_t\}\big|$ (the on-line quantization estimates). The results are presented in Figure 4. We observe that the errors for the off-line method are consistently larger, by an order of magnitude.
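A sketch of this experiment is given below, reusing the pieces defined in the earlier code sketches (F, G, Q, R, Q0, grid, dx, ys, rng, bayes_step, online_step). The fine-grid recursion serves as ground truth; the quantile grids below are illustrative stand-ins for truly optimal quantizers of $p_0$ and $p_w$, and N = N_w = 25 is an assumption on the grid sizes.

```python
from scipy.stats import norm

N, Nw = 25, 25                                                 # assumed grid sizes
w_s = norm.ppf((np.arange(Nw) + 0.5) / Nw, scale=np.sqrt(Q))   # noise codebook, cf. (26)
Pw = np.full(Nw, 1.0 / Nw)
xs = norm.ppf((np.arange(N) + 0.5) / N, scale=np.sqrt(Q0))     # initial state grid
Ps = np.full(N, 1.0 / N)

f = lambda x: np.exp(-np.abs(x))          # bounded Lipschitz test function, cf. Figure 4
p_true = norm.pdf(grid, 0.0, np.sqrt(Q0))
errs = []
for y in ys:
    p_true = bayes_step(p_true, y)                    # fine-grid ground truth
    xs, Ps = online_step(xs, Ps, w_s, Pw, y, N, rng)  # proposed on-line filter
    errs.append(abs(np.sum(f(grid) * p_true) * dx - np.sum(f(xs) * Ps)))
```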
[Figure 2. The true conditional distribution and its two approximations at time t = 30.]

[Figure 3. The true conditional distribution and its two approximations at time t = 80.]

5. Conclusion

The idea of using quantization as a deterministic alternative to particle filtering for nonlinear systems has been discussed here. As we have pointed out, the previous literature on this topic uses off-line quantization. This approach suffers from a potential problem since the quantization grids, a crucial part of the approximation, are determined off-line with no consideration of the measured data. The approach we present here, which we believe to be novel, is based on carrying out the quantization on-line, based on discrete versions of the posterior pdfs. This makes the on-line quantization, while more demanding in on-line computation, still feasible for modest-size problems.
[Figure 4. Filtering errors for the function f(x) = e^{-|x|} using the on-line and off-line approximations.]

We wish to point out that our main goal here is the comparison with off-line quantization methods. An extensive comparison of these off-line methods with particle filtering has been presented in [5], with clear advantages to the quantization approach at relatively small grid sizes. Hence, claiming a potential advantage over the off-line quantization methods clearly implies an advantage over particle filtering approaches, in particular with small grid sizes. For the new approach presented in this paper, we have derived a performance bound by appropriately adapting the methods previously described in [4]. We have also pointed to the potential advantages of our proposal when compared with the earlier off-line approach. Finally, we used a simple example to demonstrate the validity of our claims regarding the advantages of the on-line approach when compared to the off-line one.

References

[1] S. Arulampalam, S. Maskell, N. Gordon and T. Clapp, A tutorial on particle filters for on-line non-linear/non-Gaussian Bayesian tracking, IEEE Transactions on Signal Processing, Vol. 50, No. 2, Feb. 2002.
[2] Z. Chen, Bayesian filtering: From Kalman filters to particle filters, and beyond, available on the Internet at URL: jpg/tfc0607/chen bayesian.pdf
[3] N. Gordon, D. Salmond and A. F. M. Smith, Novel approach to non-linear and non-Gaussian Bayesian state estimation, IEE Proceedings-F, Vol. 140, 1993.
[4] G. Pagès and H. Pham, Optimal quantization methods for nonlinear filtering with discrete-time observations, Bernoulli, Vol. 11, 2005.
[5] A. Sellami, Comparative survey on nonlinear filtering methods: the quantization and the particle filtering approaches, Journal of Statistical Computation and Simulation, Vol. 78, No. 2, 2008.
[6] G. Gan, C. Ma and J. Wu, Data Clustering: Theory, Algorithms and Applications, SIAM, 2007.
[7] A. Gersho and R. M. Gray, Vector Quantization and Signal Compression, Kluwer Academic Publishers, Boston, USA, 1992.
[8] G. Pagès and J. Printems, Optimal quadratic quantization for numerics: the Gaussian case, Monte Carlo Methods and Applications, Vol. 9, No. 2, 2003, pp. 135-165.
[9] M. G. Cea, G. C. Goodwin and A. Feuer, A discrete nonlinear filter for fast sampled problems based on vector quantization, Proceedings of the American Control Conference, Baltimore, USA, 2010.