Quantifying Uncertainty


Particle Filters
Quantifying Uncertainty
Sai Ravela, M.I.T.
Last Updated: Spring 2013

Particle Filters
- Applied to sequential filtering problems; can also be applied to smoothing problems.
- Solution via Recursive Bayesian Estimation (an approximate solution).
- Can work with non-Gaussian distributions / nonlinear dynamics.
- Applicable to many other problems, e.g. spatial inference.

Notation
x_t, X_k: model states in continuous and discrete space-time, respectively.
x_t^T: true system state.
y_t, Y_k: continuous and discrete measurements, respectively.
X_k^n: n-th sample of the discrete state vector at step k.
M: model; P: probability mass function; Q: proposal distribution; δ: Kronecker or Dirac delta function.
We follow Arulampalam et al.'s paper.
Topics: non-Gaussianity, sampling, SIS, kernel, SIR, RPF.

Sequential Filtering
Recall: Ensemble Kalman Filter & Smoother.
[Figure: hidden Markov chain of model states x_0, x_1, x_2 with observations y_1, y_2.]
We are interested in studying the evolution of the observed system, y_t = f(x_t^T), using a model with state x_t.

This means (in discrete time, discretized space) we want P(X_k | Y_1:k), which can be solved recursively step by step:
P(X_k | Y_1:k) = P(X_k, Y_1:k) / P(Y_1:k)

Sequential Filtering via Recursive Bayesian Estimation
Y_1:k is a collection of variables Y_1 ... Y_k. So:
P(X_k | Y_1:k) = P(X_k, Y_1:k) / P(Y_1:k)
              = P(Y_k | X_k) P(X_k | Y_1:k-1) P(Y_1:k-1) / [ P(Y_k | Y_1:k-1) P(Y_1:k-1) ]
              = P(Y_k | X_k) P(X_k | Y_1:k-1) / P(Y_k | Y_1:k-1)

Contd.
P(X_k | Y_1:k) = P(Y_k | X_k) Σ_{X_k-1} P(X_k | X_k-1) P(X_k-1 | Y_1:k-1)
                 / [ Σ_{X_k} P(Y_k | X_k) Σ_{X_k-1} P(X_k | X_k-1) P(X_k-1 | Y_1:k-1) ]
1. The sum over X_k-1 is the prediction term, from the Chapman-Kolmogorov equation.
2. P(Y_k | X_k) is the measurement model / observation equation.
3. The denominator is the normalization constant.
When can this recursive master equation be solved?

Let's say
X_k = F_k X_k-1 + v_k
Y_k = H_k X_k + η_k
with v_k ~ N(0, P_k) and η_k ~ N(0, R).
Linear + Gaussian → Kalman Filter.
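For this linear-Gaussian case the recursion collapses to the closed-form Kalman filter equations. Below is a rough NumPy sketch of one predict/update cycle (illustration only, not code from the lecture; the matrices F, H and the noise covariances Qn, Rn are assumed given):

import numpy as np

def kalman_step(x, P, y, F, H, Qn, Rn):
    """One predict/update cycle of the linear-Gaussian Kalman filter.
    x, P: previous mean and covariance; y: new measurement;
    Qn, Rn: process and measurement noise covariances."""
    # Predict: push the mean and covariance through the linear model
    x_pred = F @ x
    P_pred = F @ P @ F.T + Qn
    # Update: the Kalman gain weighs the innovation y - H x_pred
    S = H @ P_pred @ H.T + Rn
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new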

For nonlinear problems:
- Extended Kalman Filter: via linearization.
- Ensemble Kalman Filter: no linearization, but a Gaussian assumption. Ensemble members are particles that are moved around in state space; they represent the moments of the uncertainty.

How may we relax the Gaussian assumption? If P(X_k | X_k-1) and P(Y_k | X_k) are non-Gaussian, how do we represent them, let alone perform the integrations in (2) & (3)?

Particle Representation
Generically,
P(X) = Σ_{i=1}^N w^i δ(X − X^i)
i.e. the pmf/pdf is defined as a weighted sum of delta functions.
Recall the Sampling lecture and the Response Surface Modeling lecture.
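Once a distribution is represented by weighted particles, its moments are simple weighted sums. A tiny NumPy illustration (the particle values and weights below are made up for the example):

import numpy as np

X = np.array([0.1, 0.7, 1.3, 2.2, 3.0])   # particle locations X^i
w = np.array([0.1, 0.3, 0.3, 0.2, 0.1])   # weights w^i, summing to 1
mean = np.sum(w * X)                       # E[X] = Σ w^i X^i
var = np.sum(w * (X - mean) ** 2)          # Var[X] = Σ w^i (X^i - mean)^2
print(mean, var)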

Contd.
[Figure: particles X^1, X^2, ..., X^10 with associated weights w^1, w^2, ..., w^10.]
Even so, whilst P(X) can be evaluated, sampling from it may be difficult.

Importance Sampling
Suppose we wish to evaluate ∫ f(x) P(x) dx (e.g. a moment calculation). Rewrite it as
∫ f(x) [P(x) / Q(x)] Q(x) dx ≈ (1/N) Σ_{i=1}^N f(x = X^i) w^i, with X^i ~ Q(x),
where w^i = P(x = X^i) / Q(x = X^i).

So:
- Sample from Q, the proposal distribution.
- Evaluate from P the density P(X^i).
- Apply the importance weight w^i = P(X^i) / Q(X^i).
Now let's consider unnormalized densities:
P(x) = P̂(x) / Z_p, with Z_p = ∫ P̂(x) dx
Q(x) = Q̂(x) / Z_q, with Z_q = ∫ Q̂(x) dx

So:
∫ f(x) P(x) dx ≈ (1/N) (Z_q / Z_p) Σ_{i=1}^N f(x = X^i) ŵ^i,
where ŵ^i = P̂(x = X^i) / Q̂(x = X^i). These are un-normalized, mere "potentials".
It turns out that Z_p / Z_q ≈ (1/N) Σ_i ŵ^i, so
∫ f(x) P(x) dx ≈ Σ_i f(x = X^i) ŵ^i / Σ_j ŵ^j

Σ_i f(X^i) ŵ^i / Σ_j ŵ^j is just a weighted sum, where a proposal distribution was used to get around sampling difficulties and the importance weights handle all the normalization. It is important to select a good proposal distribution: not one that focuses on only a small part of the state space, and preferably one better than an uninformative prior.
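To make the self-normalized estimator concrete, here is a small NumPy sketch (an illustrative example of my own, not from the slides): the unnormalized target P̂ is bimodal, the proposal Q is a wide Gaussian, and E_P[x] is estimated as Σ_i f(X^i) ŵ^i / Σ_j ŵ^j.

import numpy as np

rng = np.random.default_rng(0)

def p_hat(x):
    # Unnormalized target density (bimodal), chosen for illustration
    return np.exp(-0.5 * (x - 2.0) ** 2) + 0.5 * np.exp(-0.5 * (x + 2.0) ** 2)

def q_pdf(x):
    # Proposal: N(0, 3^2), easy to sample from and covering the target
    return np.exp(-0.5 * (x / 3.0) ** 2) / (3.0 * np.sqrt(2.0 * np.pi))

N = 100_000
X = rng.normal(0.0, 3.0, size=N)             # X^i ~ Q
w_hat = p_hat(X) / q_pdf(X)                  # un-normalized weights ŵ^i
mean_est = np.sum(X * w_hat) / np.sum(w_hat) # E_P[x] ≈ Σ f(X^i) ŵ^i / Σ ŵ^j
print("estimated mean under P:", mean_est)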

Application of Importance Sampling to Bayesian Recursive Estimation → the Particle Filter
P(X) ≈ Σ_i [ ŵ^i / Σ_j ŵ^j ] δ(X − X^i) = Σ_i w^i δ(X − X^i),
where w^i is the normalized weight.

Let's consider again:
X_k = f(X_k-1) + v_k
Y_k = h(X_k) + η_k
i.e. a relationship between the observation and the state (the measurement equation). Additive noise, but this can be generalized.

Let's consider the joint distribution P(X_0:k | Y_1:k).
[Figure: graphical model with states X_0 (initial condition), X_1, ..., X_k and observations Y_1, ..., Y_k.]
We may factor this distribution using particles.

Chain Rule with Weights
Let's factor P(X_0:k | Y_1:k) as
P(X_0:k | Y_1:k) ≈ Σ_{i=1}^N w^i δ(X_0:k − X^i_0:k), with w^i ∝ P(X^i_0:k | Y_1:k) / Q(X^i_0:k | Y_1:k).
Also,
P(X_0:k | Y_1:k) = P(Y_k | X_0:k, Y_1:k-1) P(X_0:k | Y_1:k-1) / P(Y_k | Y_1:k-1)
                = P(Y_k | X_k) P(X_k | X_k-1) P(X_0:k-1 | Y_1:k-1) / P(Y_k | Y_1:k-1)

Proposal Distribution Properties
Suppose we pick Q(X_0:k | Y_1:k) = Q(X_k | X_0:k-1, Y_1:k) Q(X_0:k-1 | Y_1:k-1), i.e. there is some kind of recursion on the proposal distribution. Further, suppose we approximate Q(X_k | X_0:k-1, Y_1:k) = Q(X_k | X_k-1, Y_k), i.e. there is a Markov property.

Recursive Weight Updates
Then we may have found an update equation for the weights:
w_k^i ∝ P(X_0:k | Y_1:k) / Q(X_0:k | Y_1:k)
      = [ P(Y_k | X_k) P(X_k | X_k-1) P(X_0:k-1 | Y_1:k-1) ] / [ P(Y_k | Y_1:k-1) Q(X_k | X_k-1, Y_k) Q(X_0:k-1 | Y_1:k-1) ]
      = w_k-1^i P(Y_k | X_k^i) P(X_k^i | X_k-1^i) / [ Q(X_k^i | X_k-1^i, Y_k) P(Y_k | Y_1:k-1) ]
      ∝ w_k-1^i P(Y_k | X_k^i) P(X_k^i | X_k-1^i) / Q(X_k^i | X_k-1^i, Y_k)

The Particle Filter
In the filtering problem P(X_k | Y_1:k):
w_k^i ∝ w_k-1^i P(Y_k | X_k^i) P(X_k^i | X_k-1^i) / Q(X_k | X_k-1^i, Y_k)
(So) P(X_k | Y_1:k) ≈ Σ_{i=1}^N w_k^i δ(X_k − X_k^i), where the X_k^i ~ Q(X_k | X_k-1^i, Y_k).
The method essentially draws particles from a proposal distribution and recursively updates their weights. No Gaussian assumption. Neat!

Algorithm: Sequential Importance Sampling (SIS)
Input: {X_k-1^i, w_k-1^i}, Y_k, i = 1 ... N
for i = 1 ... N
    Draw X_k^i ~ Q(X_k | X_k-1^i, Y_k)
    w_k^i ∝ w_k-1^i P(Y_k | X_k^i) P(X_k^i | X_k-1^i) / Q(X_k^i | X_k-1^i, Y_k)
end
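A minimal NumPy sketch of one SIS step (illustrative only; the proposal sampler and the three densities named below are assumed to be supplied by the user and vectorized over particles):

import numpy as np

def sis_step(particles, weights, y, propose, q_pdf, lik, trans, rng):
    """One Sequential Importance Sampling update.
    propose(x_prev, y, rng): draw X_k^i from Q(X_k | X_k-1^i, Y_k)
    q_pdf(x, x_prev, y):     proposal density Q(X_k | X_k-1, Y_k)
    lik(y, x):               likelihood P(Y_k | X_k)
    trans(x, x_prev):        transition density P(X_k | X_k-1)"""
    new = np.array([propose(xp, y, rng) for xp in particles])
    w = weights * lik(y, new) * trans(new, particles) / q_pdf(new, particles, y)
    return new, w / np.sum(w)   # normalize so the weights sum to one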

BUT: The Problem
[Figure: particles X_k-1 pushed through the proposal Q to X_k.]
In a few iterations one particle will have a non-negligible weight; all the others will have negligible weights! Effective sample size:
N_eff = 1 / Σ_{i=1}^N (w_k^i)^2

Contd.
N_eff is the effective sample size. When N_eff << N, degeneracy sets in. Resampling is a way to address this problem.
Main idea: resample according to the weights, after which the weights are reset. You can sample uniformly and set the weights to obtain a representation; you can sample the pdf to get particles and reset their weights.
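N_eff is cheap to monitor in code. A tiny NumPy illustration with made-up weight vectors: a nearly degenerate set gives N_eff close to 1, while uniform weights give N_eff = N.

import numpy as np

w_degenerate = np.array([0.97, 0.01, 0.01, 0.01])   # one dominant particle
w_uniform = np.full(4, 0.25)                         # evenly spread weights
print(1.0 / np.sum(w_degenerate ** 2))   # ≈ 1.06: degeneracy has set in
print(1.0 / np.sum(w_uniform ** 2))      # = 4.0: full effective sample size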

Resampling Algorithm
[Figure: cdf of the weights w^1 ... w^7 over particles X^1 ... X^7; after resampling, many points with uniform weights, concentrated at the more probable states.]
Resampling draws the more probable states more often.

Algorithm (systematic resampling)
Input: {X_k^i, w_k^i}
1. Construct the cdf: for i = 2 : N, C_i = C_i-1 + w_k^i
2. Seed u_1 ~ U[0, 1/N]
3. for j = 1 : N
       u_j = u_1 + (j − 1)/N
       find the smallest i with C_i ≥ u_j
       X̂_k^j = X_k^i, w_k^j = 1/N
       set the parent of j to i
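A NumPy sketch of this resampling scheme (my reading of the steps above, so treat it as illustrative rather than the course's reference code):

import numpy as np

def systematic_resample(particles, weights, rng):
    """Resample particles in proportion to their weights; weights are reset to 1/N."""
    N = len(weights)
    C = np.cumsum(weights)                              # cdf of the weights, C[-1] == 1
    u = rng.uniform(0.0, 1.0 / N) + np.arange(N) / N    # u_j = u_1 + (j-1)/N
    parents = np.searchsorted(C, u)                     # smallest i with C_i >= u_j
    return particles[parents], np.full(N, 1.0 / N), parents

The returned parents array records which original particle each resampled particle came from, i.e. the "parent of j" bookkeeping in step 3.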

Contd.
So the resampling method can avoid degeneracy because it produces more samples at higher-probability points. But sample impoverishment may result: too many samples too close together, i.e. a loss of diversity. MCMC may be a way out.

Generic Particle Filter
Input: {X_k-1^i, w_k-1^i}, Y_k
for i = 1 : N
    X_k^i ~ Q(X_k | X_k-1^i, Y_k)
    w_k^i ∝ w_k-1^i P(Y_k | X_k^i) P(X_k^i | X_k-1^i) / Q(X_k^i | X_k-1^i, Y_k)
end
η = Σ_i w_k^i;  w_k^i ← w_k^i / η
N_eff = 1 / Σ_{i=1}^N (w_k^i)^2
If N_eff < N_T: {X_k^i, w_k^i} ← Resample{X_k^i, w_k^i}

What is the optimal Q function? If we try to minimize Σ_{i=1}^N (w_k^i)^2, then:
Q*(X_k | X_k-1^i, Y_k) = P(X_k | X_k-1^i, Y_k) = P(Y_k | X_k, X_k-1^i) P(X_k | X_k-1^i) / P(Y_k | X_k-1^i)
and the weight update becomes
w_k^i ∝ w_k-1^i P(Y_k | X_k-1^i) = w_k-1^i ∫_{X_k} P(Y_k | X_k) P(X_k | X_k-1^i) dX_k
Not easy to do!

Asymptotically: Q → Q*. Common choice: Q = P(X_k | X_k-1), the process model; it is sometimes feasible to propose from the process noise. Then
w_k^i ∝ w_k-1^i P(Y_k | X_k^i).
If resampling is done at every step, w_k-1^i = 1/N, so w_k^i ∝ P(Y_k | X_k^i).

SIR: Sampling Importance Resampling
Input: {X_k-1^i, w_k-1^i}, Y_k
for i = 1 : N
    X_k^i ~ P(X_k | X_k-1^i)
    w_k^i = P(Y_k | X_k^i)
end
η = Σ_i w_k^i;  w_k^i ← w_k^i / η
{X_k^i, w_k^i} ← Resample[{X_k^i, w_k^i}]

Example
X_k = X_k-1 / 2 + 25 X_k-1 / (1 + X_k-1^2) + 8 cos(1.2 k) + v_k-1
Y_k = X_k^2 / 20 + η_k
with v_k-1 ~ N(0, Q_k-1) and η_k ~ N(0, R).
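A rough end-to-end SIR run on this benchmark model (a sketch only; the process-noise variance 10, measurement-noise variance 1, particle count, and initial spread are my own assumptions, not values given on the slide):

import numpy as np

rng = np.random.default_rng(1)
Nsteps, Npart, Qv, R = 50, 500, 10.0, 1.0

def f(x, k):
    # process model: X_k = X_{k-1}/2 + 25 X_{k-1}/(1 + X_{k-1}^2) + 8 cos(1.2 k)
    return x / 2.0 + 25.0 * x / (1.0 + x ** 2) + 8.0 * np.cos(1.2 * k)

def h(x):
    # measurement model: Y_k = X_k^2 / 20
    return x ** 2 / 20.0

# simulate a synthetic "true" trajectory and observations
x_true = np.zeros(Nsteps + 1)
y_obs = np.zeros(Nsteps + 1)
for k in range(1, Nsteps + 1):
    x_true[k] = f(x_true[k - 1], k) + rng.normal(0.0, np.sqrt(Qv))
    y_obs[k] = h(x_true[k]) + rng.normal(0.0, np.sqrt(R))

# SIR: propose from the prior, weight by the likelihood, resample every step
X = rng.normal(0.0, 2.0, Npart)                  # initial particle cloud
est = np.zeros(Nsteps + 1)
for k in range(1, Nsteps + 1):
    X = f(X, k) + rng.normal(0.0, np.sqrt(Qv), Npart)         # X_k^i ~ P(X_k | X_k-1^i)
    w = np.exp(-0.5 * (y_obs[k] - h(X)) ** 2 / R) + 1e-300    # w_k^i ∝ P(Y_k | X_k^i)
    w /= w.sum()
    est[k] = np.dot(w, X)                                     # posterior-mean estimate
    X = X[rng.choice(Npart, size=Npart, p=w)]                 # multinomial resampling
print("RMSE:", np.sqrt(np.mean((est[1:] - x_true[1:]) ** 2)))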

MIT OpenCourseWare
http://ocw.mit.edu
12.S990 Quantifying Uncertainty
Fall 2012
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms