Imprecise Filtering for Spacecraft Navigation
1 Imprecise Filtering for Spacecraft Navigation
Tathagata Basu (Durham University), Cristian Greco (Strathclyde University), Thomas Krak (Ghent University)
2 Filtering for Spacecraft Navigation
3 The General Problem

Dynamical system with state $x_t \in \mathcal{X}$, $t \geq 0$. For simple orbital mechanics, $\mathcal{X} = [0,1]^6$. The general statement is given by a (stochastic) differential equation

$$\frac{\mathrm{d}x_t}{\mathrm{d}t} = \underbrace{f(x_t, t \mid \theta)}_{\text{system dynamics}} + \underbrace{\sigma(x_t, t \mid \theta)}_{\text{diffusion coefficient}}\; \underbrace{\frac{\mathrm{d}W_t}{\mathrm{d}t}}_{\text{Brownian motion noise}}$$

with stochastic initial condition $x_0 \sim P(X_0 \mid \theta)$.

- Collect all parameters in $\theta$.
- For given $\theta$, this induces a stochastic process $\{X_t \mid \theta\}_{t \geq 0}$.
- This process is a Markov chain (a simulation sketch follows below).
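To make the model concrete, here is a minimal simulation sketch (not from the slides) of one realisation of $\{X_t \mid \theta\}$ using the Euler-Maruyama scheme; the dynamics `f`, diffusion `sigma`, and initial state `x0` are placeholders for a concrete parametrisation $\theta$.

```python
import numpy as np

def euler_maruyama(f, sigma, x0, t_end, dt, rng=None):
    """Simulate one path of dx = f(x, t) dt + sigma(x, t) dW with Euler-Maruyama."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    path = [x]
    for k in range(int(round(t_end / dt))):
        t = k * dt
        dW = rng.normal(0.0, np.sqrt(dt), size=x.shape)  # Brownian increments
        x = x + f(x, t) * dt + sigma(x, t) @ dW          # sigma(x, t): d-by-d matrix
        path.append(x)
    return np.array(path)
```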
4 The General Problem

Realisations of $\{X_t \mid \theta\}_{t \geq 0}$ cannot be observed directly, i.e. it is a hidden Markov chain. But we have a noisy measurement model: $Y_t \sim \psi(\,\cdot \mid x_t, \theta)$ (just writing $\theta$ for the parameters again).

Filtering Problem. Given a collection $y_{t_{1:n}} = y_{t_1}, \dots, y_{t_n}$, compute the conditional probability $P_\theta(X_t \mid y_{t_{1:n}})$. More generally, compute the conditional expectation of a function $h : \mathcal{X} \to \mathbb{R}$: $\mathbb{E}_\theta[h(X_t) \mid y_{t_{1:n}}]$.
5 Imprecise Filtering

What if we don't know $\theta$ exactly? If we can assume it belongs to a set $\vartheta$, this induces an imprecise stochastic process, i.e. a set of stochastic processes.

Imprecise Filtering Problem. Compute the lower and upper expectations

$$\underline{\mathbb{E}}[h(X_t) \mid y_{t_{1:n}}] = \inf_{\theta \in \vartheta} \mathbb{E}_\theta[h(X_t) \mid y_{t_{1:n}}], \qquad \overline{\mathbb{E}}[h(X_t) \mid y_{t_{1:n}}] = \sup_{\theta \in \vartheta} \mathbb{E}_\theta[h(X_t) \mid y_{t_{1:n}}].$$

These provide robust bounds on the quantity of interest (a brute-force sketch follows below).
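When $\vartheta$ is approximated by a finite set of parameter values, the bounds reduce to a minimum and maximum over precise solutions. A sketch, where `precise_filter_expectation` is a hypothetical routine (e.g. a particle filter) computing $\mathbb{E}_\theta[h(X_t) \mid y_{t_{1:n}}]$ for a single $\theta$; a continuous $\vartheta$ requires genuine optimisation, with the difficulties listed on the next slide.

```python
import numpy as np

def imprecise_filter_bounds(precise_filter_expectation, theta_set, h, y_obs):
    """Lower/upper posterior expectation over a finite approximation of vartheta."""
    values = np.array([precise_filter_expectation(theta, h, y_obs)
                       for theta in theta_set])
    return values.min(), values.max()  # (lower, upper) bounds on E[h(X_t) | y]
```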
6 Some Immediate Difficulties

- Even for fixed $\theta$, there is no analytical solution for $\{X_t \mid \theta\}_{t \geq 0}$, let alone for $\mathbb{E}_\theta[h(X_t) \mid y_{t_{1:n}}]$.
- Convergence of numerical approaches? Uniformly w.r.t. $\vartheta$?
- The optimisation to compute $\underline{\mathbb{E}}$ or $\overline{\mathbb{E}}$ is difficult: multi-modal objective surface, ...
- Semantic issues in the problem statement: physical interpretation of certain relaxations, ...
7 Some Existing Approaches

Precise case:
- Kalman filter (and variants): with Gaussian distributions and linear dynamics, $P(X_t \mid y_{t_{1:n}})$ is Gaussian. Extended Kalman filter: just re-linearise over time.
- Particle filters: essentially a Monte Carlo method.

Imprecise case:
- Special cases: propagation of p-boxes, set-valued observations, ...
- Imprecise Kalman filter [1]: provides a general solution, but so far only solved in specific cases, e.g. the linear Gaussian-vacuous mixture.

[1] A. Benavoli, M. Zaffalon, E. Miranda: Robust filtering through coherent lower previsions, IEEE Transactions on Automatic Control, 2011.
8 The Formal Setup

Basically we just need to apply Bayes' rule:

$$\mathbb{E}[h(X_t) \mid y_{t_{1:n}}] = \frac{\mathbb{E}\left[h(X_t)\,\prod_{i=1}^n \psi(y_{t_i} \mid X_{t_i})\right]}{\mathbb{E}\left[\prod_{i=1}^n \psi(y_{t_i} \mid X_{t_i})\right]}$$

In the imprecise case (let's say $\underline{\mathbb{E}}\left[\prod_{i=1}^n \mathbb{I}_{\{y_{t_i}\}}(Y_{t_i})\right] > 0$):

$$\underline{\mathbb{E}}[h(X_t) \mid y_{t_{1:n}}] = \mu \iff \underline{\mathbb{E}}\left[\big(h(X_t) - \mu\big)\prod_{i=1}^n \mathbb{I}_{\{y_{t_i}\}}(Y_{t_i})\right] = 0.$$

So, we need to compute a joint (lower) expectation.
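The value $\mu$ can be found numerically: the map $\mu \mapsto \underline{\mathbb{E}}\big[(h(X_t) - \mu)\prod_i \mathbb{I}_{\{y_{t_i}\}}(Y_{t_i})\big]$ is continuous and non-increasing (the indicator product is non-negative), so the root can be located by bisection. A sketch, assuming a routine `lower_joint(mu)` that evaluates this joint lower expectation, e.g. via the iterated decomposition on the following slides:

```python
def gbr_bisection(lower_joint, mu_lo, mu_hi, tol=1e-8):
    """Solve lower_joint(mu) = 0 for mu; lower_joint must be non-increasing in mu.

    mu_lo and mu_hi must bracket the root, e.g. min(h) and max(h).
    """
    while mu_hi - mu_lo > tol:
        mu = 0.5 * (mu_lo + mu_hi)
        if lower_joint(mu) > 0:
            mu_lo = mu  # still positive, so the root lies to the right
        else:
            mu_hi = mu
    return 0.5 * (mu_lo + mu_hi)
```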
9 Iterated (Lower) Expectation

Write $g(X_{t_{1:n}}, X_t, Y_{t_{1:n}}) = \big(h(X_t) - \mu\big)\prod_{i=1}^n \mathbb{I}_{\{y_{t_i}\}}(Y_{t_i})$.

Use epistemic irrelevance (the imprecise Markov property) to recursively decompose the joint:

$$\underline{\mathbb{E}}[g(\cdot)] = \underline{\mathbb{E}}\big[\underline{\mathbb{E}}[g(\cdot) \mid X_0]\big] = \underline{\mathbb{E}}\Big[\underline{\mathbb{E}}\big[\underbrace{\underline{\mathbb{E}}[g(\cdot) \mid X_0, X_{t_1}]}_{=\,\underline{\mathbb{E}}[g(\cdot) \mid X_{t_1}]} \,\big|\, X_0\big]\Big] = \underline{\mathbb{E}}\Big[\underline{\mathbb{E}}\big[\underline{\mathbb{E}}[g(\cdot) \mid X_{t_1}] \,\big|\, X_0\big]\Big]$$
$$= \underline{\mathbb{E}}\Big[\underline{\mathbb{E}}\big[\underline{\mathbb{E}}\big[\underline{\mathbb{E}}[g(\cdot) \mid Y_{t_1}, X_{t_1}] \,\big|\, X_{t_1}\big] \,\big|\, X_0\big]\Big]$$
$$\vdots$$
$$= \underline{\mathbb{E}}\Big[\underline{\mathbb{E}}\big[\underline{\mathbb{E}}\big[\underline{\mathbb{E}}\big[\underline{\mathbb{E}}\big[\underline{\mathbb{E}}[g(\cdot) \mid X_{t_n}, Y_{t_n}] \,\big|\, X_{t_n}\big] \,\big|\, Y_{t_1}, X_{t_1}\big] \,\big|\, X_{t_1}\big] \,\big|\, X_0\big]\Big]$$
10 Iterated (Lower) Expectation

Now resolve the remaining conditionals inward-outward, starting with the innermost term $\underline{\mathbb{E}}[g(\cdot) \mid X_{t_n}, Y_{t_n}]$.

Fix all variables except $X_t$:

$$\underline{\mathbb{E}}\Big[\big(h(X_t) - \mu\big)\prod_{i=1}^n \mathbb{I}_{y_{t_i}}(\gamma_{t_i}) \,\Big|\, X_{t_n} = x_{t_n}, Y_{t_n} = \gamma_{t_n}\Big]$$
$$= \underline{\mathbb{E}}\big[h(X_t) - \mu \,\big|\, X_{t_n} = x_{t_n}, Y_{t_n} = \gamma_{t_n}\big] \prod_{i=1}^n \mathbb{I}_{y_{t_i}}(\gamma_{t_i})$$
$$= \underline{\mathbb{E}}\big[h(X_t) - \mu \,\big|\, X_{t_n} = x_{t_n}\big] \prod_{i=1}^n \mathbb{I}_{y_{t_i}}(\gamma_{t_i}) =: g'(\cdot)$$
11 Iterated (Lower) Expectation

We have a new function $g'$ that no longer depends on $X_t$. After substitution:

$$\underline{\mathbb{E}}\Big[\underline{\mathbb{E}}\big[\underline{\mathbb{E}}\big[\underline{\mathbb{E}}\big[\underline{\mathbb{E}}[g'(\cdot) \mid X_{t_n}] \,\big|\, Y_{t_1}, X_{t_1}\big] \,\big|\, X_{t_1}\big] \,\big|\, X_0\big]\Big]$$

Next, resolve $\underline{\mathbb{E}}[g'(\cdot) \mid X_{t_n}]$. Fix all variables except $Y_{t_n}$:

$$\underline{\mathbb{E}}\big[g'(\cdot) \,\big|\, X_{t_n} = x_{t_n}\big] = \underline{\mathbb{E}}\Big[\underline{\mathbb{E}}\big[h(X_t) - \mu \,\big|\, X_{t_n} = x_{t_n}\big]\, \mathbb{I}_{y_{t_n}}(Y_{t_n}) \prod_{i=1}^{n-1} \mathbb{I}_{y_{t_i}}(\gamma_{t_i}) \,\Big|\, X_{t_n} = x_{t_n}\Big]$$
$$= \prod_{i=1}^{n-1} \mathbb{I}_{y_{t_i}}(\gamma_{t_i})\; \underline{\mathbb{E}}\big[h(X_t) - \mu \,\big|\, X_{t_n} = x_{t_n}\big] \cdot \begin{cases} \underline{\mathbb{E}}\big[\mathbb{I}_{y_{t_n}}(Y_{t_n}) \,\big|\, X_{t_n} = x_{t_n}\big] & \text{if } \underline{\mathbb{E}}\big[h(X_t) - \mu \,\big|\, X_{t_n} = x_{t_n}\big] \geq 0 \\ \overline{\mathbb{E}}\big[\mathbb{I}_{y_{t_n}}(Y_{t_n}) \,\big|\, X_{t_n} = x_{t_n}\big] & \text{otherwise} \end{cases}$$
$$=: g''(\cdot)$$
12 Iterated (Lower) Expectation

We now have

$$\underline{\mathbb{E}}\Big[\underline{\mathbb{E}}\big[\underline{\mathbb{E}}\big[\underline{\mathbb{E}}[g''(\cdot) \mid Y_{t_1}, X_{t_1}] \,\big|\, X_{t_1}\big] \,\big|\, X_0\big]\Big]$$

where $g''(\cdot)$ depends on $X_{t_1}, \dots, X_{t_{n-1}}, X_{t_n}$ and $Y_{t_1}, \dots, Y_{t_{n-1}}$. This is what we started with, but reduced. Now just recurse:

$$\underline{\mathbb{E}}\Big[\underbrace{\underline{\mathbb{E}}\big[\underline{\mathbb{E}}\big[\underline{\mathbb{E}}[g''(\cdot) \mid Y_{t_1}, X_{t_1}] \,\big|\, X_{t_1}\big] \,\big|\, X_0\big]}_{=:\, g'''(X_0)}\Big] = \underline{\mathbb{E}}\big[g'''(X_0)\big].$$
13 Remaining Problem, in Summary

We need:
- Lower and upper likelihoods of states given measurements: $\underline{\mathbb{E}}\big[\mathbb{I}_{y_t}(Y_t) \mid X_t = x_t\big]$ and $\overline{\mathbb{E}}\big[\mathbb{I}_{y_t}(Y_t) \mid X_t = x_t\big]$.
- For arbitrary $h : \mathcal{X} \to \mathbb{R}$, to evaluate $\underline{\mathbb{E}}[h(X_0)]$ (okay) and $\underline{\mathbb{E}}[h(X_{t_n}) \mid X_{t_{n-1}}]$ (not okay).
14 Some Simplifying Assumptions

Assume the dynamics are precise and time-homogeneous:

$$\frac{\mathrm{d}x_t}{\mathrm{d}t} = f(x_t) + \sigma(x_t)\frac{\mathrm{d}W_t}{\mathrm{d}t}$$

Let $\mathcal{X} = [0,1]^d$ and $h_t : x \mapsto \mathbb{E}[h(X_t) \mid X_0 = x]$. Then (the Kolmogorov backward equation)

$$\frac{\partial h_t(x)}{\partial t} = \sum_{i=1}^d f_i(x) \frac{\partial h_t(x)}{\partial x_i} + \frac{1}{2}\sum_{i=1}^d \sum_{j=1}^d \big(\sigma(x)\sigma(x)^\top\big)_{ij} \frac{\partial^2 h_t(x)}{\partial x_i \partial x_j}$$

with $h_0 = h$. To get $h_t(x) = \mathbb{E}[h(X_t) \mid X_0 = x]$ we need to solve this PDE. In general there is no analytical solution.
15 Some Further Simplifying Assumptions

Assume the dynamics are also deterministic, i.e. the process is degenerate and described by $f(x_t)$. We keep the stochastic (and imprecise) initial distribution and measurements. We get

$$\frac{\partial h_t(x)}{\partial t} = \sum_{i=1}^d f_i(x) \frac{\partial h_t(x)}{\partial x_i}$$

with $h_0 = h$. We still need to solve a PDE, and there is still no analytical solution. We're going to approximate this.
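This transport PDE is exactly what makes the pointwise solution on the next slide work: by the method of characteristics, $h_t(x) := h(x_t(x))$ along the ODE flow solves it. A short check (standard, not spelled out on the slides):

```latex
Let $x_t(x)$ denote the flow of $\dot{x} = f(x)$ started at $x$, and set
$h_t(x) := h\bigl(x_t(x)\bigr)$. By the chain rule,
\[
  \frac{\partial h_t(x)}{\partial t}
  = \nabla h\bigl(x_t(x)\bigr)^{\top} f\bigl(x_t(x)\bigr)
  = \nabla h\bigl(x_t(x)\bigr)^{\top} D_x x_t(x)\, f(x)
  = f(x)^{\top} \nabla_x h_t(x)
  = \sum_{i=1}^{d} f_i(x)\, \frac{\partial h_t(x)}{\partial x_i},
\]
using $D_x x_t(x)\, f(x) = f\bigl(x_t(x)\bigr)$, which holds because the flow
commutes with itself. Moreover $h_0(x) = h(x)$, so $h_t$ solves the PDE above.
```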
16 Approximate Solution

The PDE is easy enough to solve pointwise: for any $x_0 \in \mathcal{X}$,

$$\mathbb{E}[h(X_t) \mid X_0 = x_0] = h(x_t), \quad \text{with} \quad x_t = x_0 + \int_0^t f(x_\tau)\, \mathrm{d}\tau.$$

Just do this for a lot of different $x_0$, then extend the pointwise estimates to the entire domain $\mathcal{X}$.
17 Dynamics Propagation

- Take a collection of starting points $Z = z_1, \dots, z_m$.
- Compute $Z' = z_1^{(t)}, \dots, z_m^{(t)}$, with $z_i^{(t)} = z_i + \int_0^t f\big(z_i^{(\tau)}\big)\, \mathrm{d}\tau$ for $i = 1, \dots, m$.
- This is just an ODE: use your favourite numerical integrator, i.e. explicit/implicit Euler, Runge-Kutta schemes, ... (a sketch follows below).
- Note: this could be embarrassingly parallelised; a modern GPU can solve hundreds to thousands in parallel.
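A minimal sketch of this step: classical fourth-order Runge-Kutta, vectorised so that all $m$ starting points (the rows of `Z`) are propagated simultaneously; the dynamics `f` is assumed to map a batch of states to a batch of derivatives.

```python
import numpy as np

def rk4_step(f, Z, dt):
    """One classical RK4 step, applied to every row of Z at once."""
    k1 = f(Z)
    k2 = f(Z + 0.5 * dt * k1)
    k3 = f(Z + 0.5 * dt * k2)
    k4 = f(Z + dt * k3)
    return Z + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def propagate(f, Z, t, n_substeps=100):
    """Approximate the flow z_i -> z_i^(t) for all starting points simultaneously."""
    dt = t / n_substeps
    for _ in range(n_substeps):
        Z = rk4_step(f, Z, dt)
    return Z
```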
18 Propagated Dynamics

We now know $h_t(z_i) = h\big(z_i^{(t)}\big) = \mathbb{E}[h(X_t) \mid X_0 = z_i]$ for $i = 1, \dots, m$. Now extend this to $\hat{h}_t$ on the entire domain $\mathcal{X}$.
19 Radial Basis Function Interpolation

- Training points $Z = z_1, \dots, z_m \in \mathcal{X}$.
- Radial basis functions $\phi : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$, $\phi(z, c) = \phi(\lVert z - c \rVert)$.
- Basis expansion $\Phi_z \in \mathbb{R}^m$ of any $z \in \mathcal{X}$: $\Phi_z = [\phi(z, z_1) \;\cdots\; \phi(z, z_m)]$.
- Can find $w_h \in \mathbb{R}^m$ to approximate any ("nice") function $h : \mathcal{X} \to \mathbb{R}$:

$$h(z) \approx \hat{h}(z) = \sum_{i=1}^m w_i\, \phi(z, z_i) = \Phi_z w_h$$
20 [Figure: radial basis function example from N. Flyer, G.B. Wright: A radial basis function method for the shallow water equations on a sphere, Proceedings of the Royal Society A, 2009.]
21 Radial Basis Function Interpolation

Construct the design (/kernel/feature/...) matrix

$$\Phi_Z = \begin{bmatrix} \Phi_{z_1} \\ \vdots \\ \Phi_{z_m} \end{bmatrix} = \begin{bmatrix} \phi(z_1, z_1) & \cdots & \phi(z_1, z_m) \\ \vdots & \ddots & \vdots \\ \phi(z_m, z_1) & \cdots & \phi(z_m, z_m) \end{bmatrix}$$

Then $\Phi_Z$ is usually invertible if all $z_1, \dots, z_m$ are unique. Let $h(Z) = [h(z_1) \;\cdots\; h(z_m)]^\top$. Find $w_h$ that interpolates $h$ on $Z$: $\Phi_Z w_h = h(Z)$. In other words, we get $w_h = \Phi_Z^{-1} h(Z)$ (a sketch follows below).
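A sketch of the interpolation step, with a Gaussian basis function as one common choice (the shape parameter `eps` is one of the tuning knobs mentioned on the next slide). In practice one solves the linear system rather than forming $\Phi_Z^{-1}$ explicitly, since $\Phi_Z$ is typically ill-conditioned.

```python
import numpy as np

def design_matrix(A, B, eps=1.0):
    """Phi[i, j] = phi(||a_i - b_j||), here with a Gaussian basis function."""
    sq_dists = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-(eps ** 2) * sq_dists)

def fit_interpolant(Z, h_values, eps=1.0):
    """Return h_hat with h_hat(Z) = h_values (exact interpolation on Z)."""
    w = np.linalg.solve(design_matrix(Z, Z, eps), h_values)       # w = Phi_Z^{-1} h(Z)
    return lambda X: design_matrix(np.atleast_2d(X), Z, eps) @ w  # Phi_x w
```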
22 Radial Basis Function Interpolation

Pros:
- Straightforward, conceptually and to implement: meshless (i.e. arbitrary $Z$), so no need to partition $\mathcal{X}$; just solve a linear system.
- Well-developed theory.
- Good performance for approximating PDE solutions: near-spectral (exponential) convergence.
- Many parameters to fine-tune performance: basis functions, hyperparameters, regularisation, ...

Cons:
- Many parameters to fine-tune.
- Numerical instability (the system is typically ill-conditioned).
23 Approximation in Summary

We want to compute $\hat{h}_t(X_0) \approx h_t(X_0) := \mathbb{E}[h(X_t) \mid X_0]$. We know that $h_t(x_0) = h(x_t)$, where $x_t = x_0 + \int_0^t f(x_\tau)\, \mathrm{d}\tau$.

We proceed as follows:
1. Choose some $Z$.
2. Compute $Z'$ for time $t$.
3. Compute the design matrix $\Phi_Z$.
4. Compute $w_{h_t} = \Phi_Z^{-1} h(Z')$.
5. Let $\hat{h}_t(x) := \Phi_x w_{h_t}$ for any $x \in \mathcal{X}$.
24-37 Iterated Expectation, Again (incremental animation frames, consolidated)

$$\mathbb{E}[h(X_{t_n})] = \mathbb{E}\big[\mathbb{E}\big[\cdots \mathbb{E}\big[\mathbb{E}[h(X_{t_n}) \mid X_{t_{n-1}}] \mid \cdots\big] \mid X_{t_1}\big] \mid X_0\big]$$

[Diagram: the chain $X_0 \to X_{t_1} \to \cdots \to X_{t_{n-1}} \to X_{t_n}$, annotated step by step across the frames.]
- Set $h_{t_n} := h$ at the final node $X_{t_n}$.
- Place training points $Z$ at $X_{t_{n-1}}$ and propagate them through $f(\cdot)$ to obtain $Z'$ at $X_{t_n}$.
- Compute $w_{h_{t_n}} = \Phi_Z^{-1} h_{t_n}(Z')$, which yields the interpolant $\hat{h}_{t_{n-1}}$ at $X_{t_{n-1}}$, so that $\mathbb{E}[h(X_{t_n})] \approx \mathbb{E}\big[\mathbb{E}\big[\cdots \mathbb{E}\big[\hat{h}_{t_{n-1}}(X_{t_{n-1}}) \mid \cdots\big] \mid X_{t_1}\big] \mid X_0\big]$.
- Repeat the same propagate-and-interpolate step backwards through $X_{t_{n-2}}, \dots, X_{t_1}$, obtaining $\hat{h}_{t_1}$ and finally $\hat{h}_0$, so that $\mathbb{E}[h(X_{t_n})] \approx \mathbb{E}\big[\mathbb{E}\big[\hat{h}_{t_1}(X_{t_1}) \mid X_0\big]\big] \approx \mathbb{E}\big[\hat{h}_0(X_0)\big]$.
- At the end, $\mathbb{E}\big[\hat{h}_0(X_0)\big] = \int_{\mathcal{X}} \hat{h}_0(x_0)\, \mathrm{d}P(X_0 = x_0)$.
38 The Crucial Part

How to make this tractable:
1. Make $0 =: t_0, t_1, \dots, t_n$ a uniform partition of $[0, t]$, so $t_i - t_{i-1} = \Delta$ for all $i = 1, \dots, n$.
2. Choose the same $Z$ at every step.
39-40 The Crucial Part

- $Z'$ is the same at each step (by homogeneity).
- You can precompute $Z$, $Z'$, $\Phi_Z$, $\Phi_{Z'}$, $\Phi_Z^{-1}$, and $\Phi_{Z'} \Phi_Z^{-1} =: K$: only once, offline, at $O(m)$ ODE propagations plus $O(m^3)$ for the linear algebra.

It remains to solve the recursion:

$$\hat{h}_{t_n}(x) = h(x)$$
$$\hat{h}_{t_{n-1}}(x) = \Phi_x w_{\hat{h}_{t_n}} = \Phi_x \Phi_Z^{-1} h(Z') = \Phi_x \Phi_Z^{-1} K^0 h(Z')$$
$$\hat{h}_{t_{n-2}}(x) = \Phi_x w_{\hat{h}_{t_{n-1}}} = \Phi_x \Phi_Z^{-1} \Phi_{Z'} \Phi_Z^{-1} h(Z') = \Phi_x \Phi_Z^{-1} K^1 h(Z')$$
$$\vdots$$
$$\hat{h}_{t_{n-l}}(x) = \Phi_x \Phi_Z^{-1} K^{l-1} h(Z') \quad \text{for } l = 1, \dots, n.$$

After precomputing, we can find $\hat{h}_0$ for any $h$ in $O(nm^2)$.
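A sketch tying the pieces together, reusing `propagate` and `design_matrix` from the earlier sketches: precompute once offline, then evaluate $\hat{h}_0$ with repeated matrix-vector products ($O(nm^2)$) rather than forming matrix powers of $K$. The function `h` is assumed vectorised over rows of its input.

```python
import numpy as np

def precompute(f, Z, delta, eps=1.0):
    """Offline step: propagate Z over one interval of length delta and build K."""
    Z_prop = propagate(f, Z, delta)                      # Z'
    Phi_Z_inv = np.linalg.inv(design_matrix(Z, Z, eps))  # ill-conditioned in practice
    K = design_matrix(Z_prop, Z, eps) @ Phi_Z_inv        # K = Phi_Z' Phi_Z^{-1}
    return Z_prop, Phi_Z_inv, K

def h_hat_0(h, x, Z, Z_prop, Phi_Z_inv, K, n, eps=1.0):
    """Evaluate h_hat_0(x) = Phi_x Phi_Z^{-1} K^{n-1} h(Z') in O(n m^2)."""
    v = h(Z_prop)                 # h(Z'), a length-m vector
    for _ in range(n - 1):
        v = K @ v                 # one O(m^2) matvec per time step
    w = Phi_Z_inv @ v
    return design_matrix(np.atleast_2d(x), Z, eps) @ w
```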
41 But does it work?
42 Dynamics of a Spacecraft [image]
43 Dynamics of a Spacecraft Pendulum [image]
44 Pendulum Setup

Pendulum dynamics model:

$$\frac{\mathrm{d}}{\mathrm{d}t} x_t = \frac{\mathrm{d}}{\mathrm{d}t}\begin{bmatrix} \alpha_t \\ \dot{\alpha}_t \end{bmatrix} = \begin{bmatrix} \dot{\alpha}_t \\ -(g/l)\sin\alpha_t - \rho\,\dot{\alpha}_t \end{bmatrix}$$

with $l = 1$, $g \approx 9.8$ and $\rho \in \{0, 0.5\}$.

- Known true initial position $x_0 = [\pi/2 \;\; 0]^\top$.
- Sequential reinitialisation with the best guess after 1 second.
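The corresponding vector field, in the batch form used by the propagation sketch above (each row of `X` is a state $[\alpha, \dot{\alpha}]$):

```python
import numpy as np

G, L = 9.8, 1.0

def pendulum(X, rho=0.0):
    """Damped pendulum: d[alpha, alpha_dot]/dt for every row of X."""
    alpha, alpha_dot = X[:, 0], X[:, 1]
    return np.stack([alpha_dot, -(G / L) * np.sin(alpha) - rho * alpha_dot], axis=1)
```

For example, `propagate(pendulum, Z, 1.0)` carries a grid of pendulum states forward by one second.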
45 No Friction, Known Start, No Measurements
[Plots: RMSE vs. number of training points; angle vs. time.]

46 With Friction, Known Start, No Measurements
[Plots: RMSE vs. number of training points; angle vs. time.]
47 Pendulum Setup, With Measurements

- The model thinks the friction term is $\rho = 0$.
- Measurement model: $Y_t \sim \mathcal{N}(l \sin\alpha_t, \sigma^2)$.
- Unknown initial position with uniform $P(X_0) = U(-2, 2) \times U(-5, 5)$.
- Sequential reinitialisation with the same uniform after 1 second.
48 No Friction, Uniform Start, 20 Measurements/s
[Plots: RMSE vs. number of training points (EKF baseline: RMSE = 0.171); angle vs. time with observations.]

49 Friction, Uniform Start, 20 Measurements/s
[Plots: RMSE vs. number of training points (EKF: 0.349, EKF': 0.200); angle vs. time with observations.]
50 Pendulum Setup, Imprecise

Imprecise measurement model with lower and upper likelihood:

$$\underline{\psi}(y_t \mid x_t) = 0.5\, \mathcal{N}(y_t \mid l \sin\alpha_t, \sigma^2), \qquad \overline{\psi}(y_t \mid x_t) = 1.5\, \mathcal{N}(y_t \mid l \sin\alpha_t, \sigma^2)$$

- Unknown initial position with a vacuous model on $[-2, 2] \times [-5, 5]$.
- Sequential reinitialisation with the same vacuous model after 1 second.
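A sketch of this likelihood pair, combined with the sign rule from slide 11 (a non-negative propagated expectation picks up the lower likelihood, a negative one the upper likelihood); `sd` stands in for the (unspecified) measurement standard deviation.

```python
import numpy as np
from scipy.stats import norm

def likelihood_bounds(y, alpha, sd, l=1.0):
    """Lower and upper likelihood of observation y given pendulum angle alpha."""
    dens = norm.pdf(y, loc=l * np.sin(alpha), scale=sd)
    return 0.5 * dens, 1.5 * dens

def apply_measurement(value, y, alpha, sd):
    """Sign rule (slide 11): scale a non-negative factor by the lower
    likelihood, a negative one by the upper likelihood."""
    lower, upper = likelihood_bounds(y, alpha, sd)
    return value * (lower if value >= 0 else upper)
```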
51 Vacuous Start, 20 Measurements/s, Imprecise Measurement Models
[Plots: angle vs. time, two panels.]
52 Open Questions

- Does this scale?
- Numerical stability issues? Preconditioning.
- Smart selection/pruning of training points? E.g. regularisation, LASSO.
- Generalisation to stochastic (and/or imprecise) dynamics?
- Find an informative sequential prior?
- Interpretation of $K = \Phi_{Z'} \Phi_Z^{-1}$?