Introduction to Artificial Intelligence
V22.0472-001 Fall 2009
Lecture 18: Particle & Kalman Filtering
Rob Fergus, Dept. of Computer Science, Courant Institute, NYU
Slides from John DeNero, Dan Klein, Haris Baltzakis, Dieter Fox

Announcements
The final exam will be at 7pm on Wednesday December 14th (the date of the last class) and will be 1.5 hrs long. I won't ask anything about the last few classes.

Recap: Reasoning Over Time
Stationary Markov models: a chain X1 -> X2 -> X3 -> X4 with fixed transition probabilities (in the running example, 0.7 of staying in the same state and 0.3 of switching).
Hidden Markov models: hidden states X1 ... X5 emitting evidence E1 ... E5. Example emission model: P(umbrella | rain) = 0.9, P(no umbrella | rain) = 0.1, P(umbrella | sun) = 0.2, P(no umbrella | sun) = 0.8.

Recap: Filtering
Elapse time: compute P(X_t | e_{1:t-1}).
Observe: compute P(X_t | e_{1:t}).
Belief sequence <P(rain), P(sun)> in the umbrella example: <0.5, 0.5> (prior on X1) -> <0.82, 0.18> (observe) -> <0.63, 0.37> (elapse time) -> <0.88, 0.12> (observe).

Particle Filtering
Sometimes |X| is too big to use exact inference: |X| may be too big to even store B(X) (e.g. X is continuous), and |X|^2 may be too big to do updates. Solution: approximate inference. Track samples of X, not all values; the samples are called particles. Time per step is linear in the number of samples, but the number needed may be large. In memory: a list of particles, not states. This is how robot localization works in practice.
[Figure: a 3x3 grid of belief values, 0.0 0.1 0.0 / 0.0 0.0 0.2 / 0.0 0.2 0.5.]

Example: State Representations for Robot Localization
Particle filters (Monte Carlo localization) vs. grid-based approaches (Markov localization).
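As a sanity check on the filtering recap, here is a minimal sketch of the exact forward update on the umbrella model; the transition probability 0.7 and the emission table come from the slides above, while the function names and structure are my own illustration.

```python
# Exact HMM filtering (forward algorithm) on the umbrella model.
# States: 0 = rain, 1 = sun. Illustrative sketch; names are not from the slides.

T = [[0.7, 0.3],            # P(X_t | X_{t-1} = rain)
     [0.3, 0.7]]            # P(X_t | X_{t-1} = sun)
E = {True:  [0.9, 0.2],     # P(umbrella | rain), P(umbrella | sun)
     False: [0.1, 0.8]}     # P(no umbrella | rain), P(no umbrella | sun)

def observe(belief, umbrella):
    """Weight the belief by the evidence likelihood, then renormalize."""
    b = [belief[s] * E[umbrella][s] for s in (0, 1)]
    z = sum(b)              # z = P(e_t | e_{1:t-1}), the normalizer
    return [p / z for p in b]

def elapse_time(belief):
    """Push the belief through the transition model."""
    return [sum(belief[s] * T[s][s2] for s in (0, 1)) for s2 in (0, 1)]

b = [0.5, 0.5]              # prior on X1
b = observe(b, True)        # -> approx [0.82, 0.18]
b = elapse_time(b)          # -> approx [0.63, 0.37]
b = observe(b, True)        # -> approx [0.88, 0.12]
print([round(p, 2) for p in b])
```

Running this reproduces the belief sequence quoted on the recap slide.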

Representation: Particles
Our representation of P(X) is now a list of N particles (samples). Generally N << |X|; storing a map from X to counts would defeat the point. P(x) is approximated by the number of particles with value x, so many x will have P(x) = 0! More particles, more accuracy. For now, all particles have a weight of 1.
Particles: (2,3), (3,2), (3,2), (2,1), (2,1), ...

Particle Filtering: Elapse Time
Each particle is moved by sampling its next position from the transition model: x' ~ P(X' | x). This is like prior sampling: sample frequencies reflect the transition probabilities. Here, most samples move clockwise, but some move in another direction or stay in place. This captures the passage of time. If we have enough samples, the result is close to the exact values before and after (consistent).

Particle Filtering: Observe
Slightly trickier. Don't do rejection sampling (why not?). We don't sample the observation; we fix it. This is similar to likelihood weighting, so we downweight our samples based on the evidence: w(x) = P(e | x). Note that, as before, the weights don't sum to one, since most have been downweighted (in fact they sum to an approximation of P(e)).

Particle Filtering: Resample
Rather than tracking weighted samples, we resample: N times, we choose from our weighted sample distribution (i.e. draw with replacement). This is equivalent to renormalizing the distribution. Now the update is complete for this time step; continue with the next one.
Old particles (weighted): e.g. (2,1) w=0.9, (2,1) w=0.9, (3,1) w=0.4, (3,2) w=0.3, (2,2) w=0.4, (1,1) w=0.4, ...
New particles (after resampling, all weights 1): (3,2), (2,2), (1,1), (3,1), (1,1), ...

Particle Filter Algorithm
Bel(x_t) = η p(z_t | x_t) ∫ p(x_t | x_{t-1}, u_{t-1}) Bel(x_{t-1}) dx_{t-1}
Draw x_{t-1}^i from Bel(x_{t-1}); draw x_t^i from p(x_t | x_{t-1}^i, u_{t-1}).
Importance factor for x_t^i:
w_t^i = target distribution / proposal distribution
      = [η p(z_t | x_t) p(x_t | x_{t-1}, u_{t-1}) Bel(x_{t-1})] / [p(x_t | x_{t-1}, u_{t-1}) Bel(x_{t-1})]
      ∝ p(z_t | x_t)

Robot Localization
In robot localization: we know the map, but not the robot's position. Observations may be vectors of range finder readings. The state space and readings are typically continuous (it works basically like a very fine grid), so we cannot store B(X). Particle filtering is a main technique.
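Here is a minimal sketch of one such elapse/observe/resample update in Python, assuming a user-supplied transition sampler and observation likelihood (the names below are placeholders, not from the slides).

```python
import random

def particle_filter_step(particles, evidence, sample_transition, likelihood):
    """One elapse-time / observe / resample update.

    particles:         list of states, each with implicit weight 1
    sample_transition: x -> a sample x' from P(X' | x)      (assumed given)
    likelihood:        (evidence, x) -> P(evidence | x)     (assumed given)
    """
    # Elapse time: move each particle by sampling from the transition model.
    particles = [sample_transition(x) for x in particles]

    # Observe: downweight each particle by the evidence likelihood.
    # The weights sum to (an approximation of) P(e), not to 1.
    weights = [likelihood(evidence, x) for x in particles]

    # Resample: draw N particles with replacement, proportional to weight.
    # Afterwards all particles again carry weight 1.
    return random.choices(particles, weights=weights, k=len(particles))
```

In practice the weights can all be (near) zero when the evidence is very unlikely under every particle; real implementations guard against this before resampling.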

Robot Motion Model
[Figure: particle clouds sampled from the motion model, spreading along the trajectory from the start pose.]

Proximity Sensor Model
[Figures: measurement distributions for a laser sensor and a sonar sensor.]


Robotic Cars
DARPA Grand Challenge, DARPA Urban Challenge. http://www.youtube.com/watch?v=sqfemr50hak

SLAM
SLAM = Simultaneous Localization And Mapping. We do not know the map or our location; our belief state is over maps and positions! Main techniques: Kalman filtering (Gaussian HMMs) and particle methods.

Example: State Representations for Robot Localization
Grid-based approaches (Markov localization), particle filters (Monte Carlo localization), Kalman tracking. [Figure credit: DP-SLAM, Ron Parr.]

Kalman Filters - Equations
A recursive filter for estimating the state of a linear dynamical system from noisy measurements.
Process dynamics (motion model): x_t = A x_{t-1} + w, with w ~ N(0, Γ), i.e. p(x_t | x_{t-1}) = N(A x_{t-1}, Γ).
Measurements (observation model): y_t = C x_t + v, with v ~ N(0, Σ), i.e. p(y_t | x_t) = N(C x_t, Σ).
Here A is the state transition matrix (n x n), C is the measurement matrix (m x n), w is process noise (in R^n), and v is measurement noise (in R^m).
Gaussian density: N(x; m, V) = (1 / ((2π)^(1/2) |V|^(1/2))) exp(-(1/2)(x - m)^T V^(-1) (x - m)).

Kalman Filters - Update
Predict state and covariance: x_{t|t-1} = A x_{t-1}; P_{t|t-1} = A P_{t-1} A^T + Γ.
Compute gain: K_t = P_{t|t-1} C^T (C P_{t|t-1} C^T + Σ)^(-1).
Compute innovation: J_t = y_t - C x_{t|t-1}.
Update: x_t = x_{t|t-1} + K_t J_t; P_t = (I - K_t C) P_{t|t-1}.

Kalman Filter - Example
A 1D robot: A = [1], B = [1], C = [1], so x_t = x_{t-1} + u_t + w and y_t = d - x_t + v, where u_t is the commanded motion and y_t is the measured distance to a wall at known position d. The slides step through Predict, Compute Innovation, Compute Gain, and Update on this example.
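Below is a minimal numpy sketch of these predict/update equations. The notation (A, C, Γ for process noise, Σ for measurement noise, gain K, innovation J) follows the slides; the function name, variable names, and the tiny numeric example are my own illustration, with the control term omitted for brevity.

```python
import numpy as np

def kalman_step(x, P, y, A, C, Gamma, Sigma):
    """One Kalman filter cycle in the slides' notation.

    x, P: previous state estimate and covariance
    y:    new measurement
    """
    # Predict state and covariance (the slides' control term B u_t
    # would be added to x_pred here).
    x_pred = A @ x
    P_pred = A @ P @ A.T + Gamma
    # Compute gain.
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + Sigma)
    # Compute innovation.
    J = y - C @ x_pred
    # Update.
    x_new = x_pred + K @ J
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Scalar example: x_t = x_{t-1} + w, y_t = x_t + v.
A = np.array([[1.0]]); C = np.array([[1.0]])
Gamma = np.array([[0.1]]); Sigma = np.array([[0.5]])
x, P = np.array([0.0]), np.array([[1.0]])
for y in [0.9, 1.1, 1.0]:          # noisy position readings
    x, P = kalman_step(x, P, np.array([y]), A, C, Gamma, Sigma)
print(x, P)                        # estimate moves toward ~1.0, covariance shrinks
```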

Kalman Filter Example (continued)
[Figures: the Predict, Compute Innovation, Compute Gain, and Update steps repeated over successive time steps of the 1D example.]

Kalman Filter Applications
Apollo guidance computer, cruise missiles, airplane autopilots, robotics, finance.

Continuous State Approaches
Perform very accurately if the inputs are precise (performance is optimal with respect to any criterion in the linear case). Computationally efficient. But they require that the initial state is known, they cannot recover from catastrophic failures, and they cannot track multiple hypotheses about the state (Gaussians have only one mode).

Discrete State Approaches
Can (to some degree) operate even when the initial pose is unknown (start from a uniform distribution). Can deal with noisy measurements. Can represent ambiguities (multi-modal distributions). But computation time scales heavily with the number of possible states (dimensionality of the grid, number of samples, size of the map); accuracy is limited by the grid cell size / the number of particles and the sampling method; and the required number of particles is unknown.

Best Explanation Queries
For an HMM with states X1 ... X5 and evidence E1 ... E5, the query is the most likely sequence: argmax over x_{1:t} of P(x_{1:t} | e_{1:t}).

State Path Trellis
A state trellis is a graph of states and transitions over time. Each arc represents some transition and carries a weight; each path is a sequence of states, and the product of the weights on a path is that sequence's probability.

Viterbi Algorithm
We can think of the Forward algorithm (and now Viterbi) as computing sums over all paths (Forward) or the best path (Viterbi) in this graph.

Example
[Figures: a worked trellis example; Andrew Viterbi.]
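To tie the trellis picture back to the earlier umbrella model, here is a minimal Viterbi sketch. The transition and emission numbers repeat the recap slides; the function, state encoding, and example query are my own illustration.

```python
# Viterbi: most likely state sequence through the trellis.
# States: 0 = rain, 1 = sun; evidence: True = umbrella observed.
T = [[0.7, 0.3], [0.3, 0.7]]           # transition weights on the arcs
E = {True: [0.9, 0.2], False: [0.1, 0.8]}
prior = [0.5, 0.5]

def viterbi(evidence):
    """Return the maximum-probability path x_1..x_T given e_1..e_T."""
    # best[s] = probability of the best path ending in state s.
    best = [prior[s] * E[evidence[0]][s] for s in (0, 1)]
    back = []                          # backpointers, one list per time step
    for e in evidence[1:]:
        prev = best
        # For each next state s2, keep the best (probability, predecessor).
        step = [max((prev[s] * T[s][s2], s) for s in (0, 1)) for s2 in (0, 1)]
        best = [p * E[e][s2] for s2, (p, _) in enumerate(step)]
        back.append([s for _, s in step])
    # Follow backpointers from the best final state.
    path = [max((0, 1), key=lambda s: best[s])]
    for ptrs in reversed(back):
        path.append(ptrs[path[-1]])
    return list(reversed(path))

print(viterbi([True, True, False]))    # -> [0, 0, 1]: rain, rain, sun
```

Replacing the max in each step by a sum recovers the Forward algorithm, which is exactly the "sums over all paths vs. best path" view of the trellis above.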