CSE 483: Mobile Robotics. Extended Kalman filter for localization (worked-out example)


DRAFT (a final version will be posted shortly)

CSE 483: Mobile Robotics
Lecture by: Prof. K. Madhava Krishna
Lecture # 4
Scribes: Dhaivat Bhatt, Isha Dua
Date: 14th November, 2016 (Monday)

Extended Kalman filter for localization (worked-out example)

This document walks through the computational aspects of EKF localization on a toy example. It assumes the reader is already familiar with the EKF as a technique for solving the localization problem.

Here is the algorithm. At each iteration it is fed the state estimate of the robot (µ_{t-1}), the covariance of the state (Σ_{t-1}) and the control (u_t) at time t, and it returns the updated state estimate (µ_t) together with the updated uncertainty of the robot state (Σ_t):

1: EXTENDED_KALMAN_FILTER(µ_{t-1}, Σ_{t-1}, u_t, z_t)
2:     µ̄_t = g(u_t, µ_{t-1})
3:     Σ̄_t = G_t Σ_{t-1} G_t^T + R_t
4:     K_t = Σ̄_t H_t^T (H_t Σ̄_t H_t^T + Q_t)^{-1}
5:     µ_t = µ̄_t + K_t (z_t - h(µ̄_t))
6:     Σ_t = (I - K_t H_t) Σ̄_t
7:     return µ_t, Σ_t

Overview: We consider a simple problem in which a robot navigates a 2D environment whose features (landmarks) are known in advance; here the environment has 4 landmarks. We work through 2 timesteps using the batch mode of the Extended Kalman filter: the correction step is executed only once per timestep, incorporating the entire sensor observation in a single go to correct the predicted state. The data association problem is assumed to be solved. Each landmark measurement has two components: the absolute distance between the robot and the landmark, and the heading of the landmark relative to the robot. The four landmarks are l1 = (5, 5), l2 = (-5, 5), l3 = (-5, -5) and l4 = (5, -5), with IDs 1, 2, 3 and 4 respectively.
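The update equations above can be transcribed almost line for line into code. Below is a minimal sketch in Python/NumPy; the function name and the idea of passing the models g, G, h, H in as callables are our own choices, not part of the notes:

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, g, G, h, H, R, Q):
    """One EKF iteration: prediction (lines 2-3), then correction (lines 4-6)."""
    mu_bar = g(u, mu)                                   # line 2: motion model
    Gt = G(u, mu)
    Sigma_bar = Gt @ Sigma @ Gt.T + R                   # line 3: predicted covariance
    Ht = H(mu_bar)
    S = Ht @ Sigma_bar @ Ht.T + Q                       # innovation covariance
    K = Sigma_bar @ Ht.T @ np.linalg.inv(S)             # line 4: Kalman gain
    mu_new = mu_bar + K @ (z - h(mu_bar))               # line 5: corrected state
    Sigma_new = (np.eye(len(mu)) - K @ Ht) @ Sigma_bar  # line 6: corrected covariance
    return mu_new, Sigma_new
```

With linear models this reduces to the ordinary Kalman filter, which makes the routine easy to sanity-check on a toy linear system before plugging in the nonlinear models used below.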

Environmental configuration: The figure shows the initial state of the environment. The red objects are the landmarks, and the blue object at the origin is the initial position of the robot; the pointy end indicates the robot's heading. At time t = 0 the robot is at the origin of the coordinate system, heading along the positive X direction. Heading is measured with respect to the positive direction of the X axis throughout, and the robot orientation lies in (-π, π]. At time t = 0 the uncertainty in the robot position is assumed to be 0; that is, the covariance matrix of the state is a 3x3 zero matrix at initialization:

µ_0 = (0, 0, 0)^T
Σ_0 = 0_{3x3}

Controls: each control consists of a translation T and a rotation φ:

u_1 = (3, π/6)^T
u_2 = (4, 7π/36)^T

The sensor observation vector is initialized with zeros, with one (range, bearing) pair per landmark:

Z_t = (0, 0, 0, 0, 0, 0, 0, 0)^T, where the pairs correspond to l1, l2, l3 and l4 in order.

Control noise(r t ) is a 3x3 matrix initialized as below..1 Control noise R t.2.3 Observation noise Q.1.2.1.2.1.2.1.2 Here, Q t is co-variance of the sensor noise. Let s say the control u 1 is executed. Now out task is to get best estimate of the new state by incorporating all the uncertainties we have modeled. Now, we will extensively go through aspects of applying EKF for localization setting. Estimating new robot state after applying control u 1 : Step 1: Predicting new state of the robot using motion model µ 1 g ( u 1, µ ) µ 1 µ 1 µ x1 µ x µ y1 µ y + µ θ1 µ θ T cos(µ θ + φ) T sin(µ θ + φ) φ 3 cos( + π/6) 2.5981 + 3 sin( + π/6) 1.5 π/6.5236 Step 2: Predicting new co-variance of the robot state Σ 1 Σ 1 G 1 Σ G T 1 + R t G 1 1. 1.5 1. 2.5981 1. 1. 1.5 1..1 1. 2.5981 1. +.2 1. 1.5 2.5981 1..3 Σ 1.1.2 (1).3 3

Equation (1) above is the predicted estimate of the uncertainty in the robot state. As we can see, the uncertainty has increased because of the control covariance R_t; it was zero before the control u_1 was applied.

Step 3: Estimating the Kalman gain (K_t):

K_1 = Σ̄_1 H_1^T (H_1 Σ̄_1 H_1^T + Q_t)^{-1}

In this step we compute the Kalman gain, which signifies which of two estimates to trust more. We fuse two independent estimates of the robot pose: one from the motion model (Step 1), and one obtained by incorporating the sensor observation of the surrounding environment. The disparity between what the robot perceives (z_t) and what it expects to perceive (h(µ̄_t)) helps us get a more precise estimate of the robot state and also reduces its uncertainty. For a landmark m = (m_x, m_y), the expected measurement is

( r )   ( sqrt((m_x - µ̄_x1)^2 + (m_y - µ̄_y1)^2) )
( ψ ) = ( atan2(m_y - µ̄_y1, m_x - µ̄_x1) - µ̄_θ1 )   (2)

To estimate the Kalman gain we need the Jacobian of the observation model, H_t. Since we are performing a batch update, the size of H_t is 2n x 3, where n is the number of landmarks; in our case H_t is 8 x 3. Let's say our robot fires its sensors and is able to perceive only two of the four landmarks, l1 and l2. Hence the first two rows (for l1) and the next two rows (for l2) have non-zero entries, while rows 5, 6, 7, 8 are all zeros. The expected observations for landmarks l1 and l2, computed using equation (2), are

( r1 )   ( 4.2445 )
( ψ1 ) = ( 0.4457 )   (3)

( r2 )   ( 8.3654 )
( ψ2 ) = ( 1.9775 )   (4)

For landmark i, the corresponding 2 x 3 block of the observation Jacobian is

[ -cos(µ̄_θ1 + ψi)      -sin(µ̄_θ1 + ψi)       0 ]
[  sin(µ̄_θ1 + ψi)/ri   -cos(µ̄_θ1 + ψi)/ri   -1 ]

Now we evaluate the Jacobian of the observation model at the expected observations (3), (4). Plugging in the appropriate values,

H_1 = [ -0.5660  -0.8244   0 ]
      [  0.1942  -0.1333  -1 ]
      [  0.8018  -0.5976   0 ]
      [  0.0714   0.0958  -1 ]
      [  0        0        0 ]
      [  0        0        0 ]
      [  0        0        0 ]
      [  0        0        0 ]

Now we have Σ̄_1, H_1 and Q_t. We plug in these matrices and estimate the Kalman gain K_1, whose last four columns are zero since l3 and l4 were not observed:

K_1 = [ -0.2926   0.0236   0.4028  -0.0068   0  0  0  0 ]
      [ -0.5357  -0.0381  -0.3803   0.0353   0  0  0  0 ]
      [ -0.0217  -0.3722   0.0454  -0.3762   0  0  0  0 ]

Step 4: Correcting the robot state. The vector of expected observations is

h(µ̄_1) = (4.2445, 0.4457, 8.3654, 1.9775, 0, 0, 0, 0)^T

and let's say the following are the actual readings of the robot's sensors:

Z_1 = (4.2194, 0.4861, 8.3760, 2.0483, 0, 0, 0, 0)^T

We now take the disparity between the sensor observations Z_1 and the expected observations h(µ̄_1) to correct the predicted state of the robot. The updated estimate of the robot pose is

µ_1 = (2.5826, 1.5364, 0.4799)^T
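Steps 3 and 4 can be reproduced with a few lines of NumPy. This sketch hard-codes the Jacobian evaluated above (variable names are ours); the gain it computes reproduces the entries of K_1 to about three decimals, and the last line applies the Step 4 correction:

```python
import numpy as np

mu1_bar = np.array([2.5981, 1.5, 0.5236])    # predicted pose from Step 1
Sigma1_bar = np.diag([0.1, 0.2, 0.3])        # predicted covariance from Step 2
Q = np.diag([0.1, 0.2] * 4)                  # 8x8 observation noise

H1 = np.zeros((8, 3))                        # rows 5-8 stay zero: l3, l4 unseen
H1[0] = [-0.5660, -0.8244,  0]               # range row for l1
H1[1] = [ 0.1942, -0.1333, -1]               # bearing row for l1
H1[2] = [ 0.8018, -0.5976,  0]               # range row for l2
H1[3] = [ 0.0714,  0.0958, -1]               # bearing row for l2

S = H1 @ Sigma1_bar @ H1.T + Q               # innovation covariance
K1 = Sigma1_bar @ H1.T @ np.linalg.inv(S)    # Kalman gain, 3x8

z1 = np.array([4.2194, 0.4861, 8.3760, 2.0483, 0, 0, 0, 0])  # sensor readings
h1 = np.array([4.2445, 0.4457, 8.3654, 1.9775, 0, 0, 0, 0])  # expected readings
mu1 = mu1_bar + K1 @ (z1 - h1)               # corrected state (Step 4)
```

Note that the columns of K_1 for the unseen landmarks come out exactly zero: S is block-diagonal because the corresponding rows of H_1 are zero, so those measurement slots cannot influence the state.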

Step 5: Updating the robot uncertainty:

Σ_1 = (I - K_1 H_1) Σ̄_1

This is the estimated robot uncertainty after fusing the sensor data:

Σ_1 = [ 0.0507   0.0007   0.0050 ]
      [ 0.0007   0.0645  -0.0008 ]
      [ 0.0050  -0.0008   0.0755 ]
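Step 5 can be checked the same way; here the gain and Jacobian values from the previous steps are hard-coded (a sketch, with our variable names). Each diagonal entry of Σ_1 comes out smaller than the corresponding entry of Σ̄_1 = diag(0.1, 0.2, 0.3), showing the reduction in uncertainty from fusing the two observed landmarks:

```python
import numpy as np

Sigma1_bar = np.diag([0.1, 0.2, 0.3])
H1 = np.zeros((8, 3))
H1[:4] = [[-0.5660, -0.8244,  0],
          [ 0.1942, -0.1333, -1],
          [ 0.8018, -0.5976,  0],
          [ 0.0714,  0.0958, -1]]
K1 = np.zeros((3, 8))
K1[:, :4] = [[-0.2926,  0.0236,  0.4028, -0.0068],
             [-0.5357, -0.0381, -0.3803,  0.0353],
             [-0.0217, -0.3722,  0.0454, -0.3762]]

# Step 5: correct the covariance with the fused measurement information
Sigma1 = (np.eye(3) - K1 @ H1) @ Sigma1_bar
print(np.round(np.diag(Sigma1), 4))   # [0.0507 0.0645 0.0755]
```

The heading variance shrinks the most (0.3 to roughly 0.0755) because both bearing measurements constrain the orientation directly.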

This is the end of the algorithm; we run it each time we apply a control. In the figure above, the red dots are the landmark locations. The robot was initially at (0, 0). After applying u_1 it should have reached (2.5981, 1.5), denoted by the green dot in the second figure of the previous page, but because of the control error it did not reach its destination exactly. Once we estimate the final state using the Kalman filter, the best estimate is the one denoted by the blue dot in the previous figure. This completes one full cycle of the Extended Kalman filter in the localization setting. The new state of the robot is µ_1 = (2.5826, 1.5364, 0.4799)^T and the new covariance is Σ_1. After the control u_2 is executed, Σ_1, µ_1 and u_2 will be fed to the EKF algorithm, and the same procedure will be followed again to estimate the new state. If you find any mistake in the calculations, feel free to shoot an email to dhaivat1994@gmail.com.