Mobile Robots Localization


Mobile Robots Localization, Institute for Software Technology

Today's Agenda
- Motivation for Localization
- Odometry
- Odometry Calibration
- Error Model

Robotics is Easy
[Diagram: the see-think-act cycle. Raw sensor data feeds perception (information extraction) and modelling (environment/domain model); cognition, reasoning, and planning (path planning) produce a path; navigation and path execution turn it into actuator commands acting on the environment/world.]

Motivation
Self-localization is the most fundamental problem in providing a mobile robot with autonomous capabilities. [Cox 1991]
[Siegwart, Nourbakhsh, Scaramuzza, 2011, MIT Press]

RoboCup Four-Legged League 5


Geometry
- Again we are looking for a transformation between a reference frame (the environment) and a robot frame.
- We use the following terms interchangeably: transformation, pose (position + orientation), location.
- 2D space: 2D position + orientation (1 angle)
- 3D space: 3D position + orientation (quaternion or Euler angles)

Taxonomy of Localization Problems
- Position tracking: the initial pose is known; only track the change of pose.
- Global localization: the initial pose is unknown; the pose has to be estimated from scratch.
- Kidnapped robot problem: the robot is teleported without being told.
- Passive/active localization: passive: the robot estimates its pose on the fly; active: the robot can take actions to improve its pose estimate.
- Dynamic versus static environment: the environment may change over time.
- Single robot versus multiple robots.

Why is Localization Hard?
- In general the pose cannot be measured directly; it must be estimated.
- Integration of local motions; matching against landmarks.
- Landmarks or observations can be ambiguous.
- Sensors have systematic and non-systematic errors.
- Action execution is not completely deterministic.

General Concept
- Global position information (e.g. GPS) gives a direct position update.
- Robot-mounted local motion sensors (e.g. encoders, IMU) give a prediction of the position change, e.g. odometry.
- Environment knowledge (e.g. a map, landmarks) combined with local observations (e.g. vision, laser scanner) gives a position update.

Dead Reckoning/Odometry
- A simple solution for position tracking or pose prediction.
- Odometry: in general, for wheeled robots, use wheel encoders to estimate the change of pose.
- Dead reckoning: a historic term, also used in marine applications; it means combining the last known pose with heading, speed, and time; in robotics, use a heading sensor, e.g. a compass or IMU. [Wikipedia]

Ideal and Continuous Time
We assume:
- a differential drive robot,
- functions over time for the robot's linear and rotational velocity, v(t), ω(t),
- ideal execution,
- continuous time.
Then we can simply integrate the functions in two steps to get the pose:

θ(t) = ∫_0^t ω(τ) dτ + θ_0
x(t) = ∫_0^t v(τ) cos θ(τ) dτ + x_0
y(t) = ∫_0^t v(τ) sin θ(τ) dτ + y_0

Example: Constant Velocities
Constant functions v(t) = v, ω(t) = ω and x_0 = y_0 = θ_0 = 0 lead to a circle with radius v/ω:

x(t) = (v/ω) sin(ωt)
y(t) = v/ω - (v/ω) cos(ωt)
θ(t) = ωt
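As a quick sanity check, the closed-form arc above can be evaluated numerically. A minimal Python sketch (the function name and the straight-line guard for ω ≈ 0 are mine, not from the slides):

```python
import math

def pose_constant_velocity(v, w, t):
    """Pose (x, y, theta) after time t for constant velocities v and w,
    starting from x0 = y0 = theta0 = 0; the path is a circle of radius v/w."""
    if abs(w) < 1e-12:           # degenerate case: straight line
        return v * t, 0.0, 0.0
    x = (v / w) * math.sin(w * t)
    y = (v / w) - (v / w) * math.cos(w * t)
    theta = w * t
    return x, y, theta
```

With v = 1 m/s and ω = 1 rad/s the radius is 1 m; after a quarter period (t = π/2) the robot sits at (1, 1) with heading π/2.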

Differential Drive Odometry
- In reality we cannot rely on ideal motion execution; the solution is to measure the motion.
- Equip both wheels with a wheel encoder.
- The odometry sensor delivers the traveled distances of the left and right wheel during the last sampling period: Δs_l, Δs_r.
- Measure the distances over a small time interval and assume the robot drives on a circular arc during the interval.

Discrete Time
If we have discrete time, we assume:
- equidistant time steps t_0 = 0, t_{i+1} = t_i + ΔT,
- piecewise constant velocity functions, i.e. the robot travels on a constant arc between time steps.
We discretely update the poses:

x_{i+1} = x_i - (v/ω) sin θ_i + (v/ω) sin(θ_i + ωΔT)
y_{i+1} = y_i + (v/ω) cos θ_i - (v/ω) cos(θ_i + ωΔT)
θ_{i+1} = θ_i + ωΔT
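The discrete update can be written as a one-step function. A small Python sketch (names are mine); for constant v and ω the arc update is exact, so repeated steps reproduce the closed-form circle:

```python
import math

def step(x, y, theta, v, w, dT):
    """One discrete pose update, assuming the robot travels on a constant
    arc with linear velocity v and angular velocity w during dT."""
    if abs(w) < 1e-12:           # straight-line limit of the arc equations
        return x + v * dT * math.cos(theta), y + v * dT * math.sin(theta), theta
    x_new = x - (v / w) * math.sin(theta) + (v / w) * math.sin(theta + w * dT)
    y_new = y + (v / w) * math.cos(theta) - (v / w) * math.cos(theta + w * dT)
    return x_new, y_new, theta + w * dT
```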

Motion Model I
[Figure: the robot moves on a circular arc about the ICR, from (x_0, y_0) with θ_0 = 0 to (x_1, y_1); Δx and Δy denote the position change.]

Motion Model II
- The average travel distance of the robot is Δs = (Δs_l + Δs_r) / 2.
- The change of orientation is Δθ = (Δs_r - Δs_l) / b.
- If the distances are small, we can assume Δd ≈ Δs.
- Then the pose update is:

x_t = f(x_{t-1}, u_t) = [x_{t-1}, y_{t-1}, θ_{t-1}]^T + [Δs cos(θ_{t-1} + Δθ/2), Δs sin(θ_{t-1} + Δθ/2), Δθ]^T
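The wheel-encoder pose update, as a short Python sketch (function name is illustrative; b is the wheelbase):

```python
import math

def odometry_update(x, y, theta, ds_l, ds_r, b):
    """Pose update from the measured left/right wheel travel of one
    sampling period, using the midpoint orientation theta + dtheta/2."""
    ds = (ds_l + ds_r) / 2.0          # average travel distance
    dtheta = (ds_r - ds_l) / b        # change of orientation
    x += ds * math.cos(theta + dtheta / 2.0)
    y += ds * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta
```

Driving both wheels equally moves the robot straight ahead; opposite wheel travel rotates it on the spot.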

Relation to Transformations
- Odometry can also be represented as a transformation from a fixed odometry coordinate system in the world to a robot-centric coordinate system.
- ROS provides odometry as a 3D transformation between the frames odom and base_link; the rotation is represented as a quaternion, and usually z, roll, and pitch are assumed to be 0.
- 2D example:

A_{t-1,t} =
[ cos Δθ  -sin Δθ  Δs cos(θ_{t-1} + Δθ/2) ]
[ sin Δθ   cos Δθ  Δs sin(θ_{t-1} + Δθ/2) ]
[ 0        0       1                       ]

- Discrete transformations can easily be combined: A_{t-2,t} = A_{t-2,t-1} A_{t-1,t}
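A sketch of the 2D homogeneous transform and its composition, following the slide's matrix (plain Python lists; helper names are mine):

```python
import math

def odom_transform(ds, dtheta, theta_prev):
    """Homogeneous 3x3 transform A_{t-1,t} for one odometry step,
    as given on the slide."""
    c, s = math.cos(dtheta), math.sin(dtheta)
    tx = ds * math.cos(theta_prev + dtheta / 2.0)
    ty = ds * math.sin(theta_prev + dtheta / 2.0)
    return [[c, -s, tx],
            [s,  c, ty],
            [0.0, 0.0, 1.0]]

def compose(A, B):
    """Matrix product A * B, e.g. A_{t-2,t} = compose(A_{t-2,t-1}, A_{t-1,t})."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
```

Composing two pure 1 m translations yields a 2 m translation, as expected of the chained transforms.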

Integration of Odometry
- Odometry information is only available at distinct time steps t_0, t_1, t_2, ...
- We integrate the information to estimate the pose.
- We make an additional error due to the integration.
- Errors accumulate without bound.

Systematic Errors in Odometry
- Can in general be determined and corrected.
- Sources:
  - unequal wheel diameters
  - actual diameter differs from nominal diameter
  - actual wheelbase differs from nominal wheelbase
  - misaligned wheels
  - finite encoder resolution
  - finite encoder sampling rate

Non-Systematic Errors in Odometry
- Stochastic in nature; cannot be corrected, but can be modelled probabilistically.
- Sources:
  - travel over uneven floors
  - travel over unexpected obstacles on the floor
  - wheel slippage (slippery floor, too high acceleration, internal forces, non-point contact of the wheel, e.g. tracks instead of wheels)

Reasons for Odometry Errors
[Figure: the ideal case compared with different wheel diameters, a bump, carpet, and many more.]

Calibration of Odometry
- Deals with systematic errors.
- Determine correction factors to minimize the error.
- Classical approach: UMBmark (square path experiment) [Borenstein, Feng 1996], mainly for differential drive robots.

Two Major Types of Errors: Type A
- Uncertain wheelbase: E_b = b_actual / b_nominal
- Leads to an incorrect estimation of orientation.

Two Major Types of Errors: Type B
- Unequal wheel diameters: E_d = D_R / D_L
- Leads to an incremental orientation error; only the relative, not the absolute, wheel diameter matters.

UMBmark I
A standard test to calibrate odometry [Borenstein, Feng 1996]:
1. Determine the absolute initial position (x_0, y_0) of the robot; reset the odometry.
2. Drive the robot through a 4 m x 4 m square path in CW direction:
   a) stop after each 4 m straight line,
   b) turn 90° on the spot,
   c) drive slowly.
3. Record the absolute end position (x_abs, y_abs) and the odometry (x_4, y_4).
4. Repeat steps 1-3 n times, usually n = 5.
Then repeat the whole procedure in CCW direction.

UMBmark II
- The procedure yields n position differences for each of CW and CCW:
  ε_x = x_abs - x_odo
  ε_y = y_abs - y_odo
- The repetition limits the influence of non-systematic errors on the procedure.
- The positions usually form two clusters with centers of gravity:
  x_{c.g.,CW/CCW} = (1/n) Σ_{i=1..n} ε_{x,i,CW/CCW}
  y_{c.g.,CW/CCW} = (1/n) Σ_{i=1..n} ε_{y,i,CW/CCW}
[Borenstein, Feng 1996]

Results for Type A Errors Only
- Consider first a robot affected only by type A errors.
- Assume the initial position (0, 0); ε_x, ε_y then simply become x_{c.g.} and y_{c.g.}.
- Approximate trigonometry for small angles: L sin γ ≈ Lγ, L cos γ ≈ L.
- CW:  x_4 ≈ -2Lα, y_4 ≈ -2Lα
- CCW: x_4 ≈ -2Lα, y_4 ≈ 2Lα
[Borenstein, Feng 1996]

Results for Type B Errors Only
- Consider first a robot affected only by type B errors.
- CW:  x_4 ≈ -2Lβ, y_4 ≈ -2Lβ
- CCW: x_4 ≈ 2Lβ, y_4 ≈ -2Lβ
[Borenstein, Feng 1996]

Determine the Angles
Superimpose the results for the individual errors:
- x, CW:  -2Lα - 2Lβ = -2L(α + β) = x_{c.g.,CW}
- x, CCW: -2Lα + 2Lβ = -2L(α - β) = x_{c.g.,CCW}
- y, CW:  -2Lα - 2Lβ = -2L(α + β) = y_{c.g.,CW}
- y, CCW:  2Lα - 2Lβ =  2L(α - β) = y_{c.g.,CCW}
Calculate the angles by combining:
β = (x_{c.g.,CW} - x_{c.g.,CCW}) / (-4L)
β = (y_{c.g.,CW} + y_{c.g.,CCW}) / (-4L)
α = (x_{c.g.,CW} + x_{c.g.,CCW}) / (-4L)
α = (y_{c.g.,CW} - y_{c.g.,CCW}) / (-4L)
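Assuming the sign conventions of [Borenstein, Feng 1996] (x_{c.g.,CW} = -2L(α + β), etc.), the two estimates of each angle can be computed and averaged. A Python sketch (the function name and the averaging of the x- and y-based estimates are my choices):

```python
def umbmark_angles(x_cw, x_ccw, y_cw, y_ccw, L=4.0):
    """alpha (type A, wheelbase) and beta (type B, wheel diameter), in
    radians, from the CW/CCW cluster centers of an L x L UMBmark run."""
    alpha_x = (x_cw + x_ccw) / (-4.0 * L)   # x-based estimate of alpha
    alpha_y = (y_cw - y_ccw) / (-4.0 * L)   # y-based estimate of alpha
    beta_x  = (x_cw - x_ccw) / (-4.0 * L)   # x-based estimate of beta
    beta_y  = (y_cw + y_ccw) / (-4.0 * L)   # y-based estimate of beta
    return (alpha_x + alpha_y) / 2.0, (beta_x + beta_y) / 2.0
```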

Determine the Correction Factors for E_d
- Determine the radius of the curve the robot actually travels on the straight segments: R = (L/2) / sin(β/2)
- Determine the actual ratio of wheel diameters: E_d = D_R / D_L = (R + b/2) / (R - b/2)
- Correct the odometry: assume the average wheel diameter D_a = (D_L + D_R) / 2 stays constant; solving a system of two equations gives
  D_L = 2 / (E_d + 1) · D_a
  D_R = 2 / ((1/E_d) + 1) · D_a
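The diameter correction can be sketched as follows (hypothetical function names; the invariants are that the corrected diameters keep the average D_a and have the ratio E_d):

```python
import math

def diameter_correction(beta, b, D_a, L=4.0):
    """Corrected left/right wheel diameters from the type B angle beta;
    b is the wheelbase, D_a the (constant) average diameter."""
    R = (L / 2.0) / math.sin(beta / 2.0)   # radius of the curved "straight" segment
    E_d = (R + b / 2.0) / (R - b / 2.0)    # actual diameter ratio D_R / D_L
    D_L = 2.0 / (E_d + 1.0) * D_a
    D_R = 2.0 / (1.0 / E_d + 1.0) * D_a
    return D_L, D_R
```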

Determine the Correction Factors for E_b
- The wheelbase is inversely proportional to the amount of rotation, so we can state the proportion b_actual / (π/2) = b_nominal / (π/2 - α).
- The actual ratio is E_b = (π/2) / (π/2 - α).
- Correct the odometry: b_actual = (π/2) / (π/2 - α) · b_nominal
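The wheelbase correction is a one-liner; a sketch (names are mine):

```python
import math

def wheelbase_correction(alpha, b_nominal):
    """Corrected wheelbase b_actual = E_b * b_nominal with
    E_b = (pi/2) / (pi/2 - alpha)."""
    E_b = (math.pi / 2.0) / (math.pi / 2.0 - alpha)
    return E_b * b_nominal
```

For α = 0 the wheelbase is unchanged; a positive α (robot under-rotates) yields a larger corrected wheelbase.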

Description of Random Errors
- Modeling the non-systematic (random) error is important for consistent perception.
- Simple solution: treat the observation/estimation m as a normally distributed random variable, m ~ N(μ_m, σ_m):

PDF(m) = 1 / (σ_m √(2π)) · e^(-(m - μ_m)² / (2σ_m²))

P(a < m ≤ b) = ∫_a^b PDF(x) dx

- If the probability density function (PDF) is not Gaussian, other means have to be used, e.g. sample-based representations.
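The interval probability above can be evaluated in closed form with the Gaussian CDF (via the error function). A small Python sketch with illustrative names:

```python
import math

def gaussian_pdf(m, mu, sigma):
    """Probability density of N(mu, sigma^2) at m."""
    return math.exp(-(m - mu) ** 2 / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def prob_interval(a, b, mu, sigma):
    """P(a < m <= b) for m ~ N(mu, sigma^2), using the closed-form CDF."""
    def cdf(x):
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    return cdf(b) - cdf(a)
```

About 68.3 % of the probability mass lies within one standard deviation of the mean.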

Nonlinear Error Propagation
- Question: how is the error propagated by a nonlinear function f?
- Assume vectors of normally distributed random variables, x ~ N(μ_x, Σ_x), y ~ N(μ_y, Σ_y), with Σ a covariance matrix.
- Assume the function f is given as a vector of functions y_i = f_i(x).
- Use the Taylor approximation of f.
[Siegwart, Nourbakhsh, Scaramuzza, 2011, MIT Press]

Nonlinear Error Propagation
Taylor approximation of the error propagation:

μ_y = f(μ_x)
Σ_y = F_x Σ_x F_x^T

with the Jacobian matrix

F_x = ∂f/∂x =
[ ∂f_1/∂x_1 ... ∂f_1/∂x_n ]
[    ...          ...      ]
[ ∂f_m/∂x_1 ... ∂f_m/∂x_n ]

Differential Drive
The state is the pose vector x = [x, y, θ]^T; the update is x_t = x_{t-1} + [Δx, Δy, Δθ]^T = f(x_{t-1}, u_t).
[Siegwart, Nourbakhsh, Scaramuzza, 2011, MIT Press]

Differential Drive Odometry
- The odometry sensor delivers the traveled distances of the left and right wheel during the last sampling period: Δs_l, Δs_r.
- The average travel distance of the robot is Δs = (Δs_l + Δs_r) / 2.
- The change of orientation is Δθ = (Δs_r - Δs_l) / b.
- The pose update is:

x_t = f(x_{t-1}, u_t) = [x_{t-1}, y_{t-1}, θ_{t-1}]^T + [Δs cos(θ_{t-1} + Δθ/2), Δs sin(θ_{t-1} + Δθ/2), Δθ]^T

Error Model I
- We treat the pose x as a normally distributed vector of random variables, x ~ N(μ_x, Σ_x).
- We assume a covariance matrix to represent the non-systematic errors of Δs_l, Δs_r:

Σ_Δ = [ k_r |Δs_r|   0          ]
      [ 0            k_l |Δs_l| ]

- The errors of Δs_l and Δs_r are assumed to be independent; the error is proportional to the traveled distance; k_r and k_l are error constants.
- Using the Taylor approximation we get the following error propagation:

Σ_{x_t} = F_{x_{t-1}} Σ_{x_{t-1}} F_{x_{t-1}}^T + F_{Δs} Σ_Δ F_{Δs}^T

Error Model II
We use the following Jacobian matrices (writing θ for θ_{t-1}):

F_{x_{t-1}} = ∂f/∂x_{t-1} =
[ 1  0  -Δs sin(θ + Δθ/2) ]
[ 0  1   Δs cos(θ + Δθ/2) ]
[ 0  0   1                ]

F_{Δs} = [ ∂f/∂Δs_r  ∂f/∂Δs_l ] =
[ (1/2) cos(θ + Δθ/2) - (Δs/2b) sin(θ + Δθ/2)   (1/2) cos(θ + Δθ/2) + (Δs/2b) sin(θ + Δθ/2) ]
[ (1/2) sin(θ + Δθ/2) + (Δs/2b) cos(θ + Δθ/2)   (1/2) sin(θ + Δθ/2) - (Δs/2b) cos(θ + Δθ/2) ]
[ 1/b                                           -1/b                                         ]
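Putting the Jacobians together, one covariance propagation step can be sketched in plain Python (3x3 lists, no external libraries; function and helper names are mine):

```python
import math

def propagate_covariance(theta, ds_l, ds_r, b, P, k_l, k_r):
    """One odometry step of Sigma_x' = F_x P F_x^T + F_ds Sigma_d F_ds^T.
    Columns of F_ds are ordered (ds_r, ds_l) to match Sigma_d."""
    ds = (ds_l + ds_r) / 2.0
    dth = (ds_r - ds_l) / b
    a = theta + dth / 2.0
    c, s = math.cos(a), math.sin(a)
    F_x = [[1.0, 0.0, -ds * s],
           [0.0, 1.0,  ds * c],
           [0.0, 0.0,  1.0]]
    F_d = [[0.5 * c - ds / (2.0 * b) * s, 0.5 * c + ds / (2.0 * b) * s],
           [0.5 * s + ds / (2.0 * b) * c, 0.5 * s - ds / (2.0 * b) * c],
           [1.0 / b,                     -1.0 / b]]
    S_d = [[k_r * abs(ds_r), 0.0],
           [0.0, k_l * abs(ds_l)]]

    def mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def tr(A):
        return [list(r) for r in zip(*A)]

    t1 = mul(mul(F_x, P), tr(F_x))
    t2 = mul(mul(F_d, S_d), tr(F_d))
    return [[t1[i][j] + t2[i][j] for j in range(3)] for i in range(3)]
```

Starting from zero covariance, every straight-line step injects uncertainty; the result stays symmetric, as a covariance must.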

Growth of Uncertainty: Straight Line
[Siegwart, Nourbakhsh, Scaramuzza, 2011, MIT Press]
Errors perpendicular to the direction of movement grow much faster!

Growth of Uncertainty: Circle
[Siegwart, Nourbakhsh, Scaramuzza, 2011, MIT Press]
The error ellipse does not remain perpendicular to the direction of movement!

Non-Gaussian Error Model
[Fox, Thrun, Burgard, Dellaert, 2000]
Errors are not necessarily shaped like ellipses! Use non-parametric representations, e.g. particles.

Questions? Thank you!