Robot Mapping. Least Squares. Cyrill Stachniss

Three Main SLAM Paradigms: Kalman filter, particle filter, and the graph-based least squares approach to SLAM.

Least Squares in General: an approach for computing a solution to an overdetermined system (more equations than unknowns). It minimizes the sum of the squared errors in the equations and is the standard approach to a large class of problems.
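
As a minimal illustration (not part of the original slides), the following Python sketch solves a small overdetermined linear system in the least squares sense with numpy; the matrix and measurement values are made up.

```python
import numpy as np

# Overdetermined linear system A x = z: 4 equations, 2 unknowns
# (the numbers are made up for illustration).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [2.0, 1.0]])
z = np.array([1.1, 0.9, 2.2, 3.0])

# Least squares picks the x that minimizes ||A x - z||^2.
x_ls, residuals, rank, sv = np.linalg.lstsq(A, z, rcond=None)
print("least squares solution:", x_ls)
print("sum of squared errors: ", np.sum((A @ x_ls - z) ** 2))
```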

Least Squares History: the method was developed by Carl Friedrich Gauss in 1795 (he was 18 years old). Its first showcase was predicting the future location of the asteroid Ceres in 1801. Courtesy: Astronomische Nachrichten, 1828.

Problem: given a system described by a set of n observation functions f_i(x). Let x be the state vector, z_i be a measurement of the state x, and f_i(x) be a function which maps x to a predicted measurement. Given n noisy measurements z_1, ..., z_n about the state x, the goal is to estimate the state x which best explains the measurements.

Graphical Explanation: [figure] the unknown state x, the predicted measurements f_i(x), and the real measurements z_i.

Example: x is the position of 3D features, z_i are the coordinates of these 3D features projected onto camera images. Estimate the most likely 3D positions of the features based on the image projections (given the camera poses).

Error Function: the error is typically the difference between the actual and the predicted measurement, e_i(x) = z_i - f_i(x). We assume that the error has zero mean and is normally distributed, i.e., a Gaussian error with information matrix Ω_i. The squared error of a measurement, e_i(x)^T Ω_i e_i(x), depends only on the state and is a scalar.

Goal: Find the Minimum. Find the state x* which minimizes the error given all measurements: x* = argmin_x F(x). Here F(x) = Σ_i e_i(x)^T Ω_i e_i(x) is the global error (a scalar), e_i(x)^T Ω_i e_i(x) are the squared error terms (scalars), and e_i(x) are the error terms (vectors).
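
The following short Python sketch (my addition, not from the slides) evaluates this global error for a set of user-supplied error functions and information matrices; the function and argument names are hypothetical.

```python
import numpy as np

def global_error(x, error_fns, info_mats):
    """F(x) = sum_i e_i(x)^T Omega_i e_i(x)."""
    F = 0.0
    for e_i, omega_i in zip(error_fns, info_mats):
        e = np.atleast_1d(e_i(x))       # error term (vector)
        F += float(e @ omega_i @ e)     # squared error term (scalar)
    return F
```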

Goal: Find the Minimum (cont.). Find the state x* which minimizes the error given all measurements. A general solution is to take the derivative of the global error function and find its zeros. In general this is complex and has no closed-form solution, so numerical approaches are used.

Assumption: a good initial guess is available, and the error functions are smooth in the neighborhood of the (hopefully global) minima. Then we can solve the problem by iterative local linearizations.

Solve via Iterative Local Linearizations: linearize the error terms around the current solution/initial guess, compute the first derivative of the squared error function, set it to zero, and solve the resulting linear system. This yields a new state (that is hopefully closer to the minimum). Iterate.

Linearizing the Error Function: approximate the error functions around an initial guess x via Taylor expansion, e_i(x + Δx) ≈ e_i(x) + J_i Δx. Reminder: J_i = ∂e_i(x)/∂x is the Jacobian, the matrix of all first-order partial derivatives of e_i with respect to the state.
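
In practice the Jacobians are usually derived analytically, but a finite-difference approximation is handy for checking them. A small sketch (my addition, not from the slides); the helper name is hypothetical.

```python
import numpy as np

def numerical_jacobian(e, x, eps=1e-6):
    """Finite-difference approximation of J = d e(x) / d x."""
    e0 = np.atleast_1d(e(x))
    J = np.zeros((e0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.atleast_1d(e(x + dx)) - e0) / eps
    return J
```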

Squared Error: with the previous linearization, we can fix x and carry out the minimization in the increments Δx. We substitute the Taylor expansion into the squared error terms: e_i(x + Δx)^T Ω_i e_i(x + Δx) ≈ (e_i + J_i Δx)^T Ω_i (e_i + J_i Δx), writing e_i for e_i(x).

Squared Error (cont.): expanding yields e_i^T Ω_i e_i + e_i^T Ω_i J_i Δx + Δx^T J_i^T Ω_i e_i + Δx^T J_i^T Ω_i J_i Δx. All summands are scalars, so the transposition has no effect, and by grouping similar terms we obtain e_i(x + Δx)^T Ω_i e_i(x + Δx) ≈ c_i + 2 b_i^T Δx + Δx^T H_i Δx, with c_i = e_i^T Ω_i e_i, b_i = J_i^T Ω_i e_i, and H_i = J_i^T Ω_i J_i.

Global Error: the global error is the sum of the squared error terms corresponding to the individual measurements. We form a new expression which approximates the global error in the neighborhood of the current solution: F(x + Δx) ≈ Σ_i (c_i + 2 b_i^T Δx + Δx^T H_i Δx).

Global Error (cont.): F(x + Δx) ≈ c + 2 b^T Δx + Δx^T H Δx, with c = Σ_i c_i, b = Σ_i b_i = Σ_i J_i^T Ω_i e_i, and H = Σ_i H_i = Σ_i J_i^T Ω_i J_i.

Quadratic Form: we can write the approximated global error as a quadratic form in Δx. We need to compute the derivative of F(x + Δx) w.r.t. Δx (given x).

Deriving a Quadratic Form: assume a quadratic form f(u) = u^T A u + u^T b. Its first derivative is ∂f/∂u = (A + A^T) u + b. See: The Matrix Cookbook, Section 2.2.4.
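
A quick numerical sanity check of this identity (my addition, with made-up random values): compare a finite-difference gradient against the closed-form (A + A^T) u + b.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n))
b = rng.normal(size=n)
u = rng.normal(size=n)

f = lambda v: v @ A @ v + b @ v          # quadratic form f(u) = u^T A u + u^T b

eps = 1e-6
grad_fd = np.array([(f(u + eps * np.eye(n)[j]) - f(u)) / eps for j in range(n)])
grad_cf = (A + A.T) @ u + b              # closed-form derivative

print(np.max(np.abs(grad_fd - grad_cf)))  # should be tiny (around 1e-5 or less)
```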

Quadratic Form (cont.): we can write the approximated global error as a quadratic form in Δx, F(x + Δx) ≈ c + 2 b^T Δx + Δx^T H Δx. Since H is symmetric, the derivative of the approximated F w.r.t. Δx is then ∂F/∂Δx ≈ 2 b + 2 H Δx.

Minimizing the Quadratic Form: the derivative of F(x + Δx) is 2 b + 2 H Δx. Setting it to zero leads to the linear system H Δx = -b. The solution for the increment is Δx* = -H^{-1} b.

Gauss-Newton Solution: iterate the following steps. Linearize around x and compute J_i and e_i for each measurement; compute the terms b = Σ_i J_i^T Ω_i e_i and H = Σ_i J_i^T Ω_i J_i for the linear system; solve the linear system H Δx = -b; update the state, x ← x + Δx; repeat until convergence.
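
A minimal Gauss-Newton loop sketched in Python (my addition, not from the slides). It assumes per-measurement error functions, information matrices, and Jacobian functions are supplied; all names are hypothetical, and a dense solver stands in for the sparse factorizations discussed below.

```python
import numpy as np

def gauss_newton(x0, error_fns, info_mats, jacobian_fns, n_iter=20, tol=1e-9):
    """Iteratively minimize F(x) = sum_i e_i(x)^T Omega_i e_i(x)."""
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        H = np.zeros((x.size, x.size))
        b = np.zeros(x.size)
        # Linearize every error term around the current estimate.
        for e_i, omega_i, J_i in zip(error_fns, info_mats, jacobian_fns):
            e = np.atleast_1d(e_i(x))
            J = np.atleast_2d(J_i(x))
            H += J.T @ omega_i @ J
            b += J.T @ omega_i @ e
        dx = np.linalg.solve(H, -b)    # solve H dx = -b
        x += dx                        # update the state
        if np.linalg.norm(dx) < tol:   # stop when the increment is tiny
            break
    return x
```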

Example: Odometry Calibration. Odometry measurements contain a systematic error that we want to eliminate through calibration. Assumption: ground-truth odometry is available, e.g., from motion capture, scan matching, or a SLAM system.

Example: Odometry Calibration (cont.). There is a function f_x(u_i) which, given some bias parameters x, returns an unbiased (corrected) odometry for the reading u_i, e.g., by applying a 3x3 correction matrix built from the parameters to u_i. To obtain the correction function, we need to find the parameters x.

Odometry Calibration (cont.): the state vector is the vector of bias parameters x. The error function is e_i(x) = u_i* - f_x(u_i), where u_i* is the ground-truth motion. Its derivative does not depend on x. Why, and what are the consequences? Because f_x is linear in the parameters, e is linear in x and the Jacobian is constant, so there is no need to iterate: a single Gauss-Newton step yields the solution.
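
Under the linear correction model mentioned above (a 3x3 matrix of parameters, an assumption on my part since the transcription lost the formula), the calibration reduces to ordinary linear least squares. A sketch with made-up readings:

```python
import numpy as np

# Hypothetical odometry readings u_i = (dx, dy, dtheta) and ground truth u_i*.
u    = np.array([[1.00,  0.02, 0.10],
                 [0.98, -0.01, 0.12],
                 [1.02,  0.00, 0.09],
                 [0.50,  0.01, 0.05]])
u_gt = np.array([[0.97,  0.00, 0.11],
                 [0.96,  0.01, 0.13],
                 [0.99, -0.01, 0.10],
                 [0.49,  0.00, 0.06]])

# Model: corrected odometry = X @ u_i with a 3x3 matrix X (9 parameters).
# e_i = u_i* - X @ u_i is linear in X, so a single solve suffices.
rows, rhs = [], []
for ui, gi in zip(u, u_gt):
    for r in range(3):                 # one equation per component of u_i*
        row = np.zeros(9)
        row[3 * r:3 * r + 3] = ui      # row-major vec(X) layout
        rows.append(row)
        rhs.append(gi[r])

p, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
X = p.reshape(3, 3)
print("calibration matrix:\n", X)      # close to identity for good odometry
```

With three linearly independent readings one already gets nine equations for the nine parameters; additional readings make the system overdetermined.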

Questions: What do the parameters look like if the odometry is perfect? How many measurements (at least) are needed to find a solution to the calibration problem? H is symmetric; why? How does the structure of the measurement function affect the structure of H?

How to Efficiently Solve the Linear System? The linear system H Δx = -b can be solved by matrix inversion (in theory). In practice: Cholesky factorization, QR decomposition, or iterative methods such as conjugate gradients (for large systems).

Cholesky Decomposition for Solving a Linear System: let A be symmetric and positive definite and A x = b the system to solve. The Cholesky decomposition leads to A = L L^T, with L being a lower triangular matrix. Solve first L y = b and then L^T x = y.
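
A small Python illustration of the two triangular solves (my addition; the matrix values are made up), using numpy's Cholesky factorization and scipy's triangular solver:

```python
import numpy as np
from scipy.linalg import solve_triangular

# Symmetric positive definite system A x = b (made-up numbers).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

L = np.linalg.cholesky(A)                    # A = L L^T, L lower triangular
y = solve_triangular(L, b, lower=True)       # forward substitution: L y = b
x = solve_triangular(L.T, y, lower=False)    # back substitution:   L^T x = y

print(x, np.allclose(A @ x, b))              # solution and a consistency check
```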

Gauss-Newton Summary: a method to minimize a squared error. Start with an initial guess and linearize the individual error functions; this leads to a quadratic form. One obtains a linear system by setting its derivative to zero; solving the linear system leads to a state update. Iterate.

Relation to Probabilistic State Estimation: so far, we minimized an error function. How does this relate to state estimation in the probabilistic sense?

General State Estimation: Bayes' rule, independence, and Markov assumptions allow us to write the posterior as p(x_{0:T}, m | z_{1:T}, u_{1:T}) ∝ p(x_0) Π_t p(x_t | x_{t-1}, u_t) Π_t p(z_t | x_t, m).

Log Likelihood: written as the log likelihood, this leads to log p(x_{0:T}, m | z_{1:T}, u_{1:T}) = const + log p(x_0) + Σ_t log p(x_t | x_{t-1}, u_t) + Σ_t log p(z_t | x_t, m).

Gaussian Assumption: we assume Gaussian distributions for the prior, the motion model, and the observation model.

Log of a Gaussian: the log likelihood of a Gaussian N(x; μ, Σ) is log N(x; μ, Σ) = const - (1/2) (x - μ)^T Σ^{-1} (x - μ).

Error Function as Exponent: the log likelihood of a Gaussian is, up to a constant, equivalent to the error functions used before: log N(x; μ, Σ) = const - (1/2) e^T Ω e, with the error e = x - μ and the information matrix Ω = Σ^{-1}.

Log Likelihood with Error Terms: assuming Gaussian distributions, the log likelihood becomes log p(x_{0:T}, m | z_{1:T}, u_{1:T}) = const - (1/2) e_p^T Ω_p e_p - (1/2) Σ_t e_{u_t}^T Ω_{u_t} e_{u_t} - (1/2) Σ_t e_{z_t}^T Ω_{z_t} e_{z_t}, with error terms e_p for the prior, e_{u_t} for the motions, and e_{z_t} for the measurements.

Maximizing the Log Likelihood: assuming Gaussian distributions, maximizing the log likelihood leads to minimizing the sum of the squared error terms: argmax_x log p(x_{0:T}, m | z_{1:T}, u_{1:T}) = argmin_x ( e_p^T Ω_p e_p + Σ_t e_{u_t}^T Ω_{u_t} e_{u_t} + Σ_t e_{z_t}^T Ω_{z_t} e_{z_t} ).

Minimizing the Squared Error is Equivalent to Maximizing the Log Likelihood of Independent Gaussian Distributions, with individual error terms for the motions, measurements, and prior.
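
For reference, a compact LaTeX restatement of the equivalence argued on the preceding slides (using my notation for the error terms, since the transcription lost the original symbols):

```latex
\begin{align*}
x^* &= \operatorname*{argmax}_x \; \log p(x_{0:T}, m \mid z_{1:T}, u_{1:T}) \\
    &= \operatorname*{argmax}_x \Big( \mathrm{const}
        - \tfrac{1}{2}\, e_p^\top \Omega_p\, e_p
        - \tfrac{1}{2} \sum_t e_{u_t}^\top \Omega_{u_t}\, e_{u_t}
        - \tfrac{1}{2} \sum_t e_{z_t}^\top \Omega_{z_t}\, e_{z_t} \Big) \\
    &= \operatorname*{argmin}_x \Big( e_p^\top \Omega_p\, e_p
        + \sum_t e_{u_t}^\top \Omega_{u_t}\, e_{u_t}
        + \sum_t e_{z_t}^\top \Omega_{z_t}\, e_{z_t} \Big)
\end{align*}
```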

Summary: a technique to minimize squared error functions. Gauss-Newton is an iterative approach for non-linear problems that uses linearization (an approximation!). It is equivalent to maximizing the log likelihood of independent Gaussians and is a popular method in many disciplines.

Literature. Least Squares and Gauss-Newton: basically every textbook on numerical calculus or optimization; Wikipedia (for a brief summary). Relation to Probability Theory: Thrun et al., Probabilistic Robotics, Chapter 11.4.