CIS 390 Fall 2016 Robotics: Planning and Perception Final Review Questions

December 14, 2016

Questions

Throughout the following questions we will assume that

- $x_t$ is the state vector at time $t$,
- $z_t$ is the measurement at time $t$,
- $p(x_t \mid x_{t-1}, u_t)$ is the conditional probability that state $x_{t-1}$ will go to state $x_t$ when moved by $u_t$,
- $p(z_t \mid x_t)$ is the probability that we obtain measurement $z_t$ by observing state $x_t$,
- $p(x_t \mid z_{1:t})$ is the sought posterior probability that the state is $x_t$ after all measurements from time 1 to $t$.

Bayes filtering

1. Name the Bayes filtering step accomplished by each of these formulae:

   $p(x_t \mid z_{1:t}) = \dfrac{p(z_t \mid x_t)\, p(x_t \mid z_{1:t-1})}{p(z_t)}$ — Update

   $p(x_{t+1} \mid z_{1:t}) = \sum_{x_t} p(x_{t+1} \mid x_t, u_{t+1})\, p(x_t \mid z_{1:t})$ — Prediction

2. Explain why the last step involves an integration/summation over $x_t$. In which case can it be written simply as the product $p(x_{t+1} \mid x_t, u_{t+1})\, p(x_t \mid z_{1:t})$?

   Because $x_t$ is a random variable (the state is known only through its pdf); the summation marginalizes out $x_t$. If the true value of $x_t$ is known, it can be written as a product.

3. A robot vacuum moves in a house with 4 rooms, laid out as

   R1 R2
   R3 R4

   We simplify its location to a discrete state taking 4 values, $R_i$ for the 4 rooms respectively. The prior location probabilities $p(x_0 = R_i) = 1/4$ are uniform. The robot measures its location by recognizing a landmark in the room: the probability that the robot is in the room where it thinks it is is 0.6, and the probability that it confuses the room is 0.4. The robot moves in 4 directions (North, South, East, West), and when it hits a wall it bounces. The robot always moves to the correct position with probability 1, unless it bounces on a wall, in which case it stays in the same room or lands in the room in the opposite direction, with probability 0.5 each. For example, if the robot tries to move West from $R_3$, it will land in $R_4$ with probability 0.5 and stay in $R_3$ with probability 0.5. Compute the 4 tables after each step.

   Initial table: 1/4, 1/4, 1/4, 1/4.

   After measuring $z_1 = R_2$: 2/9, 1/3, 2/9, 2/9.

   After moving North, using
   $p'(R_1) = 0.5\,p(R_1) + p(R_3)$, $p'(R_2) = 0.5\,p(R_2) + p(R_4)$, $p'(R_3) = 0.5\,p(R_1)$, $p'(R_4) = 0.5\,p(R_2)$:
   1/3, 7/18, 1/9, 1/6.

   After measuring $z_2 = R_4$: 12/39, 14/39, 4/39, 9/39.

   After moving West: 20/39, 6/39, 11/39, 2/39.
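As a sanity check, here is a minimal Python sketch (ours, not part of the original handout) that reproduces the four tables; room indices 0–3 stand for $R_1$–$R_4$, and the 0.6/0.4 measurement weights are applied unnormalized, exactly as in the computation above.

```python
import numpy as np

# Measurement update: weight 0.6 if the measured room matches the state,
# 0.4 otherwise, then normalize.
def update(belief, z):
    likelihood = np.where(np.arange(4) == z, 0.6, 0.4)
    belief = likelihood * belief
    return belief / belief.sum()

# Motion model for North: R3 -> R1 and R4 -> R2 with probability 1; R1 and R2
# bounce on the wall, staying put or bouncing South with probability 0.5 each.
T_north = np.array([
    [0.5, 0.0, 1.0, 0.0],   # p'(R1) = 0.5 p(R1) + p(R3)
    [0.0, 0.5, 0.0, 1.0],   # p'(R2) = 0.5 p(R2) + p(R4)
    [0.5, 0.0, 0.0, 0.0],   # p'(R3) = 0.5 p(R1)
    [0.0, 0.5, 0.0, 0.0],   # p'(R4) = 0.5 p(R2)
])
# Motion model for West: R2 -> R1 and R4 -> R3; R1 and R3 bounce East or stay.
T_west = np.array([
    [0.5, 1.0, 0.0, 0.0],
    [0.5, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.5, 1.0],
    [0.0, 0.0, 0.5, 0.0],
])

belief = np.full(4, 0.25)       # uniform prior
belief = update(belief, z=1)    # measure z1 = R2 -> [2/9, 1/3, 2/9, 2/9]
belief = T_north @ belief       # move North      -> [1/3, 7/18, 1/9, 1/6]
belief = update(belief, z=3)    # measure z2 = R4 -> [12/39, 14/39, 4/39, 9/39]
belief = T_west @ belief        # move West       -> [20/39, 6/39, 11/39, 2/39]
print(belief)
```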

4. Assume that $p(x_t \mid z_{1:t-1})$ is a Gaussian with mean $\mu_1$ and variance $\sigma^2 = 1$, and that $p(z_t \mid x_t)$ is a Gaussian with mean $\mu_2$ and the same variance $\sigma^2 = 1$. Calculate their product. If it is a Gaussian, what are its mean and variance?
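Completing the square in the summed exponents gives the answer:

$$
e^{-\frac{(x-\mu_1)^2}{2}} \, e^{-\frac{(x-\mu_2)^2}{2}}
= \exp\!\left(-\frac{\left(x - \frac{\mu_1+\mu_2}{2}\right)^2}{2 \cdot \frac{1}{2}} - \frac{(\mu_1-\mu_2)^2}{4}\right),
$$

so the product is proportional to a Gaussian with mean $(\mu_1 + \mu_2)/2$ and variance $1/2$, scaled by a constant $e^{-(\mu_1-\mu_2)^2/4}$ that depends only on $\mu_1 - \mu_2$ (the product of two pdfs is not itself a normalized pdf).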

Particle Filters

1. Describe in simple sentences the basic steps of particle filtering. What is the extra step needed in addition to the two steps of Bayesian filtering?

   - Prediction: for each particle, sample the next state from a random distribution given the inputs.
   - Update: given a new measurement, update the likelihood (weight) of each particle.
   - Resample: draw particles with replacement, with probabilities proportional to their weights. This is the extra step.

2. Assume the same vacuum cleaner as in the 4-room example above, with probability 0.6 of a correct location measurement and 0.4 of a wrong one, and assume noise-free motion (the robot stays in the same room if it bounces).

   (a) Start with each room containing two particles. Letting $w$ denote the vector of particle weights, $w_i = 1/8$.

   (b) Update: the robot measures $z_1 = R_2$. What is the value of each particle weight?

   $w = [1/9, 1/9, 1/6, 1/6, 1/9, 1/9, 1/9, 1/9]$ (the two particles in $R_2$ get weight 1/6, the rest 1/9).

   (c) Prediction: the robot moves South. What are the locations of all particles?

   4 in $R_3$, 4 in $R_4$.

   (d) Describe a procedure for resampling.

   Compute the cumulative sum of the weights; sample a random number uniformly between 0 and 1; find the corresponding entry in the cumulative-sum vector; add that particle to the new list of particles. Repeat until the new list has as many particles as the old one.
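A minimal Python sketch (ours) of parts (a)–(d) for this example, with rooms indexed 0–3 for $R_1$–$R_4$ and noise-free motion:

```python
import numpy as np

rng = np.random.default_rng(0)

particles = np.repeat(np.arange(4), 2)   # (a) two particles per room

# (b) Update: weight 0.6 if a particle's room matches the measurement z1 = R2,
# 0.4 otherwise, then normalize -> 1/6 for the R2 particles, 1/9 for the rest.
z = 1
weights = np.where(particles == z, 0.6, 0.4)
weights /= weights.sum()

# (c) Prediction: noise-free move South; R1 -> R3, R2 -> R4, while R3 and R4
# bounce and stay put -> 4 particles in R3, 4 in R4.
south = np.array([2, 3, 2, 3])           # next room index for each room
particles = south[particles]

# (d) Resampling: cumulative sum of the weights; for each new slot draw
# u ~ Uniform(0, 1) and copy the particle whose cumsum bin contains u.
bins = np.cumsum(weights)
particles = particles[np.searchsorted(bins, rng.random(len(particles)))]
print(particles)
```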

Kalman Filter

When the motion model is linear, $x_{t+1} = F_t x_t$, with Gaussian uncertainty of zero mean and covariance $Q_t$, and the measurement model is linear, $z_t = H_t x_t$, with measurement noise of zero mean and covariance $R_t$, then the Bayesian filter is equivalent to the Kalman filter estimate $\mu_t$ with covariance $P_t$, with prediction step

$\mu_{t+1}^- = F_t \mu_t^+$
$P_{t+1}^- = F_t P_t^+ F_t^T + Q_t$

and update step

$\mu_t^+ = \mu_t^- + K_t (z_t - H_t \mu_t^-)$
$P_t^+ = (I - K_t H_t) P_t^-$

with $K_t = P_t^- H_t^T (H_t P_t^- H_t^T + R_t)^{-1}$.

1. Write the update and prediction equations for a system with motion model $x_{t+1} = x_t + v_t \Delta t$ with unknown 1D position $x_t$ and unknown 1D constant velocity $v_{t+1} = v_t + {}$ Gaussian noise of zero mean with variance $q$. Assume that the time step $\Delta t = 1$. The measurement is $z_t = x_t + {}$ Gaussian noise of zero mean with variance $r$. Make sure that the state vector is 2D.

   Let the state be $\begin{pmatrix} x \\ v \end{pmatrix}$. Substitute $F_t = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ and $H_t = \begin{pmatrix} 1 & 0 \end{pmatrix}$ in the equations above.

2. In a Kalman filter, does the state covariance depend on the measurement value?

   No: $P_t^+$ depends only on $H_t$, $R_t$, and $P_t^-$, not on $z_t$.

3. What happens to the Kalman update step when the measurement covariance is zero?

   $K_t H_t = I$, so $\mu_t^+ = z_t$ in the update step (taking $H_t = I$): the estimated state comes straight from the measurement (no filtering is really done).

4. What happens to the Kalman update step when the measurement covariance is infinite?

   $K_t = 0$, so the update step does not change the state estimate: the estimated state comes straight from the prediction (no filtering is really done).

5. For a 1D system with $F_t = H_t = 1$, present the update and prediction steps.

   Prediction: $\mu_{t+1}^- = \mu_t^+$, $p_{t+1}^- = p_t^+ + q$.

   Update: $\mu_t^+ = \mu_t^- + k_t (z_t - \mu_t^-)$, $p_t^+ = (1 - k_t)\, p_t^-$, with $k_t = \dfrac{p_t^-}{p_t^- + r}$.

6. Compute a formula for the state covariance after N measurements and motions. (In the special case $q = 0$, each update adds $1/r$ to the inverse covariance, so $1/p_N = 1/p_0 + N/r$; the sketch below iterates the general recursion.)
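A small numerical sketch (ours) for questions 5 and 6, iterating the 1D covariance recursion; the values of p0, q, and r are arbitrary, chosen only for illustration:

```python
def covariance_after(N, p0=1.0, q=0.1, r=0.5):
    """Iterate the 1D Kalman covariance recursion for N prediction/update cycles."""
    p = p0
    for _ in range(N):
        p = p + q                  # prediction: p_{t+1}^- = p_t^+ + q
        p = (1 - p / (p + r)) * p  # update: p_t^+ = (1 - k_t) p_t^-, k_t = p/(p+r)
    return p

# With q = 0 the closed form 1/p_N = 1/p0 + N/r agrees with the recursion:
print(covariance_after(10, q=0.0))   # 0.047619... = 1/21
print(1.0 * 0.5 / (0.5 + 10 * 1.0))  # p0*r/(r + N*p0) = 1/21
# With q > 0, p converges to the fixed point of p = (p + q) r / (p + q + r).
print(covariance_after(200))
```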

SLAM etc

1. Describe the probabilistic setting of the SLAM problem, as well as the graph of poses, landmarks, and measurements.

   https://canvas.upenn.edu/courses/1335873/files/folder/readings?preview=61648861

2. Describe how a robot can localize itself in a 2D map when looking at 3 vertical wall edges at bearings $\beta_1, \beta_2, \beta_3$.

   Pick landmarks 1 and 2, and find the circle containing the points that view the line connecting the two landmarks at angle $\beta_1 - \beta_2$. Now consider landmarks 1 and 3, and find the circle for angle $\beta_1 - \beta_3$. The circles intersect at two points: one of them is a landmark, the other is the robot location.

3. Describe the 3D-3D registration problem and its solution for 3 points.

   Procrustes problem: https://fling.seas.upenn.edu/~cis390/dynamic/slides/cis390_Lecture11.pdf (see the code sketch at the end of this section).

4. Describe the basic steps of the 2D-3D pose estimation problem without solving it.

   PnP problem: https://fling.seas.upenn.edu/~cis390/dynamic/slides/cis390_lecture13.pdf
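A minimal sketch of the SVD solution to the 3D-3D registration (Procrustes) problem of question 3; this is the standard construction, not code from the slides, and the function and variable names are our own:

```python
import numpy as np

def register_3d_3d(A, B):
    """Find rotation R and translation t such that B ~ R @ A + t.

    A and B are 3 x N arrays of corresponding points (N >= 3, not collinear).
    """
    a0 = A.mean(axis=1, keepdims=True)       # centroids
    b0 = B.mean(axis=1, keepdims=True)
    H = (B - b0) @ (A - a0).T                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against a reflection
    R = U @ D @ Vt
    t = b0 - R @ a0
    return R, t

# Quick check with three points and a known rotation about z plus a translation:
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])                 # columns are the 3 points
B = R_true @ A + np.array([[1.0], [2.0], [3.0]])
R, t = register_3d_3d(A, B)                  # recovers R_true and (1, 2, 3)
```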