CIS 390 Fall 2016 — Robotics: Planning and Perception
Final Review Questions
December 14, 2016

Throughout the following questions we assume that $x_t$ is the state vector at time $t$; $z_t$ is the measurement at time $t$; $p(x_t \mid x_{t-1}, u_t)$ is the conditional probability that state $x_{t-1}$ transitions to state $x_t$ under control $u_t$; $p(z_t \mid x_t)$ is the probability of obtaining measurement $z_t$ when observing state $x_t$; and $p(x_t \mid z_{1:t})$ is the sought posterior probability that the state is $x_t$ given all measurements from time 1 to $t$.

Bayes filtering

1. Name the Bayes filtering step accomplished by each of these formulae:
$$p(x_t \mid z_{1:t}) = \frac{p(z_t \mid x_t)\, p(x_t \mid z_{1:t-1})}{p(z_t)} \qquad \text{Update}$$
$$p(x_{t+1} \mid z_{1:t}) = \sum_{x_t} p(x_{t+1} \mid x_t, u_{t+1})\, p(x_t \mid z_{1:t}) \qquad \text{Prediction}$$

2. Explain why the last step involves an integration/summation over $x_t$. In which case can it be written simply as the product $p(x_{t+1} \mid x_t, u_{t+1})\, p(x_t \mid z_{1:t})$?
Because $x_t$ is a random variable (the state is known only through its pdf), the summation marginalizes out $x_t$. If the true value of $x_t$ were known, the expression could be written as a plain product.

3. A robot vacuum moves in a house with 4 rooms $R_1, R_2, R_3, R_4$, arranged in a 2×2 grid with $R_1, R_2$ in the north row and $R_3, R_4$ in the south row. We simplify its location to a discrete state taking the 4 values $R_i$, one per room. The prior location probabilities are uniform, $p(x_0 = R_i) = 1/4$. The robot measures its location by recognizing a landmark in the room: the measurement likelihood of the room it is actually in is 0.6, and the likelihood of each wrong room is 0.4. The robot moves in the 4 directions North, South, East, West, and when it hits a wall it bounces. It always reaches the intended room with probability 1, unless it bounces on a wall, in which case it stays in the same room or ends up in the room in the opposite direction, with probability 0.5 each. For example, if the robot tries to move West from $R_3$, it lands in $R_4$ with probability 0.5 and stays in $R_3$ with probability 0.5. Compute the 4 tables after each step.
Initial table:
  1/4    1/4    1/4    1/4

After measuring $z_1 = R_2$:
  2/9    1/3    2/9    2/9

After moving North, using
  $p'(R_1) = 0.5\,p(R_1) + p(R_3)$
  $p'(R_2) = 0.5\,p(R_2) + p(R_4)$
  $p'(R_3) = 0.5\,p(R_1)$
  $p'(R_4) = 0.5\,p(R_2)$:
  1/3    7/18   1/9    1/6

After measuring $z_2 = R_4$:
  12/39  14/39  4/39   9/39

After moving West:
  20/39  6/39   11/39  2/39

4. Assume that $p(x_t \mid z_{1:t-1})$ is a Gaussian with mean $\mu_1$ and variance $\sigma^2 = 1$, and that $p(z_t \mid x_t)$ is a Gaussian with mean $\mu_2$ and the same variance $\sigma^2 = 1$. Calculate their product. If it is a Gaussian, what are its mean and variance?
Completing the square in the exponent,
$$(x - \mu_1)^2 + (x - \mu_2)^2 = 2\left(x - \tfrac{\mu_1 + \mu_2}{2}\right)^2 + \tfrac{(\mu_1 - \mu_2)^2}{2},$$
so the product is, up to a constant factor depending only on $\mu_1 - \mu_2$, a Gaussian in $x$ with mean $\frac{\mu_1 + \mu_2}{2}$ and variance $1/2$.

Particle Filters

1. Describe in simple sentences the basic steps of particle filtering. What is the extra step that is needed in addition to the two steps of Bayesian filtering?
Prediction: for each particle, sample the next state from the motion model given the inputs.
Update: given a new measurement, update the weight (likelihood) of each particle.
Resample (the extra step): draw particles with replacement, with probabilities proportional to their weights.

2. Assume the same vacuum cleaner as in the 4-room example above, with probability 0.6 of a correct location measurement, 0.4 of a wrong one, and noise-free motion (the robot stays in the same room if it bounces).
(a) Start with each room containing two particles. Let $w$ represent the weights of the particles; then $w_i = 1/8$.
(b) Update: the robot measures $z_1 = R_2$. What is the value of each particle weight?
With the particles ordered two per room $(R_1, R_1, R_2, R_2, R_3, R_3, R_4, R_4)$, after normalization
$w = [1/9,\ 1/9,\ 1/6,\ 1/6,\ 1/9,\ 1/9,\ 1/9,\ 1/9]$.
(c) Prediction: the robot moves South. What are the locations of all the particles?
4 particles in $R_3$ and 4 in $R_4$ (the $R_1$ particles move to $R_3$, the $R_2$ particles to $R_4$, and the $R_3$, $R_4$ particles bounce and stay in place).
(d) Describe a procedure for resampling.
Compute the cumulative sum of the weights; draw a random number uniformly between 0 and 1; find the first entry of the cumulative-sum vector that exceeds it; add the corresponding particle to the new particle set. Repeat once per particle.
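The table computations from Question 3 of the Bayes filtering section and the resampling procedure in (d) can both be sketched in a few lines of Python. This is a sketch with my own function names; rooms are indexed 0–3 for $R_1$–$R_4$, and exact fractions are used so the tables can be checked against the ones above.

```python
import random
from fractions import Fraction as F

def measure(belief, z, p_hit=F(6, 10), p_miss=F(4, 10)):
    # Bayes update: multiply by the measurement likelihood, then normalize.
    w = [(p_hit if i == z else p_miss) * b for i, b in enumerate(belief)]
    s = sum(w)
    return [x / s for x in w]

def move_north(b):
    # Layout R1 R2 / R3 R4: R3->R1 and R4->R2 with prob 1;
    # R1 and R2 bounce (stay with 0.5, opposite room with 0.5).
    return [F(1, 2) * b[0] + b[2], F(1, 2) * b[1] + b[3],
            F(1, 2) * b[0], F(1, 2) * b[1]]

def move_west(b):
    # R2->R1 and R4->R3 with prob 1; R1 and R3 bounce.
    return [F(1, 2) * b[0] + b[1], F(1, 2) * b[0],
            F(1, 2) * b[2] + b[3], F(1, 2) * b[2]]

belief = [F(1, 4)] * 4               # uniform prior
belief = measure(belief, 1)          # z1 = R2 -> [2/9, 1/3, 2/9, 2/9]
belief = move_north(belief)          #         -> [1/3, 7/18, 1/9, 1/6]
belief = measure(belief, 3)          # z2 = R4 -> [12/39, 14/39, 4/39, 9/39]
belief = move_west(belief)           #         -> [20/39, 6/39, 11/39, 2/39]

def resample(particles, weights):
    # Cumulative-sum resampling, as in (d): draw with replacement,
    # proportionally to the weights.
    cum, total = [], F(0)
    for w in weights:
        total += w
        cum.append(total)
    new = []
    for _ in particles:
        u = random.uniform(0, float(total))
        new.append(particles[next(i for i, c in enumerate(cum) if c >= u)])
    return new
```

Using fractions rather than floats makes the intermediate tables exactly comparable to the hand-computed ones.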
Kalman Filter

When the motion model is linear, $x_{t+1} = F_t x_t$, with Gaussian noise of zero mean and covariance $Q_t$, and the measurement model is linear, $z_t = H_t x_t$, with measurement noise of zero mean and covariance $R_t$, the Bayesian filter is equivalent to the Kalman filter estimate $\mu_t$ with covariance $P_t$, with prediction step
$$\mu^-_{t+1} = F_t \mu^+_t, \qquad P^-_{t+1} = F_t P^+_t F_t^T + Q_t$$
and update step
$$\mu^+_t = \mu^-_t + K_t (z_t - H_t \mu^-_t), \qquad P^+_t = (I - K_t H_t)\, P^-_t,$$
with
$$K_t = P^-_t H_t^T (H_t P^-_t H_t^T + R_t)^{-1}.$$

1. Write the update and prediction equations for a system with an unknown 1D position obeying the motion model $x_{t+1} = x_t + v_t \Delta t$ and an unknown 1D constant velocity $v_{t+1} = v_t\,+$ Gaussian noise of zero mean with variance $q$. Assume that the time step $\Delta t = 1$. The measurement is $z_t = x_t\,+$ Gaussian noise of zero mean with variance $r$. Make sure that the state vector is 2D.
Let the state be $\begin{pmatrix} x \\ v \end{pmatrix}$. Substitute $F_t = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ and $H_t = \begin{pmatrix} 1 & 0 \end{pmatrix}$ in the equations above.

2. In a Kalman filter, does the state covariance depend on the measurement value?
No; $P_t$ depends on $F_t$, $H_t$, $Q_t$, and $R_t$, but not on $z_t$.

3. What happens to the Kalman update step when the measurement covariance is zero?
$K_t H_t = I$, so (taking $H_t = I$) $\mu^+_t = z_t$ in the update step, which means the estimated state comes straight from the measurement (no filtering really done).

4. What happens to the Kalman update step when the measurement covariance is infinite?
$K_t = 0$, so the update step does not change the state estimate; the estimated state comes straight from the prediction (no filtering really done).

5. For a 1D system with $F_t = H_t = 1$, present the update and prediction steps.
Prediction: $\mu^-_{t+1} = \mu^+_t$, $\quad p^-_{t+1} = p^+_t + q$.
Update: $\mu^+_t = \mu^-_t + k_t (z_t - \mu^-_t)$, $\quad p^+_t = (1 - k_t)\, p^-_t$, with $k_t = \dfrac{p^-_t}{p^-_t + r}$.

6. Compute a formula for the state covariance after $N$ measurements and motions.
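The constant-velocity filter from Question 1 can be sketched as follows. This is a sketch using NumPy; the values of $q$ and $r$ and the measurement sequence are illustrative, not part of the problem.

```python
import numpy as np

def kf_predict(mu, P, F, Q):
    # Prediction step: mu- = F mu+,  P- = F P+ F^T + Q
    return F @ mu, F @ P @ F.T + Q

def kf_update(mu, P, z, H, R):
    # Update step with gain K = P- H^T (H P- H^T + R)^-1
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    mu_post = mu + K @ (z - H @ mu)
    P_post = (np.eye(len(mu)) - K @ H) @ P
    return mu_post, P_post

# Constant-velocity model from Question 1 (dt = 1, state = (x, v)):
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
q, r = 0.1, 0.5                        # illustrative noise variances
Q = np.array([[0.0, 0.0], [0.0, q]])   # noise enters through the velocity
R = np.array([[r]])

mu, P = np.zeros(2), np.eye(2)
for z in [1.0, 2.1, 2.9]:              # illustrative position readings
    mu, P = kf_predict(mu, P, F, Q)
    mu, P = kf_update(mu, P, np.array([z]), H, R)
```

Running the loop with a different measurement sequence leaves $P$ unchanged, which is exactly the point of Question 2: the covariance recursion never touches $z_t$.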
SLAM etc.

1. Describe the probabilistic setting of the SLAM problem as well as the graph of poses, landmarks, and measurements.
https://canvas.upenn.edu/courses/1335873/files/folder/readings?preview=61648861

2. Describe how a robot can localize itself in a 2D map when looking at 3 vertical wall edges at bearings $\beta_1, \beta_2, \beta_3$.
Pick landmarks 1 and 2, and find the circle containing all points that view the segment connecting the two landmarks at the angle $\beta_1 - \beta_2$. Now consider landmarks 1 and 3, and find the corresponding circle for the angle $\beta_1 - \beta_3$. The two circles intersect at two points: one of them is landmark 1, the other is the robot's location.

3. Describe the 3D-3D registration problem and its solution for 3 points.
Procrustes problem: https://fling.seas.upenn.edu/~cis390/dynamic/slides/cis390_Lecture11.pdf

4. Describe the basic steps of the 2D-3D pose estimation problem without solving it.
PnP problem: https://fling.seas.upenn.edu/~cis390/dynamic/slides/cis390_lecture13.pdf
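For Question 3, the SVD-based (Kabsch/Procrustes) solution can be sketched as follows. This is a sketch, not the slides' exact derivation: `register_3d` is my own name, and it needs at least 3 non-collinear point correspondences.

```python
import numpy as np

def register_3d(P, Q):
    """Find rotation R and translation t minimizing sum ||R p_i + t - q_i||^2.

    P, Q: (n, 3) arrays of corresponding 3D points, n >= 3 non-collinear.
    """
    # 1. Center both point sets on their centroids.
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    A, B = P - cP, Q - cQ
    # 2. SVD of the 3x3 correlation matrix sum_i a_i b_i^T.
    U, _, Vt = np.linalg.svd(A.T @ B)
    # 3. Pick the proper rotation (guard against a reflection solution).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    # 4. Translation aligns the centroids under the recovered rotation.
    t = cQ - R @ cP
    return R, t
```

The determinant-sign guard in step 3 is what distinguishes a rigid registration from a plain orthogonal Procrustes fit: without it, near-planar point sets can yield a reflection instead of a rotation.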