Programming Robots in ROS Slides adapted from CMU, Harvard and TU Stuttgart

Path Planning Problem Given an initial configuration q_start and a goal configuration q_goal, we must generate the best continuous path of legal configurations between these two points, if such a path exists.

Illustration of Basic Definitions (figure: border, q_start, legal configurations, obstacles, optimal path, q_goal)

Feedback control, path finding, trajectory optimization (figure: start → path finding → trajectory optimization → feedback control → goal)
Feedback Control: e.g., q_{t+1} = q_t + J^# (y* − φ(q_t))
Trajectory Optimization: argmin_{q_{0:T}} f(q_{0:T})
Path Finding: find some q_{0:T} with only valid configurations 6/61

Control, path finding, trajectory optimization Combining methods: 1) Path Finding 2) Trajectory Optimization ("smoothing") 3) Feedback Control (figure: start → path finding → trajectory optimization → feedback control → goal) Many problems can be solved with only feedback control (though not optimally) Many more problems can be solved locally optimally with only trajectory optimization Tricky problems need path finding: global search for valid paths 7/61

Types of Navigation Approaches
Reactive (Purely Local): Visual Homing, Wall Following
Bug-Based Algorithms: Sense Direction and Distance to Goal
Metric Path Planning: Path Represented as a Series of Waypoints; Assumes Distance-Based Map or Graph

Outline Bug Algorithms Artificial Potential Fields Wavefront Planning Visibility/Voronoi Graphs Configuration-Space (C-Space) Quad-Trees Probabilistic Road Maps

Buginner Strategy Bug 0 algorithm: 1) head toward goal 2) follow obstacles until you can head toward the goal again 3) continue Assumes known direction to goal; otherwise only local sensing of walls/obstacles & encoders (figure: Start, Goal, path?)

Buginner Strategy Bug 0 algorithm 1) head toward goal 2) follow obstacles until you can head toward the goal again 3) continue Assume a left-turning robot; the turning direction might be decided beforehand OK?

Bug Zapper What map will foil Bug 0? Bug 0 algorithm: 1) head toward goal 2) follow obstacles until you can head toward the goal again 3) continue (figure: start, goal)

A better bug? Call the line from the starting point to the goal the m-line Bug 2 Algorithm 1) head toward goal on the m-line

A better bug? Call the line from the starting point to the goal the m-line Bug 2 Algorithm 1) head toward goal on the m-line 2) if an obstacle is in the way, follow it until you encounter the m-line again.

A better bug? m-line Bug 2 Algorithm 1) head toward goal on the m-line 2) if an obstacle is in the way, follow it until you encounter the m-line again. 3) Leave the obstacle and continue toward the goal OK?

A better bug? Bug 2 Algorithm 1) head toward goal on the m-line 2) if an obstacle is in the way, follow it until you encounter the m-line again closer to the goal. 3) Leave the obstacle and continue toward the goal Better or worse than Bug 1? (figure: Start, Goal)
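The leave condition in step 2 is the heart of Bug 2, and it can be written as a small two-mode controller. Below is a minimal Python sketch of that decision logic (not from the slides); obstacle_ahead, follow_wall and go_to_goal are hypothetical callbacks standing in for the robot's sensing and motion primitives.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def on_m_line(p, start, goal, tol=0.05):
    # perpendicular distance from p to the start-goal line
    (sx, sy), (gx, gy), (px, py) = start, goal, p
    cross = abs((gx - sx) * (py - sy) - (gy - sy) * (px - sx))
    return cross / dist(start, goal) < tol

def bug2_step(mode, hit_dist, pose, start, goal,
              obstacle_ahead, follow_wall, go_to_goal):
    """One decision cycle. Returns (mode, hit_dist, motion_command)."""
    if mode == "GO_TO_GOAL":
        if obstacle_ahead(pose):
            # hit point: remember how far from the goal we were when we met the obstacle
            return "FOLLOW_WALL", dist(pose, goal), follow_wall(pose)
        return mode, hit_dist, go_to_goal(pose, goal)
    # FOLLOW_WALL: leave only when the m-line is met again, closer to the goal
    if on_m_line(pose, start, goal) and dist(pose, goal) < hit_dist:
        return "GO_TO_GOAL", float("inf"), go_to_goal(pose, goal)
    return mode, hit_dist, follow_wall(pose)
```

The controller leaves the boundary only when it re-crosses the m-line strictly closer to the goal than the hit point, which is exactly what distinguishes Bug 2 from Bug 0.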

Outline Bug Algorithms Artificial Potential Fields Wavefront Planning Visibility/Voronoi Graphs Configuration-Space (C-Space) Quad-Trees Probabilistic Road Maps

The Basic Idea A really simple idea: Suppose the goal is a point g ∈ ℝ² Suppose the robot is a point r ∈ ℝ² Think of a spring drawing the robot toward the goal and away from obstacles: Can also think of like and opposite charges Adapted from slides by Howie Choset, with slides from Ji Yeong Lee, G.D. Hager and Z. Dodds

Attractive/Repulsive Potential Field U_att is the attractive potential --- move to the goal U_rep is the repulsive potential --- avoid obstacles Adapted from slides by Howie Choset, with slides from Ji Yeong Lee, G.D. Hager and Z. Dodds

Artificial Potential Field Methods: Attractive Potential
Conical Potential: U_att(q) = k ||q − q_goal||
Quadratic Potential: U_att(q) = (1/2) k ||q − q_goal||²
F_att(q) = −∇U_att(q) = −k (q − q_goal) (quadratic case)
Adapted from slides by Howie Choset, with slides from Ji Yeong Lee, G.D. Hager and Z. Dodds

Artificial Potential Field Methods: Attractive Potential Combined Potential In some cases, it may be desirable to have distance functions that grow more slowly to avoid huge velocities far from the goal one idea is to use the quadratic potential near the goal (< d*) and the conic farther away One minor issue: what? Adapted from slides by Howie Choset, with slides from Ji Yeong Lee, G.D. Hager and Z. Dodds

The Repulsive Potential Adapted from slides by Howie Choset, with slides from Ji Yeong Lee, G.D. Hager and Z. Dodds

Repulsive Potential Adapted from slides by Howie Choset, with slides from Ji Yeong Lee, G.D. Hager and Z. Dodds

Total Potential Function U(q) = U_att(q) + U_rep(q), F(q) = −∇U(q) Adapted from slides by Howie Choset, with slides from Ji Yeong Lee, G.D. Hager and Z. Dodds
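As a concrete illustration, here is a minimal Python sketch of gradient descent on a combined potential, assuming the quadratic attractive potential and the common inverse-distance repulsive potential with influence radius d0; the gains k_att, k_rep and the point-obstacle model are illustrative choices, not values from the slides.

```python
import numpy as np

def attractive_grad(q, q_goal, k_att=1.0):
    # gradient of U_att = 0.5 * k_att * ||q - q_goal||^2
    return k_att * (q - q_goal)

def repulsive_grad(q, obstacles, k_rep=1.0, d0=1.0):
    grad = np.zeros_like(q)
    for o in obstacles:                              # obstacles as point approximations
        d = np.linalg.norm(q - o)
        if 1e-9 < d < d0:                            # repulsion only inside influence radius d0
            grad += k_rep * (1.0 / d0 - 1.0 / d) * (q - o) / d**3
    return grad

def gradient_descent(q_start, q_goal, obstacles, step=0.05, tol=1e-2, iters=2000):
    q = np.array(q_start, float)
    goal = np.asarray(q_goal, float)
    path = [q.copy()]
    for _ in range(iters):
        if np.linalg.norm(q - goal) < tol:
            break
        g = attractive_grad(q, goal) + repulsive_grad(q, obstacles)
        q = q - step * g                             # move along the negative gradient
        path.append(q.copy())
    return np.array(path)

# example: one point obstacle near the straight-line path
path = gradient_descent([0, 0], [5, 5], [np.array([2.5, 2.6])])
```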

Potential Fields Adapted from slides by Howie Choset, with slides from Ji Yeong Lee, G.D. Hager and Z. Dodds

Outline Bug Algorithms Artificial Potential Fields Wavefront Planning Visibility/Voronoi Graphs Configuration-Space (C-Space) Quad-Trees Probabilistic Road Maps

Computing Distance: Use a Grid Use a discrete version of space and work from there. The Brushfire algorithm is one way to do this: need to define a grid on space; need to define connectivity (4 or 8 neighbors); obstacle cells start with a 1 in the grid; free space is zero. Adapted from slides by Howie Choset, with slides from Ji Yeong Lee, G.D. Hager and Z. Dodds

The Wave-front Planner
Apply the brushfire algorithm starting from the goal:
Label the goal pixel 2 and add all its zero-valued neighbors to L
While L ≠ ∅:
dequeue the top element t of L
set d(t) to 1 + min_{t′ ∈ N(t), d(t′) > 1} d(t′)
enqueue all t′ ∈ N(t) with d(t′) = 0 to L (at the end)
The result is now a distance for every cell; gradient descent is again a matter of moving to the neighbor with the lowest distance value
Adapted from slides by Howie Choset, with slides from Ji Yeong Lee, G.D. Hager and Z. Dodds
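A minimal Python sketch of this wavefront expansion on an occupancy grid follows; it assumes the slide's conventions (obstacles = 1, free space = 0, goal labeled 2) and uses 4-connectivity for brevity.

```python
from collections import deque

def wavefront(grid, goal):
    rows, cols = len(grid), len(grid[0])
    d = [row[:] for row in grid]          # copy: obstacles stay 1, free space 0
    gr, gc = goal
    d[gr][gc] = 2                         # label the goal cell 2
    L = deque([(gr, gc)])
    while L:
        r, c = L.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and d[nr][nc] == 0:
                d[nr][nc] = d[r][c] + 1   # one step farther from the goal
                L.append((nr, nc))
    return d                              # cells still 0 are unreachable
```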

The Wavefront Planner: Setup Adapted from slides by Howie Choset, with slides from Ji Yeong Lee, G.D. Hager and Z. Dodds

The Wavefront in Action (Part 1) Starting with the goal, set all adjacent cells with 0 to the current cell + 1 4-Point Connectivity or 8-Point Connectivity? Your Choice. We'll use 8-Point Connectivity in our example Adapted from slides by Howie Choset, with slides from Ji Yeong Lee, G.D. Hager and Z. Dodds

The Wavefront in Action (Part 2) Now repeat with the modified cells This will be repeated until no 0s are adjacent to cells with values >= 2 0s will only remain when regions are unreachable Adapted from slides by Howie Choset, with slides from Ji Yeong Lee, G.D. Hager and Z. Dodds

The Wavefront in Action (Part 3) Repeat again... Adapted from slides by Howie Choset, with slides from Ji Yeong Lee, G.D. Hager and Z. Dodds

The Wavefront in Action (Part 4) And again... Adapted from slides by Howie Choset, with slides from Ji Yeong Lee, G.D. Hager and Z. Dodds

The Wavefront in Action (Part 5) And again until... Adapted from slides by Howie Choset, with slides from Ji Yeong Lee, G.D. Hager and Z. Dodds

The Wavefront in Action (Done) You're done Remember, 0s should only remain if unreachable regions exist Adapted from slides by Howie Choset, with slides from Ji Yeong Lee, G.D. Hager and Z. Dodds

The Wavefront, Now What? To find the shortest path, according to your metric, simply always move toward a cell with a lower number The numbers generated by the Wavefront planner are roughly proportional to their distance from the goal Two possible shortest paths shown Adapted from slides by Howie Choset, with slides from Ji Yeong Lee, G.D. Hager and Z. Dodds
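Continuing the sketch above, extracting a path is just repeated descent to the lowest-valued labeled neighbor; this assumes the distance field d returned by the wavefront sketch, 4-connectivity, and a reachable start cell.

```python
def extract_path(d, start):
    r, c = start
    path = [(r, c)]
    while d[r][c] != 2:                              # goal cell is labeled 2
        best = None
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(d) and 0 <= nc < len(d[0]) and d[nr][nc] >= 2:
                if best is None or d[nr][nc] < d[best[0]][best[1]]:
                    best = (nr, nc)
        r, c = best                                  # always step to the lowest-valued neighbor
        path.append((r, c))
    return path
```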

Outline Bug Algorithms Artificial Potential Fields Wavefront Planning Visibility/Voronoi Graphs Configuration-Space (C-Space) Quad-Trees Probabilistic Road Maps

Illustration of Basic Definitions (figure: border, q_start, legal configurations, obstacles, optimal path, q_goal)

Visibility Graph Example q_start q_goal

Voronoi Graph Example q_start q_goal

C-Space Transform Steps For a mobile robot that only translates: Choose a reference point on the robot. Grow each obstacle by translating the robot around the edge of the obstacle and tracing the line made by the reference point of the robot. Represent the robot only by the reference point. Legal configurations now consist of all non-obstacle points. There is a trick to make this process easier. Robots that can rotate as well are usually represented by a circle. Once the obstacles have been grown, you can plan a path!
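As a sketch of the "grow the obstacles" step on a grid map: approximate the robot by a disk of a given cell radius and dilate every obstacle cell by that disk, after which the robot can be planned for as a single reference point. This is an illustrative implementation, not the slides' code.

```python
def grow_obstacles(grid, radius):
    """Dilate obstacle cells (value 1) by a disk of `radius` cells."""
    rows, cols = len(grid), len(grid[0])
    grown = [[0] * cols for _ in range(rows)]
    offsets = [(dr, dc) for dr in range(-radius, radius + 1)
                        for dc in range(-radius, radius + 1)
                        if dr * dr + dc * dc <= radius * radius]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:                       # original obstacle cell
                for dr, dc in offsets:
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        grown[nr][nc] = 1             # mark the dilated cell
    return grown
```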

Cell Decomposition Last approach: Divide C-space into convex polygons and plan between legal cells We'll use grids but the algorithm extends to general convex polygons of different sizes Algorithm: 1. Start with the C-space map 2. Divide the map into polygons 3. Mark cells containing obstacles as occupied 4. Search for a path to the goal using unoccupied cells

Cell Decomposition Example (figure: q_start, q_goal; cells marked Occupied / Free)

Cell Decomposition in Practice Multiple ways to sub-divide C-space: Grids, Quad-trees Resolution is a limiting issue Too fine a grid leads to long search time Too coarse a grid misses paths May require post-planning smoothing Requires a search algorithm E.g. A* and D*

Quad-Tree Example

Quadtree Example Space Representation Equivalent quadtree Russell Gayle, The University of North Carolina, Chapel Hill

Quadtree Example Space Representation Equivalent quadtree NW child NE SW SE Gray node Free node

Quadtree Example Space Representation Equivalent quadtree G S(G) Obstacle Node

Quadtree Example Space Representation Equivalent quadtree Each of these steps is an example of a pruned quadtree, i.e. the space at different resolutions

Quadtree Example Space Representation Equivalent quadtree

Quadtree Example Space Representation Equivalent quadtree Complete quadtree

Quadtree-Based Path Planning Preprocessing Step 1 Grow the obstacles by the radius of the robot's cross section Convert the result into a quadtree Step 2 Compute a distance transform of the free nodes (from the center of the region represented by a node to the nearest obstacle) Given start and goal points Determine the nodes S and G which contain these points Compute the minimum cost path from S to G through free nodes using the A* graph search

Search Once you have your graph you need to search for the best path Several search methods can be used: A* is the most popular (we will discuss this briefly on Monday) Other options include random search, depth-first search, breadth-first search, etc. Good search techniques are important for a variety of reasons; you will learn more about them in 15-211 and other CS classes

Dijkstra's Algorithm Is efficient in discrete domains Given start and goal nodes in an arbitrary graph Incrementally label nodes with their distance-from-start Produces optimal (shortest) paths Applying this to continuous domains requires discretization Grid-like discretization in high dimensions is daunting! (curse of dimensionality) What are other ways to discretize space more efficiently? 27/61
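A minimal Python sketch of Dijkstra's algorithm on an explicit graph (adjacency dict mapping node -> list of (neighbor, cost) pairs); illustrative only, but any graph produced by the planners in this lecture could be fed to it.

```python
import heapq

def dijkstra(graph, start, goal):
    dist = {start: 0.0}
    parent = {}
    pq = [(0.0, start)]                               # priority queue of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                                  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if goal not in dist:
        return None                                   # unreachable
    path = [goal]
    while path[-1] != start:                          # walk parents back to the start
        path.append(parent[path[-1]])
    return path[::-1]
```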

Sample-based Path Finding 28/61

Probabilistic Road Maps [Kavraki, Svestka, Latombe, Overmars, 95] 29/61

Probabilistic Road Maps [Kavraki, Svestka, Latombe, Overmars, 95] q ∈ ℝⁿ describes a configuration Q_free is the set of configurations without collision 29/61

Probabilistic Road Maps [Kavraki, Svestka, Latombe, Overmars, 95] A Probabilistic Road Map generates a graph G = (V, E) of configurations such that configurations along each edge are in Q_free 30/61

Probabilistic Road Maps Given the graph, use (e.g.) Dijkstra to find a path from q_start to q_goal. 31/61

Probabilistic Road Maps generation
Input: number n of samples, number k of nearest neighbors
Output: PRM G = (V, E)
1: initialize V = ∅, E = ∅
2: while |V| < n do // find n collision-free points q_i
3: q ← random sample from Q
4: if q ∈ Q_free then V ← V ∪ {q}
5: end while
6: for all q ∈ V do // check if near points can be connected
7: N_q ← k nearest neighbors of q in V
8: for all q′ ∈ N_q do
9: if path(q, q′) ⊂ Q_free then E ← E ∪ {(q, q′)}
10: end for
11: end for
where path(q, q′) is a local planner (easiest: straight line) 32/61
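A minimal Python sketch following this pseudocode; sample, in_free and segment_free are assumed stand-ins for the sampler, collision checker and straight-line local planner of a real environment model.

```python
import math, random

def prm(n, k, sample, in_free, segment_free):
    """sample() -> config tuple, in_free(q) -> bool, segment_free(q, q2) -> bool."""
    V = []
    while len(V) < n:                       # rejection-sample n collision-free configurations
        q = sample()
        if in_free(q):
            V.append(q)
    E = []
    for q in V:                             # try to connect each node to its k nearest neighbors
        neighbors = sorted((v for v in V if v is not q),
                           key=lambda v: math.dist(q, v))[:k]
        for q2 in neighbors:
            if segment_free(q, q2):         # local planner: straight-line collision test
                E.append((q, q2))
    return V, E

# example in an empty unit square (no obstacles)
V, E = prm(50, 5,
           sample=lambda: (random.random(), random.random()),
           in_free=lambda q: True,
           segment_free=lambda a, b: True)
```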

Local Planner tests collisions up to a specified resolution δ 33/61

Problem: Narrow Passages The smaller the gap (clearance ϱ), the more unlikely it is to sample such points. 34/61

PRM theory (for uniform sampling in d-dim space)
Let a, b ∈ Q_free and γ a path in Q_free connecting a and b. Then the probability that PRM found the path after n samples is
P(PRM-success | n) ≥ 1 − (2|γ|/ϱ) e^(−σ ϱ^d n)
with σ = |B_1| / (2^d |Q_free|) and ϱ = clearance of γ (distance to obstacles) (roughly: the exponential term is a ratio of volumes)
This property is called probabilistic completeness (one can achieve any success probability with high enough n)
For a given success probability, n needs to be exponential in d 35/61

Other PRM sampling strategies illustration from O. Brock's lecture
Gaussian: q_1 ∼ U; q_2 ∼ N(q_1, σ); if q_1 ∈ Q_free and q_2 ∉ Q_free, add q_1 (or vice versa).
Bridge: q_1 ∼ U; q_2 ∼ N(q_1, σ); q_3 = (q_1 + q_2)/2; if q_1, q_2 ∉ Q_free and q_3 ∈ Q_free, add q_3.
The sampling strategy can be made more intelligent: utility-based sampling
Connection sampling (once earlier sampling has produced connected components) 36/61
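A minimal sketch of the two tests above; in_free is an assumed collision checker, sample_uniform an assumed uniform sampler, and sigma the spread of the Gaussian perturbation.

```python
import random

def gauss_near(q, sigma):
    # q2 ~ N(q1, sigma), applied per coordinate
    return tuple(x + random.gauss(0.0, sigma) for x in q)

def gaussian_sample(sample_uniform, in_free, sigma):
    q1 = sample_uniform()
    q2 = gauss_near(q1, sigma)
    f1, f2 = in_free(q1), in_free(q2)
    if f1 and not f2:
        return q1                    # keep the free one of a free/colliding pair (near an obstacle)
    if f2 and not f1:
        return q2
    return None                      # reject: both free or both colliding

def bridge_sample(sample_uniform, in_free, sigma):
    q1 = sample_uniform()
    q2 = gauss_near(q1, sigma)
    if in_free(q1) or in_free(q2):
        return None                  # both endpoints must be in collision
    q3 = tuple((a + b) / 2.0 for a, b in zip(q1, q2))
    return q3 if in_free(q3) else None   # a free midpoint suggests a narrow passage
```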

Probabilistic Roadmaps conclusions Pros: Algorithmically very simple Highly explorative Allows probabilistic performance guarantees Good to answer many queries in an unchanged environment Cons: Precomputation of exhaustive roadmap takes a long time (but not necessary for Lazy PRMs ) 37/61

Rapidly Exploring Random Trees 2 motivations: Single-query path finding: Focus computational efforts on paths for specific (q_start, q_goal) Use actually controllable DoFs to incrementally explore the search space: control-based path finding. (Ensures that RRTs can be extended to handle differential constraints.) 38/61

n = 1 39/61

n = 100 39/61

n = 300 39/61

n = 600 39/61

n = 1000 39/61

n = 2000 39/61

Rapidly Exploring Random Trees Simplest RRT with straight line local planner and step size α
Input: q_start, number n of nodes, stepsize α
Output: tree T = (V, E)
1: initialize V = {q_start}, E = ∅
2: for i = 0 : n do
3: q_target ← random sample from Q
4: q_near ← nearest neighbor of q_target in V
5: q_new ← q_near + (α / |q_target − q_near|) (q_target − q_near)
6: if q_new ∈ Q_free then V ← V ∪ {q_new}, E ← E ∪ {(q_near, q_new)}
7: end for 40/61
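A minimal Python sketch of this simplest RRT in the plane; sample and in_free are assumed stand-ins for a real environment model, and alpha is the step size from the pseudocode.

```python
import math, random

def rrt(q_start, n, alpha, sample, in_free):
    V = [tuple(q_start)]
    E = []
    for _ in range(n):
        q_target = sample()
        q_near = min(V, key=lambda v: math.dist(v, q_target))   # nearest node in the tree
        d = math.dist(q_near, q_target)
        if d < 1e-9:
            continue
        # step of length alpha from q_near toward q_target
        q_new = tuple(a + alpha * (b - a) / d for a, b in zip(q_near, q_target))
        if in_free(q_new):
            V.append(q_new)
            E.append((q_near, q_new))
    return V, E

# example: explore the unit square with no obstacles
V, E = rrt((0.5, 0.5), 500, 0.05,
           sample=lambda: (random.random(), random.random()),
           in_free=lambda q: True)
```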

Rapidly Exploring Random Trees RRT growing directed towards the goal
Input: q_start, q_goal, number n of nodes, stepsize α, goal bias β
Output: tree T = (V, E)
1: initialize V = {q_start}, E = ∅
2: for i = 0 : n do
3: if rand(0, 1) < β then q_target ← q_goal
4: else q_target ← random sample from Q
5: q_near ← nearest neighbor of q_target in V
6: q_new ← q_near + (α / |q_target − q_near|) (q_target − q_near)
7: if q_new ∈ Q_free then V ← V ∪ {q_new}, E ← E ∪ {(q_near, q_new)}
8: end for 41/61

n = 1 42/61

n = 100 42/61

n = 200 42/61

n = 300 42/61

n = 400 42/61

n = 500 42/61

Bi-directional search grow two trees starting from q_start and q_goal let one tree grow towards the other (e.g., choose q_new of T_1 as q_target of T_2) 43/61

Summary: RRTs Pros (shared with PRMs): Algorithmically very simple Highly explorative Allows probabilistic performance guarantees Pros (beyond PRMs): Focus computation on single query (q_start, q_goal) problem Trees from multiple queries can be merged into a roadmap Can be extended to differential constraints (nonholonomic systems) To keep in mind (shared with PRMs): The metric (for nearest neighbor selection) is sometimes critical The local planner may be non-trivial 44/61

References Steven M. LaValle: Planning Algorithms, http://planning.cs.uiuc.edu/. Choset et al.: Principles of Robot Motion, MIT Press. Latombe's motion planning lecture, http://robotics.stanford.edu/~latombe/cs326/2007/schedule.htm 45/61

Non-holonomic systems 46/61

Non-holonomic systems We define a nonholonomic system as one with differential constraints: dim(u_t) < dim(x_t) Not all degrees of freedom are directly controllable Dynamic systems are an example! General definition of a differential constraint: For any given state x, let U_x be the tangent space that is generated by controls: U_x = {ẋ : ẋ = f(x, u), u ∈ U} (non-holonomic ⟺ dim(U_x) < dim(x)) The elements of U_x are elements of T_x subject to additional differential constraints. 47/61

Car example
ẋ = v cos θ
ẏ = v sin θ
θ̇ = (v/l) tan φ
|φ| < Φ
State q = (x, y, θ) Controls u = (v, φ)
System equation: (ẋ, ẏ, θ̇) = (v cos θ, v sin θ, (v/l) tan φ) 48/61
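A minimal sketch of simulating this car model with explicit Euler integration; the wheelbase l, the time step and the control values are illustrative choices.

```python
import math

def car_step(q, u, wheelbase=0.5, dt=0.05):
    """One Euler step of xdot = v cos(theta), ydot = v sin(theta), thetadot = (v/l) tan(phi)."""
    x, y, theta = q
    v, phi = u
    return (x + dt * v * math.cos(theta),
            y + dt * v * math.sin(theta),
            theta + dt * (v / wheelbase) * math.tan(phi))

# drive forward while steering left for 2 seconds
q = (0.0, 0.0, 0.0)
for _ in range(40):
    q = car_step(q, (1.0, 0.3))
```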

Car example The car is a non-holonomic system: not all DoFs are controlled, dim(u) < dim(q) We have the differential constraint on q̇: ẋ sin θ − ẏ cos θ = 0 (a car cannot move directly laterally). Analogy to dynamic systems: Just like a car cannot instantly move sideways, a dynamic system cannot instantly change its position q: the current change in position is constrained by the current velocity q̇. 49/61

Path finding with a non-holonomic system Could a car follow this trajectory? This is generated with a PRM in the state space q = (x, y, θ) ignoring the differential constraint. 50/61

Path finding with a non-holonomic system This is a solution we would like to have: The path respects differential constraints. Each step in the path corresponds to setting certain controls. 51/61

Control-based sampling to grow a tree Control-based sampling: fulfils none of the nice exploration properties of RRTs, but fulfils the differential constraints: 1) Select a q ∈ T from the tree of current configurations 2) Pick a control vector u at random 3) Integrate the equation of motion over a short duration (picked at random or not) 4) If the motion is collision-free, add the endpoint to the tree 52/61

Control-based sampling for the car 1) Select a q ∈ T 2) Pick v, φ, and τ 3) Integrate motion from q 4) Add result if collision-free 53/61
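A minimal sketch of one such control-based expansion for the car, reusing the car_step integrator sketched above; in_free is an assumed collision checker, and the control and duration ranges are illustrative.

```python
import random

def expand_once(tree, in_free, dt=0.05):
    q = random.choice(tree)                      # 1) select a q in the tree
    v = random.uniform(-1.0, 1.0)                # 2) pick controls v, phi and duration tau
    phi = random.uniform(-0.5, 0.5)
    tau = random.uniform(0.2, 1.0)
    steps = max(1, int(tau / dt))
    q_new = q
    for _ in range(steps):                       # 3) integrate the motion from q
        q_new = car_step(q_new, (v, phi), dt=dt)
        if not in_free(q_new):
            return None                          # discard if any intermediate state collides
    tree.append(q_new)                           # 4) add the collision-free endpoint
    return q_new
```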

J. Barraquand and J.C. Latombe. Nonholonomic Multibody Robots: Controllability and Motion Planning in the Presence of Obstacles. Algorithmica, 10:121-155, 1993. car parking 54/61

car parking 55/61

parking with only left-steering 56/61