Robotics. Path Planning. Marc Toussaint U Stuttgart


Robotics. Path Planning. Path finding vs. trajectory optimization, local vs. global, Dijkstra, Probabilistic Roadmaps, Rapidly Exploring Random Trees, non-holonomic systems, car system equation, path finding for non-holonomic systems, control-based sampling, Dubins curves. Marc Toussaint, U Stuttgart

Path finding examples Alpha-Puzzle, solved with James Kuffner's RRTs 2/61

Path finding examples J. Kuffner, K. Nishiwaki, S. Kagami, M. Inaba, and H. Inoue. Footstep Planning Among Obstacles for Biped Robots. Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2001. 3/61

Path finding examples T. Bretl. Motion Planning of Multi-Limbed Robots Subject to Equilibrium Constraints: The Free-Climbing Robot Problem. International Journal of Robotics Research, 25(4):317-342, Apr 2006. 4/61

Path finding examples S. M. LaValle and J. J. Kuffner. Randomized Kinodynamic Planning. International Journal of Robotics Research, 20(5):378-400, May 2001. 5/61

Feedback control, path finding, trajectory optimization [figure: start → goal, reached via feedback control, path finding, or trajectory optimization] Feedback Control: e.g., q_{t+1} = q_t + J^♯ (y* − φ(q_t)) Trajectory Optimization: argmin_{q_{0:T}} f(q_{0:T}) Path Finding: find some q_{0:T} with only valid configurations 6/61
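
For concreteness, a minimal Python/numpy sketch of one such feedback-control step; the task map phi, its Jacobian jacobian, and the damped pseudo-inverse are illustrative assumptions, not part of the slides:

```python
import numpy as np

def ik_feedback_step(q, y_target, phi, jacobian, damping=1e-2):
    """One feedback-control step q_{t+1} = q_t + J# (y* - phi(q_t)).

    phi(q) maps a configuration to task space, jacobian(q) returns the task
    Jacobian; both are assumed user-supplied callables."""
    J = jacobian(q)
    # damped pseudo-inverse J# = J^T (J J^T + damping * I)^{-1}
    J_sharp = J.T @ np.linalg.inv(J @ J.T + damping * np.eye(J.shape[0]))
    return q + J_sharp @ (y_target - phi(q))
```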

Control, path finding, trajectory optimization. Combining methods: 1) Path Finding, 2) Trajectory Optimization ("smoothing"), 3) Feedback Control [figure: start → path finding → trajectory optimization → feedback control → goal] Many problems can be solved with only feedback control (though not optimally). Many more problems can be solved locally optimally with only trajectory optimization. Tricky problems need path finding: global search for valid paths. 7/61

Outline Heuristics & Discretization (slides from Howie Choset's CMU lectures): Bug algorithms, potentials to guide feedback control, Dijkstra. Sample-based Path Finding: Probabilistic Roadmaps, Rapidly Exploring Random Trees. 8/61

A better bug? m-line Bug 2 Algorithm 1) head toward goal on the m-line 2) if an obstacle is in the way, follow it until you encounter the m-line again. 3) Leave the obstacle and continue toward the goal OK? 9/61

A better bug? Bug 2 Algorithm Start 1) head toward goal on the m-line 2) if an obstacle is in the way, follow it until you encounter the m-line again. 3) Leave the obstacle and continue toward the goal Goal Better or worse than Bug1? 10/61

A better bug? Bug 2 Algorithm Start 1) head toward goal on the m-line 2) if an obstacle is in the way, follow it until you encounter the m-line again. 3) Leave the obstacle and continue toward the goal Goal NO! How do we fix this? 11/61

A better bug? Bug 2 Algorithm Start 1) head toward goal on the m-line 2) if an obstacle is in the way, follow it until you encounter the m-line again closer to the goal. 3) Leave the obstacle and continue toward the goal Goal Better or worse than Bug1? 12/61

BUG algorithms conclusions Other variants: TangentBug, VisBug, RoverBug, WedgeBug,... only 2D! (TangentBug has extension to 3D) Guaranteed convergence Still active research: K. Taylor and S.M. LaValle: I-Bug: An Intensity-Based Bug Algorithm Useful for minimalistic, robust 2D goal reaching not useful for finding paths in joint space 13/61

Start-Goal Algorithm: Potential Functions 14/61

Repulsive Potential 15/61

Total Potential Function U(q) = U_att(q) + U_rep(q), F(q) = −∇U(q) = −(∇U_att(q) + ∇U_rep(q)) 16/61
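
A minimal Python sketch of descending this total potential; the quadratic attractive term, the classic repulsive term with influence distance d0, the point-obstacle list, and the gains are illustrative assumptions:

```python
import numpy as np

def attractive_grad(q, q_goal, k_att=1.0):
    # gradient of U_att(q) = 0.5 * k_att * ||q - q_goal||^2
    return k_att * (q - q_goal)

def repulsive_grad(q, obstacles, d0=1.0, k_rep=1.0):
    # gradient of U_rep(q) = 0.5 * k_rep * (1/d - 1/d0)^2 for obstacle distance d < d0
    grad = np.zeros_like(q, dtype=float)
    for o in obstacles:
        diff = q - np.asarray(o, dtype=float)
        d = np.linalg.norm(diff)
        if 0.0 < d < d0:
            grad += k_rep * (1.0 / d0 - 1.0 / d) * diff / d**3
    return grad

def potential_field_step(q, q_goal, obstacles, step=0.05):
    # follow the force F(q) = -grad U(q) = -(grad U_att + grad U_rep)
    grad = attractive_grad(q, q_goal) + repulsive_grad(q, obstacles)
    return q - step * grad
```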

Local Minimum Problem with the Charge Analogy 17/61

Potential fields conclusions Very simple, therefore popular. In our framework: combining a goal (end-effector) task variable with a constraint (collision avoidance) task variable and then using inverse kinematics is exactly the same as potential fields. Does not solve the locality problem of feedback control. 18/61


The Wavefront in Action (Part 1) Starting with the goal, set all adjacent cells with 0 to the current cell + 1. 4-Point Connectivity or 8-Point Connectivity? Your choice; we'll use 8-Point Connectivity in our example. 20/61

The Wavefront in Action (Part 2) Now repeat with the modified cells. This will be repeated until no 0's are adjacent to cells with values >= 2. 0's will only remain where regions are unreachable. 21/61

The Wavefront in Action (Part 3) Repeat again... 22/61

The Wavefront in Action (Part 4) And again... 23/61

The Wavefront in Action (Part 5) And again until... 24/61

The Wavefront in Action (Done) You're done. Remember, 0's should only remain if unreachable regions exist. 25/61

The Wavefront, Now What? To find the shortest path, according to your metric, simply always move toward a cell with a lower number. The numbers generated by the wavefront planner are roughly proportional to their distance from the goal. Two possible shortest paths shown. 26/61
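
A minimal Python sketch of the wavefront expansion and the downhill path extraction described above; the grid convention (0 = free, 1 = obstacle, goal cell labeled 2) follows the slides, everything else is an illustrative assumption:

```python
from collections import deque

def wavefront(grid, goal):
    """Wavefront labeling on an occupancy grid with 8-point connectivity."""
    rows, cols = len(grid), len(grid[0])
    label = [row[:] for row in grid]          # copy: 0 = free, 1 = obstacle
    label[goal[0]][goal[1]] = 2               # the goal cell starts at 2
    frontier = deque([goal])
    while frontier:
        r, c = frontier.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and label[nr][nc] == 0:
                    label[nr][nc] = label[r][c] + 1   # adjacent free cell gets current + 1
                    frontier.append((nr, nc))
    return label                              # remaining 0's mark unreachable regions

def extract_path(label, start):
    """Follow decreasing labels from start down to the goal cell (label 2)."""
    r, c = start
    path = [(r, c)]
    while label[r][c] > 2:
        best = min(((r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if 0 <= r + dr < len(label) and 0 <= c + dc < len(label[0])
                    and label[r + dr][c + dc] >= 2),
                   key=lambda rc: label[rc[0]][rc[1]])
        r, c = best
        path.append(best)
    return path
```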

Dijkstra Algorithm Is efficient in discrete domains: given start and goal nodes in an arbitrary graph, incrementally label nodes with their distance-from-start; produces optimal (shortest) paths. Applying this to continuous domains requires discretization. Grid-like discretization in high dimensions is daunting! (curse of dimensionality) What are other ways to discretize space more efficiently? 27/61
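
A minimal Python sketch of Dijkstra on an explicit graph; the adjacency-dict representation {node: [(neighbor, cost), ...]} is an illustrative assumption:

```python
import heapq

def dijkstra(graph, start, goal):
    """Incrementally label nodes with their distance-from-start and return a shortest path."""
    dist, parent = {start: 0.0}, {}
    queue = [(0.0, start)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == goal:
            break
        if d > dist.get(u, float('inf')):
            continue                          # stale queue entry
        for v, cost in graph.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float('inf')):
                dist[v], parent[v] = nd, u
                heapq.heappush(queue, (nd, v))
    if goal not in dist:
        return None                           # goal unreachable
    path, node = [goal], goal
    while node != start:
        node = parent[node]
        path.append(node)
    return path[::-1]
```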

Sample-based Path Finding 28/61


Probabilistic Road Maps [Kavraki, Švestka, Latombe, Overmars, 95] q ∈ R^n describes a configuration; Q_free is the set of configurations without collision 29/61

Probabilistic Road Maps [Kavraki, Švestka, Latombe, Overmars, 95] A Probabilistic Road Map generates a graph G = (V, E) of configurations such that the configurations along each edge are in Q_free 30/61

Probabilistic Road Maps Given the graph, use (e.g.) Dijkstra to find a path from q_start to q_goal. 31/61

Probabilistic Road Maps generation
Input: number n of samples, number k of nearest neighbors
Output: PRM G = (V, E)
1: initialize V = ∅, E = ∅
2: while |V| < n do // find n collision-free points q_i
3:   q ← random sample from Q
4:   if q ∈ Q_free then V ← V ∪ {q}
5: end while
6: for all q ∈ V do // check if near points can be connected
7:   N_q ← k nearest neighbors of q in V
8:   for all q' ∈ N_q do
9:     if path(q, q') ⊂ Q_free then E ← E ∪ {(q, q')}
10:   end for
11: end for
where path(q, q') is a local planner (easiest: straight line) 32/61
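
A minimal Python sketch of this PRM construction; sample() (uniform sampling from Q), in_free_space(q), and path_free(q, q') (the local planner) are assumed user-supplied callables, and the Euclidean nearest-neighbor search is brute force for clarity:

```python
import numpy as np

def build_prm(sample, in_free_space, path_free, n=500, k=10):
    """Build a PRM G = (V, E) with n collision-free samples and k-nearest-neighbor edges."""
    V = []
    while len(V) < n:                          # find n collision-free configurations
        q = np.asarray(sample(), dtype=float)
        if in_free_space(q):
            V.append(q)
    E = []
    Q = np.stack(V)
    for i, q in enumerate(V):                  # try to connect near configurations
        d = np.linalg.norm(Q - q, axis=1)
        for j in np.argsort(d)[1:k + 1]:       # k nearest neighbors (skip q itself)
            if path_free(q, V[j]):
                E.append((i, int(j)))
    return V, E
```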

Local Planner tests collisions up to a specified resolution δ 33/61
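
A minimal sketch of such a straight-line local planner that checks collisions at resolution δ; in_free_space(q) is an assumed collision checker. It can serve as the path_free argument of the PRM sketch above:

```python
import numpy as np

def straight_line_free(q1, q2, in_free_space, delta=0.01):
    """Check the straight segment from q1 to q2 at resolution delta."""
    q1, q2 = np.asarray(q1, dtype=float), np.asarray(q2, dtype=float)
    steps = max(int(np.ceil(np.linalg.norm(q2 - q1) / delta)), 1)
    for s in np.linspace(0.0, 1.0, steps + 1):
        if not in_free_space((1.0 - s) * q1 + s * q2):
            return False
    return True
```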

Problem: Narrow Passages The smaller the gap (clearance ϱ), the less likely it is to sample such points. 34/61

PRM theory (for uniform sampling in d-dimensional space)
Let a, b ∈ Q_free and γ a path in Q_free connecting a and b. Then the probability that PRM found the path after n samples is
P(PRM-success | n) ≥ 1 − (2|γ|/ϱ) e^{−σ ϱ^d n}
with σ = |B_1| / (2^d |Q_free|) and ϱ = clearance of γ (distance to obstacles)
(roughly: the exponential term contains volume ratios)
This property is called probabilistic completeness (one can achieve any success probability with high enough n).
For a given success probability, n needs to be exponential in d. 35/61


Other PRM sampling strategies (illustration from O. Brock's lecture) Gaussian: q_1 ∼ U; q_2 ∼ N(q_1, σ); if q_1 ∈ Q_free and q_2 ∉ Q_free, add q_1 (or vice versa). Bridge: q_1 ∼ U; q_2 ∼ N(q_1, σ); q_3 = (q_1 + q_2)/2; if q_1, q_2 ∉ Q_free and q_3 ∈ Q_free, add q_3. The sampling strategy can be made more intelligent: utility-based sampling; connection sampling (once earlier sampling has produced connected components). 36/61
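
Minimal Python sketches of the two strategies; sample_uniform() and in_free_space(q) are assumed callables and σ is an illustrative default:

```python
import numpy as np

def gaussian_sample(sample_uniform, in_free_space, sigma=0.1):
    """Gaussian strategy: keep the free one of a pair only if the other collides,
    so samples concentrate near obstacle boundaries."""
    while True:
        q1 = np.asarray(sample_uniform(), dtype=float)
        q2 = q1 + sigma * np.random.randn(*q1.shape)
        f1, f2 = in_free_space(q1), in_free_space(q2)
        if f1 != f2:
            return q1 if f1 else q2

def bridge_sample(sample_uniform, in_free_space, sigma=0.1):
    """Bridge test: return the free midpoint of two colliding samples (narrow passages)."""
    while True:
        q1 = np.asarray(sample_uniform(), dtype=float)
        q2 = q1 + sigma * np.random.randn(*q1.shape)
        q3 = 0.5 * (q1 + q2)
        if not in_free_space(q1) and not in_free_space(q2) and in_free_space(q3):
            return q3
```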

Probabilistic Roadmaps conclusions Pros: Algorithmically very simple; highly explorative; allows probabilistic performance guarantees; good to answer many queries in an unchanged environment. Cons: Precomputation of an exhaustive roadmap takes a long time (but is not necessary for "Lazy PRMs"). 37/61

Rapidly Exploring Random Trees Two motivations: Single-query path finding: focus computational efforts on paths for a specific (q_start, q_goal). Use actually controllable DoFs to incrementally explore the search space: control-based path finding. (Ensures that RRTs can be extended to handle differential constraints.) 38/61

n = 1 39/61

n = 100 39/61

n = 300 39/61

n = 600 39/61

n = 1000 39/61

n = 2000 39/61

Rapidly Exploring Random Trees Simplest RRT with straight-line local planner and step size α
Input: q_start, number n of nodes, step size α
Output: tree T = (V, E)
1: initialize V = {q_start}, E = ∅
2: for i = 0 : n do
3:   q_target ← random sample from Q
4:   q_near ← nearest neighbor of q_target in V
5:   q_new ← q_near + (α / |q_target − q_near|) (q_target − q_near)
6:   if q_new ∈ Q_free then V ← V ∪ {q_new}, E ← E ∪ {(q_near, q_new)}
7: end for 40/61
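
A minimal Python sketch of this simplest RRT; sample() and in_free_space(q) are assumed callables and the nearest-neighbor search is brute force:

```python
import numpy as np

def rrt(q_start, sample, in_free_space, n=1000, alpha=0.1):
    """Grow an RRT with straight-line steering and step size alpha."""
    V = [np.asarray(q_start, dtype=float)]
    E = []                                     # edges as (parent index, child index)
    for _ in range(n):
        q_target = np.asarray(sample(), dtype=float)
        d = np.linalg.norm(np.stack(V) - q_target, axis=1)
        i_near = int(np.argmin(d))             # nearest tree node
        direction = q_target - V[i_near]
        dist = np.linalg.norm(direction)
        if dist < 1e-12:
            continue
        q_new = V[i_near] + (alpha / dist) * direction
        if in_free_space(q_new):
            V.append(q_new)
            E.append((i_near, len(V) - 1))
    return V, E
```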

Rapidly Exploring Random Trees RRT growing directed towards the goal
Input: q_start, q_goal, number n of nodes, step size α, goal bias β
Output: tree T = (V, E)
1: initialize V = {q_start}, E = ∅
2: for i = 0 : n do
3:   if rand(0, 1) < β then q_target ← q_goal
4:   else q_target ← random sample from Q
5:   q_near ← nearest neighbor of q_target in V
6:   q_new ← q_near + (α / |q_target − q_near|) (q_target − q_near)
7:   if q_new ∈ Q_free then V ← V ∪ {q_new}, E ← E ∪ {(q_near, q_new)}
8: end for 41/61
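
Relative to the RRT sketch above, the goal bias only changes how q_target is drawn; a hedged drop-in helper (β and q_goal are the extra parameters):

```python
import numpy as np

def sample_target(sample, q_goal, beta=0.1):
    """With probability beta return q_goal, otherwise a random sample from Q."""
    if np.random.rand() < beta:
        return np.asarray(q_goal, dtype=float)
    return np.asarray(sample(), dtype=float)
```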

n = 1 42/61

n = 100 42/61

n = 200 42/61

n = 300 42/61

n = 400 42/61

n = 500 42/61

Bi-directional search grow two trees starting from q_start and q_goal let one tree grow towards the other (e.g., choose q_new of T_1 as q_target of T_2) 43/61

Summary: RRTs Pros (shared with PRMs): Algorithmically very simple; highly explorative; allows probabilistic performance guarantees. Pros (beyond PRMs): Focus computation on a single-query (q_start, q_goal) problem; trees from multiple queries can be merged into a roadmap; can be extended to differential constraints (nonholonomic systems). To keep in mind (shared with PRMs): The metric (for nearest-neighbor selection) is sometimes critical; the local planner may be non-trivial. 44/61

References Steven M. LaValle: Planning Algorithms, http://planning.cs.uiuc.edu/. Choset et al.: Principles of Robot Motion, MIT Press. Latombe's motion planning lecture, http://robotics.stanford.edu/~latombe/cs326/2007/schedule.htm 45/61

Non-holonomic systems 46/61

Non-holonomic systems We define a nonholonomic system as one with differential constraints: dim(u_t) < dim(x_t). Not all degrees of freedom are directly controllable. Dynamic systems are an example! General definition of a differential constraint: For any given state x, let U_x be the tangent space that is generated by the controls: U_x = {ẋ : ẋ = f(x, u), u ∈ U} (non-holonomic ⇔ dim(U_x) < dim(x)). The elements of U_x are elements of T_x subject to additional differential constraints. 47/61

Car example ẋ = v cos θ, ẏ = v sin θ, θ̇ = (v/l) tan ϕ, |ϕ| < Φ. State q = (x, y, θ), controls u = (v, ϕ). System equation: (ẋ, ẏ, θ̇) = (v cos θ, v sin θ, (v/l) tan ϕ) 48/61
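
A minimal Python sketch that integrates this system equation with an Euler step; the wheelbase l and the step size dt are illustrative assumptions:

```python
import numpy as np

def car_step(q, v, phi, l=1.0, dt=0.01):
    """One Euler step of the car model: q = (x, y, theta), controls (v, phi)."""
    x, y, theta = q
    return np.array([x + dt * v * np.cos(theta),
                     y + dt * v * np.sin(theta),
                     theta + dt * (v / l) * np.tan(phi)])
```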

Car example The car is a non-holonomic system: not all DoFs are controlled, dim(u) < dim(q). We have the differential constraint on q̇: ẋ sin θ − ẏ cos θ = 0 ("a car cannot move directly lateral"). Analogy to dynamic systems: Just like a car cannot instantly move sidewards, a dynamic system cannot instantly change its position q: the current change in position is constrained by the current velocity q̇. 49/61

Path finding with a non-holonomic system Could a car follow this trajectory? This is generated with a PRM in the state space q = (x, y, θ), ignoring the differential constraint. 50/61

Path finding with a non-holonomic system This is a solution we would like to have: The path respects differential constraints. Each step in the path corresponds to setting certain controls. 51/61

Control-based sampling to grow a tree Control-based sampling fulfils none of the nice exploration properties of RRTs, but fulfils the differential constraints: 1) Select a q ∈ T from the tree of current configurations 2) Pick a control vector u at random 3) Integrate the equation of motion over a short duration (picked at random or not) 4) If the motion is collision-free, add the endpoint to the tree 52/61

Control-based sampling for the car 1) Select a q ∈ T 2) Pick v, ϕ, and τ 3) Integrate motion from q 4) Add the result if collision-free 53/61
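
A minimal Python sketch of one such control-based sampling step, reusing the car_step sketch above; the tree-as-list representation, the control bounds, and in_free_space(q) are illustrative assumptions:

```python
import numpy as np

def control_based_extend(tree, in_free_space, v_max=1.0, phi_max=0.5, tau_max=1.0, dt=0.01):
    """Pick a tree node, random controls (v, phi) and duration tau, integrate, keep if free."""
    q = tree[np.random.randint(len(tree))]        # 1) select a q from the tree
    v = np.random.uniform(-v_max, v_max)          # 2) pick v, phi, and tau at random
    phi = np.random.uniform(-phi_max, phi_max)
    tau = np.random.uniform(0.0, tau_max)
    for _ in range(max(1, int(tau / dt))):        # 3) integrate the motion from q
        q = car_step(q, v, phi, dt=dt)
        if not in_free_space(q):
            return None                           # collision: discard this extension
    tree.append(q)                                # 4) add the collision-free endpoint
    return q
```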

J. Barraquand and J.C. Latombe. Nonholonomic Multibody Robots: Controllability and Motion Planning in the Presence of Obstacles. Algorithmica, 10:121-155, 1993. car parking 54/61

car parking 55/61

parking with only left-steering 56/61

with a trailer 57/61

Better control-based exploration: RRTs revisited RRTs with differential constraints:
Input: q_start, number k of nodes, time interval τ
Output: tree T = (V, E)
1: initialize V = {q_start}, E = ∅
2: for i = 0 : k do
3:   q_target ← random sample from Q
4:   q_near ← nearest neighbor of q_target in V
5:   use local planner to compute controls u that steer q_near towards q_target
6:   q_new ← q_near + ∫_{t=0}^{τ} q̇(q, u) dt
7:   if q_new ∈ Q_free then V ← V ∪ {q_new}, E ← E ∪ {(q_near, q_new)}
8: end for
Crucial questions: How to measure "near" in nonholonomic systems? How to find controls u that steer towards the target? 58/61
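
A minimal Python sketch of one iteration of this differentially constrained RRT, again reusing the car_step sketch; the naive position-only metric and steer(q_near, q_target) (a local planner returning controls (v, phi)) are assumed placeholders for exactly the "crucial questions" raised above:

```python
import numpy as np

def kinodynamic_rrt_step(V, E, sample, in_free_space, steer, tau=0.5, dt=0.01):
    """Extend the tree (V, E) by one node under the car's differential constraints."""
    q_target = np.asarray(sample(), dtype=float)
    d = np.linalg.norm(np.stack(V)[:, :2] - q_target[:2], axis=1)   # naive (x, y) metric
    i_near = int(np.argmin(d))
    v, phi = steer(V[i_near], q_target)           # local planner picks controls
    q = V[i_near]
    for _ in range(max(1, int(tau / dt))):        # q_new = q_near + integral of qdot over tau
        q = car_step(q, v, phi, dt=dt)
        if not in_free_space(q):
            return V, E                           # collision: leave the tree unchanged
    V.append(q)
    E.append((i_near, len(V) - 1))
    return V, E
```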


Metrics Standard/naive metrics:
Comparing two 2D rotations/orientations θ_1, θ_2 ∈ SO(2): a) Euclidean metric between e^{iθ_1} and e^{iθ_2}; b) d(θ_1, θ_2) = min{|θ_1 − θ_2|, 2π − |θ_1 − θ_2|}
Comparing two configurations (x, y, θ)_{1,2} in R^2 × SO(2): Euclidean metric on (x, y, e^{iθ})
Comparing two 3D rotations/orientations r_1, r_2 ∈ SO(3): represent both orientations as unit-length quaternions r_1, r_2 ∈ R^4: d(r_1, r_2) = min{|r_1 − r_2|, |r_1 + r_2|}, where |·| is the Euclidean metric. (Recall that r_1 and −r_1 represent exactly the same rotation.)
Ideal metric: Optimal cost-to-go between two states x_1 and x_2: use the optimal trajectory cost as metric. This is as hard to compute as the original problem, of course!! Approximate, e.g., by neglecting obstacles. 59/61
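
Minimal Python sketches of the SO(2) and SO(3) metrics above (quaternions as length-4 numpy arrays; unit length is assumed):

```python
import numpy as np

def so2_dist(theta1, theta2):
    """d(theta1, theta2) = min(|dtheta|, 2*pi - |dtheta|)."""
    d = abs(theta1 - theta2) % (2.0 * np.pi)
    return min(d, 2.0 * np.pi - d)

def so3_dist(r1, r2):
    """d(r1, r2) = min(|r1 - r2|, |r1 + r2|); r and -r encode the same rotation."""
    r1, r2 = np.asarray(r1, dtype=float), np.asarray(r2, dtype=float)
    return min(np.linalg.norm(r1 - r2), np.linalg.norm(r1 + r2))
```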

Dubins curves Dubins car: constant velocity, and steering ϕ ∈ [−Φ, Φ]. Neglecting obstacles, there are only six types of trajectories that connect any two configurations in R^2 × S^1: {LRL, RLR, LSL, LSR, RSL, RSR}; annotating the durations of each phase: {L_α R_β L_γ, R_α L_β R_γ, L_α S_d L_γ, L_α S_d R_γ, R_α S_d L_γ, R_α S_d R_γ} with α ∈ [0, 2π), β ∈ (π, 2π), d ≥ 0 60/61


Dubins curves By testing all six types of trajectories for (q_1, q_2) we can define a Dubins metric for the RRT and use the Dubins curves as controls! Reeds-Shepp curves are an extension for cars that can also drive backwards (46 types of trajectories; a good metric for use in RRTs for cars). 61/61
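
As a sketch of how such a metric would plug into the RRT, here is nearest-neighbor selection with a Dubins distance; dubins_length(q_from, q_to) is a hypothetical callable returning the minimum length over the six path types, not a real library API. Note that the Dubins distance is not symmetric, so it is evaluated from the tree node towards the target:

```python
import numpy as np

def dubins_nearest(tree, q_target, dubins_length):
    """Return (index, configuration) of the tree node with the shortest Dubins path to q_target."""
    costs = [dubins_length(q, q_target) for q in tree]   # directed: node -> target
    i = int(np.argmin(costs))
    return i, tree[i]
```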