Incremental Path Planning


Incremental Path Planning: Dynamic and Incremental A*. Prof. Brian Williams (help from Ihsiang Shu). 16.412J/6.834J Cognitive Robotics, February 7th.

Outline. Review: Optimal Path Planning in Partially Known Environments. Continuous Optimal Path Planning: Dynamic A* (D*); Incremental A* (LPA*).

[Zelinsky] 1. Generate a global path plan from the initial map. 2. Repeat until the goal is reached or failure: Execute the next step in the current global path plan. Update the map based on sensors. If the map changed, generate a new global path plan from the map.
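A minimal sketch of this replan-from-scratch loop in Python; plan, sense, execute, and world_map.update are hypothetical stand-ins for a shortest-path planner and the robot interface, not functions from the lecture:

def navigate(world_map, start, goal, plan, sense, execute):
    # 1. Generate a global path plan from the initial map.
    state, path = start, plan(world_map, start, goal)
    # 2. Repeat until the goal is reached or planning fails.
    while state != goal:
        if not path:
            return False                          # no path: failure
        state = execute(path.pop(0))              # execute next step of the current plan
        changed = world_map.update(sense(state))  # update map based on sensors
        if changed:                               # map changed: generate a new global path
            path = plan(world_map, state, goal)
    return True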

Compute Optimal Path. [Grid world, rows top to bottom: J M N O / E I L G / B D H K / S A C F; start S (bottom left), goal G.]

Begin Executing Optimal Path. [Grid figure with h-values and backpointers.] The robot moves along backpointers towards the goal and uses its sensors to detect discrepancies along the way.

Obstacle Encountered! [Grid figure.] At state A, the robot discovers that the edge from D to H is blocked (its cost is raised to a very large value). Update the map and reinvoke the planner.

Continue Path Execution. [Grid figure.] A's previous path is still optimal. Continue moving the robot along backpointers.

Second Obstacle, Replan! [Grid figure.] At C the robot discovers blocked edges from C to F and from C to H (their costs are raised to a very large value). Update the map and reinvoke the planner.

Path Execution Achieves Goal. [Grid figure.] Follow the backpointers to the goal. No further discrepancies detected; goal achieved!

Outline. Review: Optimal Path Planning in Partially Known Environments. Continuous Optimal Path Planning: Dynamic A* (D*); Incremental A* (LPA*).

What is Continuous Optimal Path Planning? Supports search as a repetitive online process. Exploits similarities between a series of searches to solve much faster than solving each search from scratch. Reuses the identical parts of the previous search tree, while updating the differences. Solutions are guaranteed to be optimal. On the first search, it behaves like the traditional algorithms: D* behaves exactly like Dijkstra's, and Incremental A* behaves exactly like A*.

Dynamic A* (aka D*) [Stentz, 94]. 1. Generate a global path plan from the initial map. 2. Repeat until the Goal is reached, or failure: Execute the next step of the current global path plan. Update the map based on sensor information. Incrementally update the global path plan from the map changes. Result: orders of magnitude speedup relative to a non-incremental path planner.

Map and Path Concepts. c(X,Y): cost to move from Y to X; c(X,Y) is undefined if the move is disallowed. Neighbors(X): any Y such that c(X,Y) or c(Y,X) is defined. o(G,X): optimal path cost to the Goal from X. h(G,X): estimate of the optimal path cost to the goal from X. b(X) = Y: backpointer from X to Y; Y is the first state on the path from X to G.
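To make the notation concrete, here is a tiny illustrative map fragment in Python (the states and costs are made up, not the lecture's grid); the D* sketch later in these notes uses the same dictionary layout:

# Illustrative three-state fragment X -> Y -> G (hypothetical values).
c = {("X", "Y"): 1, ("Y", "X"): 1, ("Y", "G"): 1, ("G", "Y"): 1}  # c(X,Y): arc costs; omit disallowed moves
neighbors = {"X": ["Y"], "Y": ["X", "G"], "G": ["Y"]}             # Neighbors(X)
h = {"G": 0, "Y": 1, "X": 2}                                      # h(G,X): estimated path cost to the goal
b = {"X": "Y", "Y": "G"}                                          # b(X) = Y: first state on the path from X to G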

D* Search Concepts. Open list: states whose estimates are to be propagated to other states. States on the Open list are tagged OPEN and sorted by the key function k. State tag t(X): NEW (has no estimate h yet), OPEN (estimate needs to be propagated), CLOSED (estimate has been propagated).

D* Fundamental Search Concepts. k(G,X): key function; the minimum of h(G,X) before modification and all values assumed by h(G,X) since X was placed on the Open list. Lowered state: k(G,X) = current h(G,X); propagate the decrease to descendants and other nodes. Raised state: k(G,X) < current h(G,X); propagate the increase to descendants and other nodes, and try to find alternate shorter paths.

Running D* the First Time on a Graph. Initially, mark G OPEN and queue it; mark all other states NEW. Run Process-State on the queue until the path is found or the queue is empty. When an edge cost c(X,Y) changes: if X is marked CLOSED, then update h(X) and mark X OPEN, queued with key h(X).

Use D* to Compute Initial Path. [Grid figure.] All states are initially tagged NEW (no cost determined yet).

Use D* to Compute Initial Path. [Grid figure; the Open list contains G.] (Process-State, lines 8-13.) Add the Goal node to the Open list, then process the list until the robot's current state is CLOSED.

Process-State: New or Lowered State. Remove from the Open list the state X with the lowest k. If X is a new/lowered state, its path cost is optimal! Then propagate to each neighbor Y: If Y is NEW, give it an initial path cost and propagate. If Y is a descendant of X, propagate any change. Else, if X can lower Y's path cost, then do so and propagate.

Use D* to Compute Initial Path. [Grid figure and Open list.] (Process-State, lines 8-13.) Add the new neighbors of G onto the Open list and create backpointers to G.

Use D* to Compute Initial Path. [Grid figure and Open list.] (Process-State, lines 8-13.) Add the new neighbors of K onto the Open list and create backpointers.

Use D* to Compute Initial Path. [Grid figure and Open list.] (Process-State, lines 8-13.) Add the new neighbors of L, then of O, onto the Open list and create backpointers.

Use D* to Compute Initial Path. [Sequence of grid and Open-list snapshots.] Continue processing until the current state S is CLOSED.

D* Completed Initial Path. [Grid figure with final h-values and backpointers.] Done: the current state S is CLOSED, and the Open list is empty.

Begin Executing Optimal Path. [Grid figure.] The robot moves along backpointers towards the goal and uses its sensors to detect discrepancies along the way.

Obstacle Encountered! [Grid figure.] At state A, the robot discovers that the edge from D to H is blocked (its cost is raised to a very large value). Update the map and reinvoke D*.

Running D* After an Edge Cost Change. When an edge cost c(X,Y) changes: if X is marked CLOSED, then update h(X) and mark X OPEN, queued with key equal to the new h(X). Run Process-State on the queue until the path to the current state is shown to be optimal, or the Open list is empty.

D* Update From First Obstacle. [Grid figure; the Open list contains H.] (Modify-Cost.) Propagate changes starting at H.

D* Update From First Obstacle. [Grid figure; Open list.] (Process-State, lines 8-13.) Raise the cost of H's descendant D, and propagate.

Process-State: Raised State. If X is a raised state, its cost might be suboptimal. Try reducing the cost of X using an optimal neighbor Y, one with h(Y) ≤ [h(X) before it was raised]. Propagate X's cost to each neighbor Y: If Y is NEW, then give it an initial path cost and propagate. If Y is a descendant of X, then propagate ANY change. If X can lower Y's path cost, postpone: queue X to propagate when it is optimal (reaches its current h(X)). If Y can lower X's path cost, and Y is suboptimal, postpone: queue Y to propagate when it is optimal (reaches its current h(Y)). Postponement avoids creating cycles.

D* Update From First Obstacle. [Grid figure; Open list.] (Process-State, lines 4-7.) D may not be optimal, so check its neighbors for a better path. Transitioning to I is better, and I's path is optimal, so update D. All neighbors of D then have consistent h-values; no further propagation is needed.

Continue Path Execution. [Grid figure.] A's path is optimal. Continue moving the robot along backpointers.

Second Obstacle! [Grid figure; the Open list contains F and H.] (Modify-Cost.) At C the robot discovers blocked edges from C to F and from C to H (their costs are raised to a very large value). Update the map and reinvoke D* until h(current position) is shown to be optimal.

D* Update From Second Obstacle. [Grid figure; Open list.] (Process-State, lines 8-13.) Processing F raises descendant C's cost and propagates; processing H does nothing. (Process-State, lines 4-7.) C may be suboptimal, so check its neighbors; there is a better path through A! However, A may itself be suboptimal, and updating now would create a loop. (Process-State, lines 15-18.) So don't change C's path to A (yet); instead, propagate the increase to A.

Recall Process-State: Raised State (above): a raised state first tries to lower its own cost through an optimal neighbor, and postpones updates that could create cycles.

D* Update From Second Obstacle. [Grid figure; Open list.] (Process-State, lines 4-7.) A may not be optimal, so check its neighbors for a better path. Transitioning to D is better, and D's path is optimal, so update A.

D* Update From Second Obstacle. [Grid figure; Open list.] (Process-State, lines 15-21.) A can improve neighbor C, so queue C.

Recall Process-State: New or Lowered State (above): a new or lowered state's path cost is optimal, so it is propagated to its neighbors.

D* Update From Second Obstacle. [Grid figure; Open list.] (Process-State, lines 8-13.) C is lowered to its optimal value; no neighbors are affected. The current state has been reached, so Process-State terminates. Bug? How does C get pointed to A?

Complete Path Execution. [Grid figure.] Follow the backpointers to the Goal. No further discrepancies detected; goal achieved!

D* Pseudo Code

Function: Process-State()
1:  X = Min-State()
2:  if X = NULL then return -1
3:  kold = Get-Kmin(); Delete(X)
4:  if kold < h(X) then
5:    for each neighbor Y of X:
6:      if h(Y) ≤ kold and h(X) > h(Y) + c(Y,X) then
7:        b(X) = Y; h(X) = h(Y) + c(Y,X)
8:  if kold = h(X) then
9:    for each neighbor Y of X:
10:     if t(Y) = NEW or
11:        (b(Y) = X and h(Y) ≠ h(X) + c(X,Y)) or
12:        (b(Y) ≠ X and h(Y) > h(X) + c(X,Y)) then
13:       b(Y) = X; Insert(Y, h(X) + c(X,Y))
14: else
15:   for each neighbor Y of X:
16:     if t(Y) = NEW or
17:        (b(Y) = X and h(Y) ≠ h(X) + c(X,Y)) then
18:       b(Y) = X; Insert(Y, h(X) + c(X,Y))
19:     else
20:       if b(Y) ≠ X and h(Y) > h(X) + c(X,Y) then
21:         Insert(X, h(X))
22:       else
23:         if b(Y) ≠ X and h(X) > h(Y) + c(Y,X) and
24:            t(Y) = CLOSED and h(Y) > kold then
25:           Insert(Y, h(Y))
26: return Get-Kmin()

Function: Modify-Cost(X, Y, cval)
1: c(X,Y) = cval
2: if t(X) = CLOSED then Insert(X, h(X))
3: return Get-Kmin()

Function: Insert(X, hnew)
1: if t(X) = NEW then k(X) = hnew
2: else if t(X) = OPEN then k(X) = min(k(X), hnew)
3: else k(X) = min(h(X), hnew)
4: h(X) = hnew; t(X) = OPEN
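The following is a compact Python sketch of these three routines (not from the slides). It assumes a neighbors dict and a symmetric cost dict with an entry for every neighbor pair in both directions, and it keeps the Open list as a simple tag/key scan rather than a heap for clarity. Names such as DStar, insert, and process_state are illustrative.

from math import inf

NEW, OPEN, CLOSED = "NEW", "OPEN", "CLOSED"    # state tags t(X)

class DStar:
    """Sketch of a D*-style search over a graph given as
    neighbors[X] -> list of Y and c[(X, Y)] -> arc cost."""

    def __init__(self, neighbors, c):
        self.neighbors, self.c = neighbors, c
        self.h, self.k, self.b = {}, {}, {}    # cost estimates, keys, backpointers
        self.t = {}                            # tags (default NEW)

    def tag(self, x):
        return self.t.get(x, NEW)

    def insert(self, x, h_new):
        if self.tag(x) == NEW:
            self.k[x] = h_new
        elif self.tag(x) == OPEN:
            self.k[x] = min(self.k[x], h_new)
        else:                                  # CLOSED
            self.k[x] = min(self.h[x], h_new)
        self.h[x] = h_new
        self.t[x] = OPEN

    def get_kmin(self):
        opens = [x for x in self.t if self.t[x] == OPEN]
        return min(self.k[x] for x in opens) if opens else -1

    def modify_cost(self, x, y, cval):
        self.c[(x, y)] = self.c[(y, x)] = cval     # symmetric-grid assumption
        if self.tag(x) == CLOSED:
            self.insert(x, self.h[x])
        return self.get_kmin()

    def process_state(self):
        opens = [s for s in self.t if self.t[s] == OPEN]
        if not opens:
            return -1
        x = min(opens, key=lambda s: self.k[s])
        k_old = self.k[x]
        self.t[x] = CLOSED
        h, c, b = self.h, self.c, self.b
        if k_old < h[x]:                       # RAISED: try to lower h(X) via an optimal neighbor
            for y in self.neighbors[x]:
                if y in h and h[y] <= k_old and h[x] > h[y] + c[(y, x)]:
                    b[x], h[x] = y, h[y] + c[(y, x)]
        if k_old == h[x]:                      # NEW or LOWERED: h(X) is optimal, propagate it
            for y in self.neighbors[x]:
                if (self.tag(y) == NEW
                        or (b.get(y) == x and h[y] != h[x] + c[(x, y)])
                        or (b.get(y) != x and h.get(y, inf) > h[x] + c[(x, y)])):
                    b[y] = x
                    self.insert(y, h[x] + c[(x, y)])
        else:                                  # still RAISED: postpone updates that could form cycles
            for y in self.neighbors[x]:
                if (self.tag(y) == NEW
                        or (b.get(y) == x and h[y] != h[x] + c[(x, y)])):
                    b[y] = x
                    self.insert(y, h[x] + c[(x, y)])
                elif b.get(y) != x and h.get(y, inf) > h[x] + c[(x, y)]:
                    self.insert(x, h[x])       # X could lower Y: re-queue X until optimal
                elif (b.get(y) != x and h[x] > h.get(y, inf) + c[(y, x)]
                        and self.tag(y) == CLOSED and h[y] > k_old):
                    self.insert(y, h[y])       # Y could lower X once Y is optimal: re-queue Y
        return self.get_kmin()

For the initial run, call insert(G, 0) and then process_state() repeatedly until the robot's current state is CLOSED (or -1 is returned); after an observed edge change, call modify_cost and run process_state() again until the returned key is no smaller than h(current state).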

Outline. Review: Optimal Path Planning in Partially Known Environments. Continuous Optimal Path Planning: Dynamic A* (D*); Incremental A* (LPA*).

Incremental A* on Gridworld [Koenig and Likhachev]

Incremental A* Versus Other Searches

Incremental A* Versus Other Searches

Definitions: Graph and Costs. Succ(s): the set of successors of vertex s. Pred(s): the set of predecessors of vertex s. c(s, s'): the cost of moving from vertex s to vertex s'. g*(s): the true start distance, from s_start to vertex s. g(s): estimate of the start distance of vertex s. h(s): heuristic estimate of the goal distance of vertex s.

Definitions: Local Consistency & Update. rhs(s): one-step lookahead estimate for g(s): rhs(s) = 0 if s = s_start, otherwise rhs(s) = min over s' in Pred(s) of (g(s') + c(s', s)). Locally consistent: the g-value of a vertex equals its rhs-value; do nothing. Overconsistent: the g-value of a vertex is greater than its rhs-value; relax: set the g-value to the rhs-value. Underconsistent: the g-value of a vertex is less than its rhs-value; upper-bound: set the g-value to infinity.
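A small sketch of these definitions in Python, assuming g is a dict of current estimates, c a dict of edge costs, and pred a predecessor map (all illustrative names):

from math import inf

def rhs(s, s_start, g, c, pred):
    # One-step lookahead estimate of the start distance of s.
    if s == s_start:
        return 0
    return min((g.get(p, inf) + c[(p, s)] for p in pred.get(s, [])), default=inf)

def classify(s, s_start, g, c, pred):
    gv, r = g.get(s, inf), rhs(s, s_start, g, c, pred)
    if gv == r:
        return "locally consistent"    # do nothing
    return "overconsistent" if gv > r else "underconsistent"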

Incremental A* in Brief. 1) Initialize the start node with an estimate of 0 (its rhs-value) and enqueue it; all other estimates (g and rhs values) are initialized to INF. Only the start node is locally inconsistent. 2) Take the node with the smallest key from the priority queue, update its g-value, and add all of its successors to the priority queue. 3) Loop step 2 until either the goal is locally consistent or the priority queue becomes empty. 4) To find the shortest path, start with s = goal and work back to the start; the next s is the predecessor s' of s with the best g(s') + c(s', s). 5) Wait for edge costs to change; update the edge costs and add each changed edge's destination node to the queue. 6) Go to step 2.
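Step 4, extracting the shortest path once the goal is locally consistent, can be sketched as follows (same illustrative g, c, pred as above):

from math import inf

def extract_path(s_start, s_goal, g, c, pred):
    # Walk back from the goal, always stepping to the predecessor s'
    # minimizing g(s') + c(s', s), then reverse the result.
    path, s = [s_goal], s_goal
    while s != s_start:
        s = min(pred[s], key=lambda p: g.get(p, inf) + c[(p, s)])
        path.append(s)
    return list(reversed(path))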

Definitions: Incremental A* Search Queue. U: priority queue containing all locally inconsistent vertices, ordered by key k(s). k(s) = [k1(s); k2(s)], where k1(s) = min(g(s), rhs(s)) + h(s) and k2(s) = min(g(s), rhs(s)). k1(s) corresponds directly to f(s) of A*: the first time through Incremental A*, min(g(s), rhs(s)) equals the true g*(s). k2(s) breaks ties using the minimum start distance.
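As code, the key computation is just the following (illustrative; g and rhs as dicts, h a heuristic function):

from math import inf

def calculate_key(s, g, rhs, h):
    m = min(g.get(s, inf), rhs.get(s, inf))
    return (m + h(s), m)    # [k1(s); k2(s)], compared lexicographically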

First Call To Incremental A* Search. [Graph figure: vertices S, A, B, C, D, G with edge costs; per-step g/rhs tables and priority-queue contents U, with entries (X, [k1(s); k2(s)]).] Initialize all values and add the start to the queue; only the start is locally inconsistent. Check local consistency of the smallest node in the queue and update its g-value, then update its successor nodes: the changed successors A and B are added to the priority queue. Check local consistency of the smallest key, A, in the priority queue; update its changed successors C and D and add them to the queue. Check local consistency of the smallest key, C; node C has no successors, so nothing is added to the queue. Check local consistency of the smallest key, D; update its successor G and add it to the queue. Check local consistency of the smallest key, G: the goal is locally consistent, hence the search terminates. The search does not necessarily make all vertices locally consistent, only those expanded. For the next search, we keep the rhs- and g-values so that we know which nodes not to traverse again.

Incremental A* After Edge Decrease. [Graph figure, g/rhs tables, and priority-queue contents.] We discover that the cost of edge S-B has decreased. The affected node, B, is updated and added to the queue. Check local consistency of the smallest key, B, in the queue; update the rhs-values of its successors D and G and add them to the queue. Check local consistency of D and expand its successors, then check local consistency of the smallest key, G. The goal is locally consistent, thus we are finished. At the end of a search, if g(s_goal) = INF, then there is no path from s_start to s_goal.

Incremental A* After Edge Increase. [Graph figure, g/rhs tables, and priority-queue contents.] We discover that the cost of edge B-D has increased. The affected node, D, is updated and added to the queue. Check local consistency of the smallest key, D, in the queue; its g-value is set back to INF and D is re-added to the queue. The rhs-values of its successor nodes C and G are already consistent, so they are not added to the queue. Check local consistency of D again and update it; its successors remain consistent.

Incremental A* Pseudo Code

procedure Initialize()
  U = ∅;
  for all s ∈ S: rhs(s) = g(s) = ∞;
  rhs(s_start) = 0;
  U.Insert(s_start, [h(s_start); 0]);

procedure Main()
  Initialize();
  forever:
    ComputeShortestPath();
    Wait for changes in edge costs;
    for all directed edges (u, v) with changed costs:
      Update the edge cost c(u, v);
      UpdateVertex(v);

Incremental A* Pseudo Code

procedure ComputeShortestPath()
  while (U.TopKey() < CalculateKey(s_goal) OR rhs(s_goal) ≠ g(s_goal)):
    u = U.Pop();
    if (g(u) > rhs(u)):
      g(u) = rhs(u);
      for all s ∈ Succ(u): UpdateVertex(s);
    else:
      g(u) = ∞;
      for all s ∈ Succ(u) ∪ {u}: UpdateVertex(s);

procedure UpdateVertex(u)
  if (u ≠ s_start): rhs(u) = min over s' ∈ Pred(u) of (g(s') + c(s', u));
  if (u ∈ U): U.Remove(u);
  if (g(u) ≠ rhs(u)): U.Insert(u, CalculateKey(u));

procedure CalculateKey(s)
  return [min(g(s), rhs(s)) + h(s); min(g(s), rhs(s))];
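A runnable Python sketch of this pseudocode, keeping g, rhs, and the queue between searches so that later calls reuse earlier work (class and method names are illustrative, not from the slides):

import heapq
from math import inf

class LPAStar:
    """Sketch of incremental A*. Graph: succ/pred dicts of vertex -> list of
    vertices, c dict of (u, v) -> edge cost, h(s) a goal-distance heuristic."""

    def __init__(self, succ, pred, c, h, s_start, s_goal):
        self.succ, self.pred, self.c, self.h = succ, pred, c, h
        self.s_start, self.s_goal = s_start, s_goal
        self.g, self.rhs = {}, {s_start: 0}
        self.U = [(self.key(s_start), s_start)]        # heap; stale entries skipped lazily

    def key(self, s):
        m = min(self.g.get(s, inf), self.rhs.get(s, inf))
        return (m + self.h(s), m)

    def update_vertex(self, u):
        if u != self.s_start:
            self.rhs[u] = min((self.g.get(p, inf) + self.c[(p, u)]
                               for p in self.pred.get(u, [])), default=inf)
        if self.g.get(u, inf) != self.rhs.get(u, inf):  # locally inconsistent: (re-)queue
            heapq.heappush(self.U, (self.key(u), u))

    def compute_shortest_path(self):
        g, rhs, goal = self.g, self.rhs, self.s_goal
        while self.U:
            k, u = self.U[0]
            if k != self.key(u) or g.get(u, inf) == rhs.get(u, inf):
                heapq.heappop(self.U)                   # stale or already-consistent entry
                continue
            if not (k < self.key(goal) or rhs.get(goal, inf) != g.get(goal, inf)):
                break                                   # goal is consistent with the best key
            heapq.heappop(self.U)
            if g.get(u, inf) > rhs.get(u, inf):         # overconsistent: relax
                g[u] = rhs[u]
                for s in self.succ.get(u, []):
                    self.update_vertex(s)
            else:                                       # underconsistent: reset, then repair
                g[u] = inf
                for s in self.succ.get(u, []) + [u]:
                    self.update_vertex(s)

    def edge_cost_changed(self, u, v, new_cost):
        self.c[(u, v)] = new_cost
        self.update_vertex(v)                           # only the edge's destination is affected

Main() then corresponds to: construct the object, call compute_shortest_path(), and execute the path; when edge costs change, call edge_cost_changed(u, v, new_cost) for each changed edge and call compute_shortest_path() again, reusing the stored g/rhs values and queue.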

Continuous Optimal Planning. 1. Generate a global path plan from the initial map. 2. Repeat until the Goal is reached, or failure: Execute the next step of the current global path plan. Update the map based on sensor information. Incrementally update the global path plan from the map changes. Result: orders of magnitude speedup relative to a non-incremental path planner.

Recap: Dynamic & Incremental A*. Supports search as a repetitive online process. Exploits similarities between a series of searches to solve much faster than solving each search from scratch. Reuses the identical parts of the previous search tree, while updating the differences. Solutions are guaranteed to be optimal. On the first search, it behaves like the traditional algorithms: D* behaves exactly like Dijkstra's, and Incremental A* behaves exactly like A*.