Incremental Path Planning


1 Incremental Path Planning: Dynamic and Incremental A*. Prof. Brian Williams (with help from Ihsiang Shu), 16.412J/6.834J Cognitive Robotics, February 7th.

2 Outline: Review of Optimal Path Planning in Partially Known Environments; Continuous Optimal Path Planning: Dynamic A* and Incremental A* (LPA*).

3 [Zelinsky, 92] 1. Generate a global path plan from the initial map. 2. Repeat until the goal is reached or failure: execute the next step of the current global path plan; update the map based on sensors; if the map changed, generate a new global path from the map.
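The sense-plan-act loop above can be sketched in code. The grid world, the Dijkstra-based planner, and the sensor model below are illustrative assumptions, not details from the slides; the point is only the structure: execute, sense, and replan from scratch whenever the map changes.

```python
import heapq

def plan(grid, start, goal):
    """Dijkstra over a 4-connected grid given as {cell: 0 free / 1 blocked};
    returns a list of cells from start to goal, or None if unreachable."""
    dist, prev = {start: 0}, {}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:                        # reconstruct the path
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return path[::-1]
        if d > dist[u]:
            continue                         # stale queue entry
        x, y = u
        for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if grid.get(v, 1) == 1:          # blocked, or off the map
                continue
            nd = d + 1
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return None

def navigate(true_grid, believed_grid, start, goal):
    """Zelinsky-style loop: plan, step, sense, and replan on map changes."""
    pos, path = start, plan(believed_grid, start, goal)
    while pos != goal and path:
        nxt = path[path.index(pos) + 1]
        if true_grid[nxt] == 1:              # sensor reveals an obstacle
            believed_grid[nxt] = 1           # update the map ...
            path = plan(believed_grid, pos, goal)   # ... and replan
        else:
            pos = nxt
    return pos
```

Each discrepancy triggers a full replan from scratch here; the rest of the lecture is about avoiding exactly that cost.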

4 Compute Optimal Path [Grid figure: 4x4 grid of states, rows J M N O / E I L G / B D H K / S A C F, with start S and goal G; edge costs lost in transcription.]

5 Begin Executing Optimal Path [Grid figure: states labeled with h-values and backpointers; numeric values lost in transcription.] The robot moves along backpointers toward the goal, using sensors to detect discrepancies along the way.

6 Obstacle Encountered! [Grid figure; h-values lost in transcription.] At state A, the robot discovers that the edge from D to H is blocked (modeled as a very large edge cost; the numeric value was lost in transcription). Update the map and reinvoke the planner.

7 Continue Path Execution [Grid figure; h-values lost in transcription.] A's previous path is still optimal; continue moving the robot along backpointers.

8 Second Obstacle, Replan! [Grid figure; h-values lost in transcription.] At C, the robot discovers blocked edges from C to F and from C to H (again modeled as very large edge costs). Update the map and reinvoke the planner.

9 Path Execution Achieves Goal [Grid figure; h-values lost in transcription.] Follow backpointers to the goal. No further discrepancies are detected; goal achieved!

10 Outline: Review of Optimal Path Planning in Partially Known Environments; Continuous Optimal Path Planning: Dynamic A* and Incremental A* (LPA*).

11 What is Continuous Optimal Path Planning? Supports search as a repetitive online process. Exploits similarities between a series of searches to solve much faster than solving each search from scratch. Reuses the identical parts of the previous search tree while updating the differences. Solutions are guaranteed to be optimal. On the first search it behaves like a traditional algorithm: D* behaves exactly like Dijkstra's, and Incremental A* (LPA*) behaves exactly like A*.

12 Dynamic A* (aka D*) [Stentz, 94] 1. Generate a global path plan from the initial map. 2. Repeat until the goal is reached or failure: execute the next step of the current global path plan; update the map based on sensor information; incrementally update the global path plan from the map changes. Yields a one-to-two order of magnitude speedup relative to a non-incremental path planner.

13 Map and Path Concepts. c(X,Y): cost to move from Y to X; c(X,Y) is undefined if the move is disallowed. Neighbors(X): any Y such that c(X,Y) or c(Y,X) is defined. o(G,X): optimal path cost from X to the goal. h(G,X): estimate of the optimal path cost from X to the goal. b(X) = Y: backpointer from X to Y; Y is the first state on the path from X to G.

14 D* Search Concepts. Open list: states whose estimates need to be propagated to other states; states on the list are tagged OPEN and sorted by the key function k. State tag t(X): NEW: has no estimate h yet; OPEN: estimate needs to be propagated; CLOSED: estimate has been propagated.

15 D* Fundamental Search Concepts. k(G,X): key function, the minimum of h(G,X) before modification and of all values assumed by h(G,X) since X was placed on the Open list. Lowered state: k(G,X) = current h(G,X); propagate the decrease to descendants and other nodes. Raised state: k(G,X) < current h(G,X); propagate the increase to descendants and other nodes, and try to find alternate, shorter paths.
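In code, the key bookkeeping and the lowered/raised classification look roughly like this. This is a sketch built from the Insert rule on slide 56; the NEW/OPEN/CLOSED tag names are spelled out because the transcription dropped them.

```python
def insert_key(k, h, h_new, tag):
    """Key of a state (re)inserted on the Open list with new estimate h_new:
    NEW states take h_new; OPEN states keep the minimum seen so far;
    CLOSED states compare their last propagated h with h_new."""
    if tag == 'NEW':
        return h_new
    if tag == 'OPEN':
        return min(k, h_new)
    return min(h, h_new)                 # tag == 'CLOSED'

def classify(k, h):
    """A state popped with key k is 'lowered' when k equals the current
    h-value (its cost is already optimal) and 'raised' when k < h."""
    return 'lowered' if k == h else 'raised'
```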

16 Running D* the First Time on a Graph. Initially mark G OPEN and queue it; mark all other states NEW. Run Process-State on the queue until the path is found or the queue is empty. When an edge cost c(X,Y) changes: if X is marked CLOSED, then update h(X) and mark X OPEN, queued with key h(X).

17 Use D* to Compute Initial Path [Grid figure.] States are initially tagged NEW (no cost determined yet).

18 Use D* to Compute Initial Path [Grid figure; Open list: (·,G), keys lost in transcription.] (Process-State lines 8-13 shown.) Add the goal node to the Open list; process the list until the robot's current state is CLOSED.

19 Process-State: New or Lowered State. Remove from the Open list the state X with the lowest k. If X is a new or lowered state, its path cost is optimal! Then propagate to each neighbor Y: if Y is NEW, give it an initial path cost and propagate; if Y is a descendant of X, propagate any change; else, if X can lower Y's path cost, then do so and propagate.

20 Use D* to Compute Initial Path [Grid figure; Open list: (·,G).] (Process-State lines 8-13 shown.) Add the new neighbors of G onto the Open list and create backpointers to G.

21 Use D* to Compute Initial Path [Grid figure; Open list: (·,K) (·,L) (·,O).] (Process-State lines 8-13 shown.) Add the new neighbors of G onto the Open list and create backpointers to G.

22 Use D* to Compute Initial Path [Grid figure; Open list: (·,L) (·,O) (·,F) (·,H).] (Process-State lines 8-13 shown.) Add the new neighbors of K onto the list and create backpointers.

23 Use D* to Compute Initial Path [Grid figure; Open list: (·,F) (·,H) (·,I) (·,N).] (Process-State lines 8-13 shown.) Add the new neighbors of L, then O, onto the list and create backpointers.

24 Use D* to Compute Initial Path [Grid figure; Open list: (·,H) (·,I) (·,N) (·,C).] Continue until the current state S is CLOSED.

25 Use D* to Compute Initial Path [Grid figure; Open list: (·,I) (·,N) (·,C) (·,D).] Continue until the current state S is CLOSED.

26 Use D* to Compute Initial Path [Grid figure; Open list: (·,N) (·,C) (·,D) (·,E) (·,M).] Continue until the current state S is CLOSED.

27 Use D* to Compute Initial Path [Grid figure; Open list: (·,C) (·,D) (·,E) (·,M).] Continue until the current state S is CLOSED.

28 Use D* to Compute Initial Path [Grid figure; Open list: (·,D) (·,E) (·,M) (·,A).] Continue until the current state S is CLOSED.

29 Use D* to Compute Initial Path [Grid figure; Open list: (·,E) (·,M) (·,A) (·,B).] Continue until the current state S is CLOSED.

30 Use D* to Compute Initial Path [Grid figure; Open list: (·,M) (·,A) (·,B) (·,J).] Continue until the current state S is CLOSED.

31 Use D* to Compute Initial Path [Grid figure; Open-list trace shown, keys lost in transcription.] Continue until the current state S is CLOSED.

32 Use D* to Compute Initial Path [Grid figure; Open list: (·,B) (·,J) (·,S).] Continue until the current state S is CLOSED.

33 Use D* to Compute Initial Path [Grid figure; Open list: (·,J) (·,S).] Continue until the current state S is CLOSED.

34 Use D* to Compute Initial Path [Grid figure; Open list: (·,S).] Continue until the current state S is CLOSED.

35 D* Completed Initial Path [Grid figure with final h-values; Open list now NULL.] Done: the current state S is CLOSED and the Open list is empty.

36 Begin Executing Optimal Path [Grid figure; h-values lost in transcription.] The robot moves along backpointers toward the goal, using sensors to detect discrepancies along the way.

37 Obstacle Encountered! [Grid figure.] At state A, the robot discovers that the edge from D to H is blocked (modeled as a very large edge cost). Update the map and reinvoke D*.

38 Running D* After an Edge Cost Change. When an edge cost c(Y,X) changes, if X is marked CLOSED, then update h(X) and mark X OPEN, queued with key equal to the new h(X). Run Process-State on the queue until the path to the current state is shown to be optimal, or the Open list is empty.

39 D* Update From First Obstacle [Grid figure; Open list: (·,H).] Function Modify-Cost(X, Y, cval): 1: c(X,Y) = cval; 2: if t(X) = CLOSED then Insert(X, h(X)); 3: return Get-Kmin(). Propagate changes starting at H.

40 D* Update From First Obstacle [Grid figure; Open list: (·,H) (·,D).] (Process-State lines 8-13 shown.) Raise the cost of H's descendant D, and propagate.

41 Process-State: Raised State. If X is a raised state, its cost might be suboptimal. Try reducing the cost of X using an optimal neighbor Y, one with h(Y) ≤ h(X)'s value before it was raised. Propagate X's cost to each neighbor Y: If Y is NEW, then give it an initial path cost and propagate. If Y is a descendant of X, then propagate ANY change. If X can lower Y's path cost, postpone: queue X to propagate once it is optimal (reaches current h(X)). If Y can lower X's path cost and Y is suboptimal, postpone: queue Y to propagate once it is optimal (reaches current h(Y)). Postponement avoids creating cycles.

42 D* Update From First Obstacle [Grid figure; Open list: (·,H) (·,D).] (Process-State lines 4-7 shown: the raised-state case.) D may not be optimal, so check its neighbors for a better path. Transitioning to I is better, and I's path is optimal, so update D.

43 D* Update From First Obstacle [Grid figure; Open list: (·,D), then NULL.] All neighbors of D have consistent h-values; no further propagation is needed.

44 Continue Path Execution [Grid figure.] A's path is optimal; continue moving the robot along backpointers.

45 Second Obstacle! [Grid figure; Open list: (·,F) (·,H).] (Modify-Cost shown, as on slide 39.) At C, the robot discovers blocked edges from C to F and from C to H (modeled as very large edge costs). Update the map and run D* until h(current position) is optimal.

46 D* Update From Second Obstacle [Grid figure; Open list: (·,F) (·,H) (·,C).] (Process-State lines 8-13 shown.) Processing F raises descendant C's cost and propagates; processing H does nothing.

47 D* Update From Second Obstacle [Grid figure with CLOSED/OPEN tags; Open list: (·,F) (·,H) (·,C).] (Process-State lines 4-7 shown.) C may be suboptimal, so check its neighbors: there is a better path through A. However, A may itself be suboptimal, and updating would create a loop!

48 D* Update From Second Obstacle [Grid figure; Open list: (·,F) (·,H) (·,C) (·,A).] (Process-State lines 15-18 shown.) Don't change C's path to A (yet); instead, propagate the increase to A.

49 Process-State: Raised State. If X is a raised state, its cost might be suboptimal. Try reducing the cost of X using an optimal neighbor Y, one with h(Y) ≤ h(X)'s value before it was raised. Propagate X's cost to each neighbor Y: If Y is NEW, then give it an initial path cost and propagate. If Y is a descendant of X, then propagate ANY change. If X can lower Y's path cost, postpone: queue X to propagate once it is optimal (reaches current h(X)). If Y can lower X's path cost and Y is suboptimal, postpone: queue Y to propagate once it is optimal (reaches current h(Y)). Postponement avoids creating cycles.

50 D* Update From Second Obstacle [Grid figure; Open list: (·,F) (·,H) (·,C) (·,A).] (Process-State lines 4-7 shown.) A may not be optimal, so check its neighbors for a better path. Transitioning to D is better, and D's path is optimal, so update A.

51 D* Update From Second Obstacle [Grid figure; Open list: (·,F) (·,H) (·,C) (·,A).] (Process-State lines 4-7 shown.) A may not be optimal, so check its neighbors for a better path. Transitioning to D is better, and D's path is optimal, so update A.

52 D* Update From Second Obstacle [Grid figure; Open list: (·,A) (·,C).] (Process-State lines 15-21 shown.) A can improve neighbor C, so queue C.

53 Process-State: New or Lowered State. Remove from the Open list the state X with the lowest k. If X is a new or lowered state, its path cost is optimal! Then propagate to each neighbor Y: if Y is NEW, give it an initial path cost and propagate; if Y is a descendant of X, propagate any change; else, if X can lower Y's path cost, then do so and propagate.

54 D* Update From Second Obstacle [Grid figure; Open list: (·,C).] (Process-State lines 8-13 shown.) C is lowered to its optimal cost; no neighbors are affected. The current state is reached, so Process-State terminates. Bug? How does C get pointed to A?

55 Complete Path Execution [Grid figure with final h-values.] Follow backpointers to the goal. No further discrepancies are detected; goal achieved!

56 D* Pseudocode

Function: Process-State()
1: X = Min-State()
2: if X = NULL then return -1
3: k_old = Get-Kmin(); Delete(X)
4: if k_old < h(X) then
5:   for each neighbor Y of X:
6:     if h(Y) ≤ k_old and h(X) > h(Y) + c(Y,X) then
7:       b(X) = Y; h(X) = h(Y) + c(Y,X)
8: if k_old = h(X) then
9:   for each neighbor Y of X:
10:    if t(Y) = NEW or
11:       (b(Y) = X and h(Y) ≠ h(X) + c(X,Y)) or
12:       (b(Y) ≠ X and h(Y) > h(X) + c(X,Y)) then
13:      b(Y) = X; Insert(Y, h(X) + c(X,Y))
14: else
15:   for each neighbor Y of X:
16:     if t(Y) = NEW or
17:        (b(Y) = X and h(Y) ≠ h(X) + c(X,Y)) then
18:       b(Y) = X; Insert(Y, h(X) + c(X,Y))
19:     else
20:       if b(Y) ≠ X and h(Y) > h(X) + c(X,Y) then
21:         Insert(X, h(X))
22:       else
23:         if b(Y) ≠ X and h(X) > h(Y) + c(Y,X) and
24:            t(Y) = CLOSED and h(Y) > k_old then
25:           Insert(Y, h(Y))
26: return Get-Kmin()

Function: Modify-Cost(X, Y, cval)
1: c(X,Y) = cval
2: if t(X) = CLOSED then Insert(X, h(X))
3: return Get-Kmin()

Function: Insert(X, h_new)
1: if t(X) = NEW then k(X) = h_new
2: else if t(X) = OPEN then k(X) = min(k(X), h_new)
3: else k(X) = min(h(X), h_new)
4: h(X) = h_new; t(X) = OPEN
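The Insert and Modify-Cost routines above translate almost line for line into Python. The sketch below uses plain dictionaries and a set for the Open list, and deliberately omits Process-State itself, so it is a fragment for studying the key maintenance rather than a complete D*.

```python
OPEN, CLOSED, NEW = 'OPEN', 'CLOSED', 'NEW'

class DStarState:
    """Bookkeeping for D*: h/k values, tags, edge costs, and the Open list."""
    def __init__(self, states):
        self.h = {s: 0.0 for s in states}
        self.k = {s: 0.0 for s in states}
        self.t = {s: NEW for s in states}
        self.c = {}                       # (X, Y) -> edge cost
        self.open_list = set()

    def insert(self, x, h_new):
        """Insert per slide 56: the key records the minimum h seen so far."""
        if self.t[x] == NEW:
            self.k[x] = h_new
        elif self.t[x] == OPEN:
            self.k[x] = min(self.k[x], h_new)
        else:                             # CLOSED
            self.k[x] = min(self.h[x], h_new)
        self.h[x] = h_new
        self.t[x] = OPEN
        self.open_list.add(x)

    def get_kmin(self):
        """Smallest key on the Open list, or -1 when the list is empty."""
        return min((self.k[x] for x in self.open_list), default=-1)

    def modify_cost(self, x, y, cval):
        """Modify-Cost per slide 56: record the new cost and re-open X."""
        self.c[(x, y)] = cval
        if self.t[x] == CLOSED:
            self.insert(x, self.h[x])
        return self.get_kmin()
```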

57 Outline: Review of Optimal Path Planning in Partially Known Environments; Continuous Optimal Path Planning: Dynamic A* and Incremental A* (LPA*).

58 Incremental A* on Gridworld [Koenig and Likhachev, 01]

59 Incremental A* Versus Other Searches

60 Incremental A* Versus Other Searches

61 Definitions: Graph and Costs. Succ(s): the set of successors of vertex s. Pred(s): the set of predecessors of vertex s. c(s,s'): the cost of moving from vertex s to vertex s'. g*(s): the true start distance from s_start to vertex s. g(s): an estimate of the start distance of vertex s. h(s): a heuristic estimate of the goal distance of vertex s.

62 Definitions: Local Consistency and Update. rhs(s): one-step lookahead estimate for g(s): 0 if s = s_start, else min over s' ∈ Pred(s) of g(s') + c(s',s). Locally consistent: the g-value of a vertex equals its rhs-value; do nothing. Overconsistent: the g-value of a vertex is greater than its rhs-value; relax: set the g-value to the rhs-value. Underconsistent: the g-value of a vertex is less than its rhs-value; upper-bound it: set the g-value to ∞.
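The rhs definition and the three consistency cases can be written down directly; the dictionary-and-function representation of the graph below is an assumption made for illustration.

```python
INF = float('inf')

def rhs(s, g, pred, c, s_start):
    """One-step lookahead value, as defined above: 0 at the start vertex,
    otherwise the best predecessor's g-value plus the edge cost."""
    if s == s_start:
        return 0
    return min((g[p] + c[(p, s)] for p in pred(s)), default=INF)

def consistency(s, g, pred, c, s_start):
    """Classify a vertex by comparing g(s) with rhs(s)."""
    r = rhs(s, g, pred, c, s_start)
    if g[s] == r:
        return 'locally consistent'
    return 'overconsistent' if g[s] > r else 'underconsistent'
```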

63 Incremental A* in Brief. 1) Initialize the start node with rhs = 0 and enqueue it; all other g- and rhs-values are initialized to ∞. Only the start node is locally inconsistent. 2) Take the node with the smallest key from the priority queue, update its g-value, and add all its successors to the priority queue. 3) Loop on step 2 until either the goal is locally consistent or the priority queue becomes empty. 4) To find the shortest path, start with s = goal and work back to the start; the next s is the predecessor s' of s with the best g(s') + c(s',s). 5) Wait for edge costs to change; update the edge costs and add each changed edge's destination node to the queue. 6) Go to step 2.
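Step 4's backward walk is short enough to show directly. The graph, costs, and g-values used in the test are invented for illustration.

```python
def extract_path(goal, start, pred, g, c):
    """Walk from the goal back to the start, at each step choosing the
    predecessor s' that minimizes g(s') + c(s', s), then reverse."""
    path, s = [goal], goal
    while s != start:
        s = min(pred(s), key=lambda p: g[p] + c[(p, s)])
        path.append(s)
    return path[::-1]
```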

64 Definitions: Incremental A* Search Queue. U: priority queue containing exactly the locally inconsistent vertices, ordered by the key k(s) = [k1(s); k2(s)], where k1(s) = min(g(s), rhs(s)) + h(s) and k2(s) = min(g(s), rhs(s)). k1(s) corresponds directly to f(s) of A*: the first time through IA*, min(g(s), rhs(s)) equals the true start distance g*(s). k2(s) breaks ties, preferring the minimum start distance.
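Since Python tuples compare lexicographically, the two-component key maps directly onto a tuple. A small sketch, with dictionaries as an assumed representation of g, rhs, and h:

```python
def calculate_key(s, g, rhs, h):
    """k(s) = [k1(s); k2(s)]: k1 mirrors A*'s f-value, k2 breaks ties
    toward smaller start distances; tuple comparison gives the ordering."""
    m = min(g[s], rhs[s])
    return (m + h[s], m)
```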

65 First Call to Incremental A* Search [Graph figure: states S, A, B, C, D, G with edge costs; priority queue U of (state, [k1; k2]) entries and a table of g/rhs values shown; most numbers lost in transcription.] Initialize all values and add the start to the queue; only the start is locally inconsistent.

66 First Call to Incremental A* Search [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] Check the local consistency of the smallest node in the queue, and update its g-value.

67 First Call to Incremental A* Search [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] Update the successor nodes.

68 First Call to Incremental A* Search [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] Add the changed successor node A to the priority queue.

69 First Call to Incremental A* Search [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] Update the change to successor node B.

70 First Call to Incremental A* Search [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] Add successor node B to the priority queue.

71 First Call to Incremental A* Search [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] Check the local consistency of the smallest key, A, in the priority queue.

72 First Call to Incremental A* Search [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] Update the change to successor node C.

73 First Call to Incremental A* Search [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] Add successor C to the priority queue.

74 First Call to Incremental A* Search [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] Update the change to successor node D.

75 First Call to Incremental A* Search [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] Add successor D to the priority queue.

76 First Call to Incremental A* Search [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] Check the local consistency of the smallest key, C, in the queue. Node C has no successors, so nothing is added to the queue.

77 First Call to Incremental A* Search [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] Check the local consistency of the smallest key, D; update successor G and add it to the queue.

78 First Call to Incremental A* Search [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] Check the local consistency of the smallest key, G.

79 First Call to Incremental A* Search [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] The goal is locally consistent, hence the search terminates.

80 First Call to Incremental A* Search [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] The search does not necessarily make all vertices locally consistent, only those it expands.

81 First Call to Incremental A* Search [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] For the next search we keep the rhs- and g-values, so that we know which nodes need not be traversed again.

82 Incremental A* After Edge Decrease [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] We discover that the cost of edge S-B has decreased.

83 Incremental A* After Edge Decrease [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] The affected node, B in this case, is updated and added to the queue.

84 Incremental A* After Edge Decrease [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] Check the local consistency of the smallest key, B, in the queue.

85 Incremental A* After Edge Decrease [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] Update the rhs-value of successor node D and add it to the queue.

86 Incremental A* After Edge Decrease [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] Update the rhs-value of successor node G and add it to the queue.

87 Incremental A* After Edge Decrease [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] Check the local consistency of D, and expand its successors.

88 Incremental A* After Edge Decrease [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] Check the local consistency of the smallest key, G, in the queue.

89 Incremental A* After Edge Decrease [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] The goal is locally consistent, thus we are finished. At the end of the search, if g(s_goal) = ∞, then there is no path from s_start to s_goal.

90 Incremental A* After Edge Increase [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] We discover that the cost of edge B-D has increased.

91 Incremental A* After Edge Increase [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] The affected node, D, is updated and added to the queue.

92 Incremental A* After Edge Increase [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] Check the local consistency of the smallest key, D, in the queue.

93 Incremental A* After Edge Increase [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] Set the g-value of node D to ∞ and add it to the queue.

94 Incremental A* After Edge Increase [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] The rhs-values of successor nodes C and G are consistent, so they are not added to the queue.

95 Incremental A* After Edge Increase [Figure: graph, queue, and g/rhs values; numbers lost in transcription.] Check the local consistency of D and update it; its successors are consistent.

96 Incremental A* Pseudocode

procedure Initialize()
{02} U := ∅;
{03} for all s ∈ S: rhs(s) = g(s) = ∞;
{04} rhs(s_start) = 0;
{05} U.Insert(s_start, [h(s_start); 0]);

procedure Main()
{17} Initialize();
{18} forever:
{19}   ComputeShortestPath();
{20}   Wait for changes in edge costs;
{21}   for all directed edges (u,v) with changed costs:
{22}     Update the edge cost c(u,v);
{23}     UpdateVertex(v);

97 Incremental A* Pseudocode

procedure CalculateKey(s)
{01} return [min(g(s), rhs(s)) + h(s); min(g(s), rhs(s))];

procedure UpdateVertex(u)
{06} if (u ≠ s_start) rhs(u) = min over s' ∈ Pred(u) of (g(s') + c(s',u));
{07} if (u ∈ U) U.Remove(u);
{08} if (g(u) ≠ rhs(u)) U.Insert(u, CalculateKey(u));

procedure ComputeShortestPath()
{09} while (U.TopKey() < CalculateKey(s_goal) OR rhs(s_goal) ≠ g(s_goal)):
{10}   u = U.Pop();
{11}   if (g(u) > rhs(u)):
{12}     g(u) = rhs(u);
{13}     for all s ∈ Succ(u): UpdateVertex(s);
{14}   else:
{15}     g(u) = ∞;
{16}     for all s ∈ Succ(u) ∪ {u}: UpdateVertex(s);
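The pseudocode above can be exercised end to end. The sketch below substitutes a lazy-deletion heap for U.Remove (a common implementation trick, not part of the slides), and the example graph, its costs, and the zero heuristic are invented for illustration.

```python
import collections
import heapq

INF = float('inf')

class LPAStar:
    """Lifelong Planning A* over a directed graph given as a cost dict
    {(u, v): cost}; h maps each vertex to an admissible goal estimate."""
    def __init__(self, cost, h, s_start, s_goal):
        self.c, self.h = cost, h
        self.start, self.goal = s_start, s_goal
        self.g = collections.defaultdict(lambda: INF)
        self.rhs = collections.defaultdict(lambda: INF)
        self.rhs[s_start] = 0
        self.U = [(self.key(s_start), s_start)]   # lazy-deletion heap

    def key(self, s):
        m = min(self.g[s], self.rhs[s])
        return (m + self.h[s], m)

    def pred(self, v):
        return [u for (u, w) in self.c if w == v]

    def succ(self, u):
        return [w for (v, w) in self.c if v == u]

    def update_vertex(self, u):
        if u != self.start:
            self.rhs[u] = min((self.g[p] + self.c[(p, u)]
                               for p in self.pred(u)), default=INF)
        if self.g[u] != self.rhs[u]:              # locally inconsistent
            heapq.heappush(self.U, (self.key(u), u))

    def compute_shortest_path(self):
        while self.U:
            k_old, u = self.U[0]
            goal_ok = self.rhs[self.goal] == self.g[self.goal]
            if goal_ok and not (k_old < self.key(self.goal)):
                break
            heapq.heappop(self.U)
            if k_old < self.key(u):               # stale entry: re-key it
                heapq.heappush(self.U, (self.key(u), u))
            elif self.g[u] > self.rhs[u]:         # overconsistent: relax
                self.g[u] = self.rhs[u]
                for s in self.succ(u):
                    self.update_vertex(s)
            elif self.g[u] < self.rhs[u]:         # underconsistent: raise
                self.g[u] = INF
                for s in self.succ(u) + [u]:
                    self.update_vertex(s)
            # else: u is already consistent; drop the leftover entry

    def change_cost(self, u, v, cval):
        """Record a changed edge and re-queue its destination vertex."""
        self.c[(u, v)] = cval
        self.update_vertex(v)
```

With a zero heuristic the first call behaves like Dijkstra's algorithm; after an edge change, only the affected portion of the search tree is reprocessed.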

98 Continuous Optimal Planning. 1. Generate a global path plan from the initial map. 2. Repeat until the goal is reached or failure: execute the next step of the current global path plan; update the map based on sensor information; incrementally update the global path plan from the map changes. Yields a one-to-two order of magnitude speedup relative to a non-incremental path planner.

99 Recap: Dynamic and Incremental A*. Supports search as a repetitive online process. Exploits similarities between a series of searches to solve much faster than solving each search from scratch. Reuses the identical parts of the previous search tree while updating the differences. Solutions are guaranteed to be optimal. On the first search it behaves like a traditional algorithm: D* behaves exactly like Dijkstra's, and Incremental A* (LPA*) behaves exactly like A*.


More information

Planning & Decision-making in Robotics Search Algorithms: Markov Property, Dependent vs. Independent variables, Dominance relationship

Planning & Decision-making in Robotics Search Algorithms: Markov Property, Dependent vs. Independent variables, Dominance relationship 6-782 Planning & Decision-making in Robotics Search Algorithms: Markov Property, Dependent vs. Independent variables, Dominance relationship Maxim Likhachev Robotics Institute Carnegie Mellon University

More information

Algorithms: Lecture 12. Chalmers University of Technology

Algorithms: Lecture 12. Chalmers University of Technology Algorithms: Lecture 1 Chalmers University of Technology Today s Topics Shortest Paths Network Flow Algorithms Shortest Path in a Graph Shortest Path Problem Shortest path network. Directed graph G = (V,

More information

The Algorithm (DSC): Mark all nodes as unscheduled WHILE there are unscheduled nodes DO In descending order of priority, sort list of free nodes In descending order of priority, sort list of partial free

More information

A walk over the shortest path: Dijkstra s Algorithm viewed as fixed-point computation

A walk over the shortest path: Dijkstra s Algorithm viewed as fixed-point computation A walk over the shortest path: Dijkstra s Algorithm viewed as fixed-point computation Jayadev Misra 1 Department of Computer Sciences, University of Texas at Austin, Austin, Texas 78712-1188, USA Abstract

More information

Design and Analysis of Algorithms

Design and Analysis of Algorithms Design and Analysis of Algorithms CSE 5311 Lecture 21 Single-Source Shortest Paths Junzhou Huang, Ph.D. Department of Computer Science and Engineering CSE5311 Design and Analysis of Algorithms 1 Single-Source

More information

Algorithm Design Strategies V

Algorithm Design Strategies V Algorithm Design Strategies V Joaquim Madeira Version 0.0 October 2016 U. Aveiro, October 2016 1 Overview The 0-1 Knapsack Problem Revisited The Fractional Knapsack Problem Greedy Algorithms Example Coin

More information

Searching in non-deterministic, partially observable and unknown environments

Searching in non-deterministic, partially observable and unknown environments Searching in non-deterministic, partially observable and unknown environments CE417: Introduction to Artificial Intelligence Sharif University of Technology Spring 2017 Soleymani Artificial Intelligence:

More information

A faster algorithm for the single source shortest path problem with few distinct positive lengths

A faster algorithm for the single source shortest path problem with few distinct positive lengths A faster algorithm for the single source shortest path problem with few distinct positive lengths J. B. Orlin, K. Madduri, K. Subramani, and M. Williamson Journal of Discrete Algorithms 8 (2010) 189 198.

More information

All-Pairs Shortest Paths

All-Pairs Shortest Paths All-Pairs Shortest Paths Version of October 28, 2016 Version of October 28, 2016 All-Pairs Shortest Paths 1 / 26 Outline Another example of dynamic programming Will see two different dynamic programming

More information

Single Source Shortest Paths

Single Source Shortest Paths CMPS 00 Fall 015 Single Source Shortest Paths Carola Wenk Slides courtesy of Charles Leiserson with changes and additions by Carola Wenk 1 Paths in graphs Consider a digraph G = (V, E) with an edge-weight

More information

CS 6301 PROGRAMMING AND DATA STRUCTURE II Dept of CSE/IT UNIT V GRAPHS

CS 6301 PROGRAMMING AND DATA STRUCTURE II Dept of CSE/IT UNIT V GRAPHS UNIT V GRAPHS Representation of Graphs Breadth-first search Depth-first search Topological sort Minimum Spanning Trees Kruskal and Prim algorithm Shortest path algorithm Dijkstra s algorithm Bellman-Ford

More information

Breadth First Search, Dijkstra s Algorithm for Shortest Paths

Breadth First Search, Dijkstra s Algorithm for Shortest Paths CS 374: Algorithms & Models of Computation, Spring 2017 Breadth First Search, Dijkstra s Algorithm for Shortest Paths Lecture 17 March 1, 2017 Chandra Chekuri (UIUC) CS374 1 Spring 2017 1 / 42 Part I Breadth

More information

Worst-Case Solution Quality Analysis When Not Re-Expanding Nodes in Best-First Search

Worst-Case Solution Quality Analysis When Not Re-Expanding Nodes in Best-First Search Worst-Case Solution Quality Analysis When Not Re-Expanding Nodes in Best-First Search Richard Valenzano University of Alberta valenzan@ualberta.ca Nathan R. Sturtevant University of Denver sturtevant@cs.du.edu

More information

COMP251: Bipartite graphs

COMP251: Bipartite graphs COMP251: Bipartite graphs Jérôme Waldispühl School of Computer Science McGill University Based on slides fom M. Langer (McGill) & P. Beame (UofW) Recap: Dijkstra s algorithm DIJKSTRA(V, E,w,s) INIT-SINGLE-SOURCE(V,s)

More information

A Simple Implementation Technique for Priority Search Queues

A Simple Implementation Technique for Priority Search Queues A Simple Implementation Technique for Priority Search Queues RALF HINZE Institute of Information and Computing Sciences Utrecht University Email: ralf@cs.uu.nl Homepage: http://www.cs.uu.nl/~ralf/ April,

More information

Embedded Systems 15. REVIEW: Aperiodic scheduling. C i J i 0 a i s i f i d i

Embedded Systems 15. REVIEW: Aperiodic scheduling. C i J i 0 a i s i f i d i Embedded Systems 15-1 - REVIEW: Aperiodic scheduling C i J i 0 a i s i f i d i Given: A set of non-periodic tasks {J 1,, J n } with arrival times a i, deadlines d i, computation times C i precedence constraints

More information

Parsing. Weighted Deductive Parsing. Laura Kallmeyer. Winter 2017/18. Heinrich-Heine-Universität Düsseldorf 1 / 26

Parsing. Weighted Deductive Parsing. Laura Kallmeyer. Winter 2017/18. Heinrich-Heine-Universität Düsseldorf 1 / 26 Parsing Weighted Deductive Parsing Laura Kallmeyer Heinrich-Heine-Universität Düsseldorf Winter 2017/18 1 / 26 Table of contents 1 Idea 2 Algorithm 3 CYK Example 4 Parsing 5 Left Corner Example 2 / 26

More information

CSE613: Parallel Programming, Spring 2012 Date: May 11. Final Exam. ( 11:15 AM 1:45 PM : 150 Minutes )

CSE613: Parallel Programming, Spring 2012 Date: May 11. Final Exam. ( 11:15 AM 1:45 PM : 150 Minutes ) CSE613: Parallel Programming, Spring 2012 Date: May 11 Final Exam ( 11:15 AM 1:45 PM : 150 Minutes ) This exam will account for either 10% or 20% of your overall grade depending on your relative performance

More information

CSE 591 Foundations of Algorithms Homework 4 Sample Solution Outlines. Problem 1

CSE 591 Foundations of Algorithms Homework 4 Sample Solution Outlines. Problem 1 CSE 591 Foundations of Algorithms Homework 4 Sample Solution Outlines Problem 1 (a) Consider the situation in the figure, every edge has the same weight and V = n = 2k + 2. Easy to check, every simple

More information

Learning in State-Space Reinforcement Learning CIS 32

Learning in State-Space Reinforcement Learning CIS 32 Learning in State-Space Reinforcement Learning CIS 32 Functionalia Syllabus Updated: MIDTERM and REVIEW moved up one day. MIDTERM: Everything through Evolutionary Agents. HW 2 Out - DUE Sunday before the

More information

CS 253: Algorithms. Chapter 24. Shortest Paths. Credit: Dr. George Bebis

CS 253: Algorithms. Chapter 24. Shortest Paths. Credit: Dr. George Bebis CS : Algorithms Chapter 4 Shortest Paths Credit: Dr. George Bebis Shortest Path Problems How can we find the shortest route between two points on a road map? Model the problem as a graph problem: Road

More information

What is SSA? each assignment to a variable is given a unique name all of the uses reached by that assignment are renamed

What is SSA? each assignment to a variable is given a unique name all of the uses reached by that assignment are renamed Another Form of Data-Flow Analysis Propagation of values for a variable reference, where is the value produced? for a variable definition, where is the value consumed? Possible answers reaching definitions,

More information

Faster than Weighted A*: An Optimistic Approach to Bounded Suboptimal Search

Faster than Weighted A*: An Optimistic Approach to Bounded Suboptimal Search Faster than Weighted A*: An Optimistic Approach to Bounded Suboptimal Search Jordan Thayer and Wheeler Ruml {jtd7, ruml} at cs.unh.edu Jordan Thayer (UNH) Optimistic Search 1 / 45 Motivation Motivation

More information

1 Matchings in Non-Bipartite Graphs

1 Matchings in Non-Bipartite Graphs CS 598CSC: Combinatorial Optimization Lecture date: Feb 9, 010 Instructor: Chandra Chekuri Scribe: Matthew Yancey 1 Matchings in Non-Bipartite Graphs We discuss matching in general undirected graphs. Given

More information

Algorithms Booklet. 1 Graph Searching. 1.1 Breadth First Search

Algorithms Booklet. 1 Graph Searching. 1.1 Breadth First Search Algorithms Booklet 2010 Note: in all algorithms, unless stated otherwise, the input graph G is represented by adjacency lists. 1 Graph Searching 1.1 Breadth First Search Algorithm 1: BFS(G, s) Input: G

More information

Discrete Wiskunde II. Lecture 5: Shortest Paths & Spanning Trees

Discrete Wiskunde II. Lecture 5: Shortest Paths & Spanning Trees , 2009 Lecture 5: Shortest Paths & Spanning Trees University of Twente m.uetz@utwente.nl wwwhome.math.utwente.nl/~uetzm/dw/ Shortest Path Problem "#$%&'%()*%"()$#+,&- Given directed "#$%&'()*+,%+('-*.#/'01234564'.*,'7+"-%/8',&'5"4'84%#3

More information

Distributed Optimization. Song Chong EE, KAIST

Distributed Optimization. Song Chong EE, KAIST Distributed Optimization Song Chong EE, KAIST songchong@kaist.edu Dynamic Programming for Path Planning A path-planning problem consists of a weighted directed graph with a set of n nodes N, directed links

More information

Summary. Local Search and cycle cut set. Local Search Strategies for CSP. Greedy Local Search. Local Search. and Cycle Cut Set. Strategies for CSP

Summary. Local Search and cycle cut set. Local Search Strategies for CSP. Greedy Local Search. Local Search. and Cycle Cut Set. Strategies for CSP Summary Greedy and cycle cut set Greedy... in short G.B. Shaw A life spent doing mistakes is not only more honorable, but more useful than a life spent doing nothing Reminder: Hill-climbing Algorithm function

More information

Algorithms and Data Structures 2016 Week 5 solutions (Tues 9th - Fri 12th February)

Algorithms and Data Structures 2016 Week 5 solutions (Tues 9th - Fri 12th February) Algorithms and Data Structures 016 Week 5 solutions (Tues 9th - Fri 1th February) 1. Draw the decision tree (under the assumption of all-distinct inputs) Quicksort for n = 3. answer: (of course you should

More information

Planning in Markov Decision Processes

Planning in Markov Decision Processes Carnegie Mellon School of Computer Science Deep Reinforcement Learning and Control Planning in Markov Decision Processes Lecture 3, CMU 10703 Katerina Fragkiadaki Markov Decision Process (MDP) A Markov

More information

Chapter 4. Greedy Algorithms. Slides by Kevin Wayne. Copyright 2005 Pearson-Addison Wesley. All rights reserved.

Chapter 4. Greedy Algorithms. Slides by Kevin Wayne. Copyright 2005 Pearson-Addison Wesley. All rights reserved. Chapter 4 Greedy Algorithms Slides by Kevin Wayne. Copyright Pearson-Addison Wesley. All rights reserved. 4 4.1 Interval Scheduling Interval Scheduling Interval scheduling. Job j starts at s j and finishes

More information

CS 188 Introduction to Fall 2007 Artificial Intelligence Midterm

CS 188 Introduction to Fall 2007 Artificial Intelligence Midterm NAME: SID#: Login: Sec: 1 CS 188 Introduction to Fall 2007 Artificial Intelligence Midterm You have 80 minutes. The exam is closed book, closed notes except a one-page crib sheet, basic calculators only.

More information

Final. Introduction to Artificial Intelligence. CS 188 Spring You have approximately 2 hours and 50 minutes.

Final. Introduction to Artificial Intelligence. CS 188 Spring You have approximately 2 hours and 50 minutes. CS 188 Spring 2014 Introduction to Artificial Intelligence Final You have approximately 2 hours and 50 minutes. The exam is closed book, closed notes except your two-page crib sheet. Mark your answers

More information

CSE 4502/5717 Big Data Analytics Spring 2018; Homework 1 Solutions

CSE 4502/5717 Big Data Analytics Spring 2018; Homework 1 Solutions CSE 502/5717 Big Data Analytics Spring 2018; Homework 1 Solutions 1. Consider the following algorithm: for i := 1 to α n log e n do Pick a random j [1, n]; If a[j] = a[j + 1] or a[j] = a[j 1] then output:

More information

Pengju

Pengju Introduction to AI Chapter04 Beyond Classical Search Pengju Ren@IAIR Outline Steepest Descent (Hill-climbing) Simulated Annealing Evolutionary Computation Non-deterministic Actions And-OR search Partial

More information

Queues. Principles of Computer Science II. Basic features of Queues

Queues. Principles of Computer Science II. Basic features of Queues Queues Principles of Computer Science II Abstract Data Types Ioannis Chatzigiannakis Sapienza University of Rome Lecture 9 Queue is also an abstract data type or a linear data structure, in which the first

More information

Topic Contents. Factoring Methods. Unit 3: Factoring Methods. Finding the square root of a number

Topic Contents. Factoring Methods. Unit 3: Factoring Methods. Finding the square root of a number Topic Contents Factoring Methods Unit 3 The smallest divisor of an integer The GCD of two numbers Generating prime numbers Computing prime factors of an integer Generating pseudo random numbers Raising

More information

Analysis of Algorithms. Outline. Single Source Shortest Path. Andres Mendez-Vazquez. November 9, Notes. Notes

Analysis of Algorithms. Outline. Single Source Shortest Path. Andres Mendez-Vazquez. November 9, Notes. Notes Analysis of Algorithms Single Source Shortest Path Andres Mendez-Vazquez November 9, 01 1 / 108 Outline 1 Introduction Introduction and Similar Problems General Results Optimal Substructure Properties

More information

Heuristic Search Algorithms

Heuristic Search Algorithms CHAPTER 4 Heuristic Search Algorithms 59 4.1 HEURISTIC SEARCH AND SSP MDPS The methods we explored in the previous chapter have a serious practical drawback the amount of memory they require is proportional

More information

Data Flow Analysis. Lecture 6 ECS 240. ECS 240 Data Flow Analysis 1

Data Flow Analysis. Lecture 6 ECS 240. ECS 240 Data Flow Analysis 1 Data Flow Analysis Lecture 6 ECS 240 ECS 240 Data Flow Analysis 1 The Plan Introduce a few example analyses Generalize to see the underlying theory Discuss some more advanced issues ECS 240 Data Flow Analysis

More information

Faster Dynamic Controllability Checking for Simple Temporal Networks with Uncertainty

Faster Dynamic Controllability Checking for Simple Temporal Networks with Uncertainty Faster Dynamic Controllability Checking for Simple Temporal Networks with Uncertainty Massimo Cairo Department of Mathematics, University of Trento, Italy massimo.cairo@unitn.it Luke Hunsberger Computer

More information

Single Source Shortest Paths

Single Source Shortest Paths CMPS 00 Fall 017 Single Source Shortest Paths Carola Wenk Slides courtesy of Charles Leiserson with changes and additions by Carola Wenk Paths in graphs Consider a digraph G = (V, E) with an edge-weight

More information

Breadth-First Search of Graphs

Breadth-First Search of Graphs Breadth-First Search of Graphs Analysis of Algorithms Prepared by John Reif, Ph.D. Distinguished Professor of Computer Science Duke University Applications of Breadth-First Search of Graphs a) Single Source

More information

Data-Flow Analysis. Compiler Design CSE 504. Preliminaries

Data-Flow Analysis. Compiler Design CSE 504. Preliminaries Data-Flow Analysis Compiler Design CSE 504 1 Preliminaries 2 Live Variables 3 Data Flow Equations 4 Other Analyses Last modifled: Thu May 02 2013 at 09:01:11 EDT Version: 1.3 15:28:44 2015/01/25 Compiled

More information

Algorithm Design and Analysis

Algorithm Design and Analysis Algorithm Design and Analysis LECTURE 5 Greedy Algorithms Interval Scheduling Interval Partitioning Guest lecturer: Martin Furer Review In a DFS tree of an undirected graph, can there be an edge (u,v)

More information

CS 486: Applied Logic Lecture 7, February 11, Compactness. 7.1 Compactness why?

CS 486: Applied Logic Lecture 7, February 11, Compactness. 7.1 Compactness why? CS 486: Applied Logic Lecture 7, February 11, 2003 7 Compactness 7.1 Compactness why? So far, we have applied the tableau method to propositional formulas and proved that this method is sufficient and

More information

cs/ee/ids 143 Communication Networks

cs/ee/ids 143 Communication Networks cs/ee/ids 143 Communication Networks Chapter 5 Routing Text: Walrand & Parakh, 2010 Steven Low CMS, EE, Caltech Warning These notes are not self-contained, probably not understandable, unless you also

More information

CS 161: Design and Analysis of Algorithms

CS 161: Design and Analysis of Algorithms CS 161: Design and Analysis of Algorithms Greedy Algorithms 3: Minimum Spanning Trees/Scheduling Disjoint Sets, continued Analysis of Kruskal s Algorithm Interval Scheduling Disjoint Sets, Continued Each

More information

Multiprocessor Scheduling I: Partitioned Scheduling. LS 12, TU Dortmund

Multiprocessor Scheduling I: Partitioned Scheduling. LS 12, TU Dortmund Multiprocessor Scheduling I: Partitioned Scheduling Prof. Dr. Jian-Jia Chen LS 12, TU Dortmund 22/23, June, 2015 Prof. Dr. Jian-Jia Chen (LS 12, TU Dortmund) 1 / 47 Outline Introduction to Multiprocessor

More information

University of New Mexico Department of Computer Science. Final Examination. CS 362 Data Structures and Algorithms Spring, 2007

University of New Mexico Department of Computer Science. Final Examination. CS 362 Data Structures and Algorithms Spring, 2007 University of New Mexico Department of Computer Science Final Examination CS 362 Data Structures and Algorithms Spring, 2007 Name: Email: Print your name and email, neatly in the space provided above;

More information

Divide-and-Conquer Algorithms Part Two

Divide-and-Conquer Algorithms Part Two Divide-and-Conquer Algorithms Part Two Recap from Last Time Divide-and-Conquer Algorithms A divide-and-conquer algorithm is one that works as follows: (Divide) Split the input apart into multiple smaller

More information

Using preprocessing to speed up Brandes betweenness centrality algorithm

Using preprocessing to speed up Brandes betweenness centrality algorithm Using preprocessing to speed up Brandes betweenness centrality algorithm Steven Fleuren January 2018 BACHELOR THESIS Supervisor: Prof. Dr. R.H. Bisseling 2 1. Introduction In some applications of graph

More information

Minimax strategies, alpha beta pruning. Lirong Xia

Minimax strategies, alpha beta pruning. Lirong Xia Minimax strategies, alpha beta pruning Lirong Xia Reminder ØProject 1 due tonight Makes sure you DO NOT SEE ERROR: Summation of parsed points does not match ØProject 2 due in two weeks 2 How to find good

More information

Institute of Organic Chemistry, Polish Academy of Sciences, ul. Kasprzaka 44/52, Warsaw , Poland

Institute of Organic Chemistry, Polish Academy of Sciences, ul. Kasprzaka 44/52, Warsaw , Poland Electronic Supplementary Material (ESI) for Chemical Science. This journal is The Royal Society of Chemistry 2019 Supplementary Information for Manuscript titled Selection of cost-effective yet chemically

More information

Solving LP and MIP Models with Piecewise Linear Objective Functions

Solving LP and MIP Models with Piecewise Linear Objective Functions Solving LP and MIP Models with Piecewise Linear Obective Functions Zonghao Gu Gurobi Optimization Inc. Columbus, July 23, 2014 Overview } Introduction } Piecewise linear (PWL) function Convex and convex

More information

Dijkstra s Single Source Shortest Path Algorithm. Andreas Klappenecker

Dijkstra s Single Source Shortest Path Algorithm. Andreas Klappenecker Dijkstra s Single Source Shortest Path Algorithm Andreas Klappenecker Single Source Shortest Path Given: a directed or undirected graph G = (V,E) a source node s in V a weight function w: E -> R. Goal:

More information

Complexity Analysis of. Real-Time Reinforcement Learning. Applied to. December 1992 CMU-CS Carnegie Mellon University. Pittsburgh, PA 15213

Complexity Analysis of. Real-Time Reinforcement Learning. Applied to. December 1992 CMU-CS Carnegie Mellon University. Pittsburgh, PA 15213 Complexity Analysis of Real-Time Reinforcement Learning Applied to Finding Shortest Paths in Deterministic Domains Sven Koenig and Reid G. Simmons December 1992 CMU-CS-93-16 School of Computer Science

More information

Math Models of OR: Branch-and-Bound

Math Models of OR: Branch-and-Bound Math Models of OR: Branch-and-Bound John E. Mitchell Department of Mathematical Sciences RPI, Troy, NY 12180 USA November 2018 Mitchell Branch-and-Bound 1 / 15 Branch-and-Bound Outline 1 Branch-and-Bound

More information

Scout and NegaScout. Tsan-sheng Hsu.

Scout and NegaScout. Tsan-sheng Hsu. Scout and NegaScout Tsan-sheng Hsu tshsu@iis.sinica.edu.tw http://www.iis.sinica.edu.tw/~tshsu 1 Abstract It looks like alpha-beta pruning is the best we can do for an exact generic searching procedure.

More information

Search and Lookahead. Bernhard Nebel, Julien Hué, and Stefan Wölfl. June 4/6, 2012

Search and Lookahead. Bernhard Nebel, Julien Hué, and Stefan Wölfl. June 4/6, 2012 Search and Lookahead Bernhard Nebel, Julien Hué, and Stefan Wölfl Albert-Ludwigs-Universität Freiburg June 4/6, 2012 Search and Lookahead Enforcing consistency is one way of solving constraint networks:

More information

Slides credited from Hsueh-I Lu & Hsu-Chun Hsiao

Slides credited from Hsueh-I Lu & Hsu-Chun Hsiao Slides credited from Hsueh-I Lu & Hsu-Chun Hsiao Homework 3 released Due on 12/13 (Thur) 14:20 (one week only) Mini-HW 9 released Due on 12/13 (Thur) 14:20 Homework 4 released Due on 1/3 (Thur) 14:20 (four

More information

6.852: Distributed Algorithms Fall, Class 24

6.852: Distributed Algorithms Fall, Class 24 6.852: Distributed Algorithms Fall, 2009 Class 24 Today s plan Self-stabilization Self-stabilizing algorithms: Breadth-first spanning tree Mutual exclusion Composing self-stabilizing algorithms Making

More information

CMPS 6610 Fall 2018 Shortest Paths Carola Wenk

CMPS 6610 Fall 2018 Shortest Paths Carola Wenk CMPS 6610 Fall 018 Shortest Paths Carola Wenk Slides courtesy of Charles Leiserson with changes and additions by Carola Wenk Paths in graphs Consider a digraph G = (V, E) with an edge-weight function w

More information

A.I.: Beyond Classical Search

A.I.: Beyond Classical Search A.I.: Beyond Classical Search Random Sampling Trivial Algorithms Generate a state randomly Random Walk Randomly pick a neighbor of the current state Both algorithms asymptotically complete. Overview Previously

More information

ne a priority queue for the nodes of G; ile (! PQ.is empty( )) select u PQ with d(u) minimal and remove it; I Informatik 1 Kurt Mehlhorn

ne a priority queue for the nodes of G; ile (! PQ.is empty( )) select u PQ with d(u) minimal and remove it; I Informatik 1 Kurt Mehlhorn n-negative Edge Costs: optimal choice: u U with d(u) minimal. e use a priority queue (de nition on next slide) PQ to maintain the set of pairs, d(u)) ; u U } and obtain Dijkstra s Algorithm: ne a priority

More information

AI Programming CS S-06 Heuristic Search

AI Programming CS S-06 Heuristic Search AI Programming CS662-2013S-06 Heuristic Search David Galles Department of Computer Science University of San Francisco 06-0: Overview Heuristic Search - exploiting knowledge about the problem Heuristic

More information

Optimal Metric Planning with State Sets in Automata Representation

Optimal Metric Planning with State Sets in Automata Representation Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008) Optimal Metric Planning with State Sets in Automata Representation Björn Ulrich Borowsky and Stefan Edelkamp Fakultät für

More information

Fixed-Universe Successor : Van Emde Boas. Lecture 18

Fixed-Universe Successor : Van Emde Boas. Lecture 18 Fixed-Universe Successor : Van Emde Boas Lecture 18 Fixed-universe successor problem Goal: Maintain a dynamic subset S of size n of the universe U = {0, 1,, u 1} of size u subject to these operations:

More information

Time-Expanded vs Time-Dependent Models for Timetable Information

Time-Expanded vs Time-Dependent Models for Timetable Information Time-Expanded vs Time-Dependent Models for Timetable Information Evangelia Pirga Computer Technology Institute & University of Patras Frank Schulz University of Konstanz Dorothea Wagner University of Konstanz

More information

Introduction to Algorithms

Introduction to Algorithms Introduction to Algorithms 6.046J/18.401J LECTURE 14 Shortest Paths I Properties of shortest paths Dijkstra s algorithm Correctness Analysis Breadth-first search Prof. Charles E. Leiserson Paths in graphs

More information

Numerical Methods. V. Leclère May 15, x R n

Numerical Methods. V. Leclère May 15, x R n Numerical Methods V. Leclère May 15, 2018 1 Some optimization algorithms Consider the unconstrained optimization problem min f(x). (1) x R n A descent direction algorithm is an algorithm that construct

More information

Static Program Analysis

Static Program Analysis Static Program Analysis Xiangyu Zhang The slides are compiled from Alex Aiken s Michael D. Ernst s Sorin Lerner s A Scary Outline Type-based analysis Data-flow analysis Abstract interpretation Theorem

More information

Introduction to Algorithms

Introduction to Algorithms Introduction to Algorithms 6.046J/8.40J/SMA550 Lecture 7 Prof. Erik Demaine Paths in graphs Consider a digraph G = (V, E) with edge-weight function w : E R. The weight of path p = v v L v k is defined

More information

Contents Lecture 4. Greedy graph algorithms Dijkstra s algorithm Prim s algorithm Kruskal s algorithm Union-find data structure with path compression

Contents Lecture 4. Greedy graph algorithms Dijkstra s algorithm Prim s algorithm Kruskal s algorithm Union-find data structure with path compression Contents Lecture 4 Greedy graph algorithms Dijkstra s algorithm Prim s algorithm Kruskal s algorithm Union-find data structure with path compression Jonas Skeppstedt (jonasskeppstedt.net) Lecture 4 2018

More information

CS 410/584, Algorithm Design & Analysis, Lecture Notes 4

CS 410/584, Algorithm Design & Analysis, Lecture Notes 4 CS 0/58,, Biconnectivity Let G = (N,E) be a connected A node a N is an articulation point if there are v and w different from a such that every path from 0 9 8 3 5 7 6 David Maier Biconnected Component

More information

Embedded Systems - FS 2018

Embedded Systems - FS 2018 Institut für Technische Informatik und Kommunikationsnetze Prof. L. Thiele Embedded Systems - FS 2018 Sample solution to Exercise 3 Discussion Date: 11.4.2018 Aperiodic Scheduling Task 1: Earliest Deadline

More information

Problem solving and search. Chapter 3. Chapter 3 1

Problem solving and search. Chapter 3. Chapter 3 1 Problem solving and search Chapter 3 Chapter 3 1 eminders ssignment 0 due 5pm today ssignment 1 posted, due 2/9 Section 105 will move to 9-10am starting next week Chapter 3 2 Problem-solving agents Problem

More information

Chapter 4 Deliberation with Temporal Domain Models

Chapter 4 Deliberation with Temporal Domain Models Last update: April 10, 2018 Chapter 4 Deliberation with Temporal Domain Models Section 4.4: Constraint Management Section 4.5: Acting with Temporal Models Automated Planning and Acting Malik Ghallab, Dana

More information

Algorithm Design and Analysis

Algorithm Design and Analysis Algorithm Design and Analysis LECTURE 6 Greedy Algorithms Interval Scheduling Interval Partitioning Scheduling to Minimize Lateness Sofya Raskhodnikova S. Raskhodnikova; based on slides by E. Demaine,

More information

Data Structures in Java

Data Structures in Java Data Structures in Java Lecture 21: Introduction to NP-Completeness 12/9/2015 Daniel Bauer Algorithms and Problem Solving Purpose of algorithms: find solutions to problems. Data Structures provide ways

More information

A* Search. 1 Dijkstra Shortest Path

A* Search. 1 Dijkstra Shortest Path A* Search Consider the eight puzzle. There are eight tiles numbered 1 through 8 on a 3 by three grid with nine locations so that one location is left empty. We can move by sliding a tile adjacent to the

More information

PART III. Outline. Codes and Cryptography. Sources. Optimal Codes (I) Jorge L. Villar. MAMME, Fall 2015

PART III. Outline. Codes and Cryptography. Sources. Optimal Codes (I) Jorge L. Villar. MAMME, Fall 2015 Outline Codes and Cryptography 1 Information Sources and Optimal Codes 2 Building Optimal Codes: Huffman Codes MAMME, Fall 2015 3 Shannon Entropy and Mutual Information PART III Sources Information source:

More information

Lec 03 Entropy and Coding II Hoffman and Golomb Coding

Lec 03 Entropy and Coding II Hoffman and Golomb Coding CS/EE 5590 / ENG 40 Special Topics Multimedia Communication, Spring 207 Lec 03 Entropy and Coding II Hoffman and Golomb Coding Zhu Li Z. Li Multimedia Communciation, 207 Spring p. Outline Lecture 02 ReCap

More information

CMSC 631 Program Analysis and Understanding. Spring Data Flow Analysis

CMSC 631 Program Analysis and Understanding. Spring Data Flow Analysis CMSC 631 Program Analysis and Understanding Spring 2013 Data Flow Analysis Data Flow Analysis A framework for proving facts about programs Reasons about lots of little facts Little or no interaction between

More information

1 Difference between grad and undergrad algorithms

1 Difference between grad and undergrad algorithms princeton univ. F 4 cos 52: Advanced Algorithm Design Lecture : Course Intro and Hashing Lecturer: Sanjeev Arora Scribe:Sanjeev Algorithms are integral to computer science and every computer scientist

More information