
Disclaimer: I use these notes as a guide rather than a comprehensive coverage of the topic. They are neither a substitute for attending the lectures nor for reading the assigned material.

"I may not have gone where I intended to go, but I think I have ended up where I needed to be." (Douglas Adams)

"All you need is the plan, the road map, and the courage to press on to your destination." (Earl Nightingale)

PREVIOUSLY ON

Class N-3
1. How would you describe AI (generally) to someone who is not us?
2. Game AI is really about the I of AI. Which is what? Supporting the P___ E___, which is all about making the game more enjoyable. Doing all the things that a(nother) player or designer would.
3. What are ways Game AI differs from Academic AI?
4. (Academic) AI in games vs. AI for games. What's that?
5. What is the complexity fallacy?
6. The essence of a game is a g___, a set of r___, and a ___?
7. What are three big components of game AI in-game?
8. What is a way game AI is used out-of-game?

Class N-2
1. What is attack "kung fu style"?
2. How do intentional mistakes help games?
3. What defines a graph?
4. What defines graph search?
5. Name 3 uninformed graph search algorithms.
6. What is a heuristic?
7. Admissible heuristics never ___-estimate.
8. Examples of using graphs for games?

Is a NavMesh good for all games? Not necessarily:
- 2D strategy game: a grid gives fast random access
- Among the best for robust pathfinding and terrain reasoning in 3D worlds
- Find the right solution for your problem

Class N-1
1. What are some benefits of path networks?
2. Cons of path networks?
3. What is the flood fill algorithm?
4. What is a simple approach to using path navigation nodes?
5. What is a navigation table?
6. How does the expanded geometry model work? Does it work with map-gen features?
7. What are the major wins of a Nav Mesh?
8. Would you calculate an optimal nav-mesh?

findnearestwaypoint()
- Most engines provide a rapid "nearest" function for objects
- Spatial partitioning with special data structures: quad-trees (2D), oct-trees (3D), k-d trees
- Binary space partitioning (BSP trees)
- Multi-resolution maps (hierarchical grids)
- The gain over all-pairs techniques depends on the number of agents/objects
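A minimal sketch of how such a lookup might be built with a k-d tree (one of the structures above); SciPy's cKDTree, the name find_nearest_waypoint, and the coordinates are assumptions, not part of the original slides.

import numpy as np
from scipy.spatial import cKDTree

waypoint_positions = np.array([[2.0, 3.0], [10.0, 1.0], [4.0, 8.0], [7.5, 7.5]])   # made-up waypoints
tree = cKDTree(waypoint_positions)           # build once, when the path network is loaded

def find_nearest_waypoint(pos):
    """Return (waypoint_index, distance) for the waypoint closest to pos."""
    dist, idx = tree.query(pos)              # O(log n) on average vs. O(n) brute force
    return idx, dist

print(find_nearest_waypoint((6.0, 6.0)))     # (3, ~2.12)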

Graphs, Search, & Path Planning Continued 2016-05-26 (and maybe kinematic motion; steering and flocking)

PATH NETWORK SEARCH

Precomputing Paths
Why, when:
- Faster than computation on the fly
- Especially with large maps or lots of agents
How:
- Use Dijkstra's algorithm to create lookup tables
- Lookup cost tables
- Registering search requests
What is the main problem with precomputed paths?

Dijkstra's algorithm
- A single-source, multi-target shortest-path algorithm for arbitrary directed graphs with non-negative weights
- Tells you the path from any one node to all other nodes

Given: G = (V, E), source

For each vertex v in G, set dist[v] to infinity
Set dist[source] = 0
Let Q = all vertices in G
While Q is not empty:
    Let u = vertex in Q with smallest distance value
    Remove u from Q
    For each neighbor v of u:
        d = dist[u] + distance(u, v)
        if d < dist[v] then:
            dist[v] = d
            parent[v] = u
Return dist[]
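The pseudocode above maps almost directly onto Python; this is a hedged sketch that uses a binary heap for the "smallest distance value" step. The edge list is reconstructed from the trace on the following slides and may not match the original figure exactly.

import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, weight), ...]}. Returns (dist, parent) dictionaries."""
    dist = {v: float('inf') for v in graph}
    parent = {}
    dist[source] = 0
    pq = [(0, source)]                       # (distance, vertex) priority queue
    while pq:
        d_u, u = heapq.heappop(pq)
        if d_u > dist[u]:
            continue                         # stale queue entry
        for v, w in graph[u]:
            d = d_u + w
            if d < dist[v]:
                dist[v] = d
                parent[v] = u
                heapq.heappush(pq, (d, v))
    return dist, parent

# Undirected 6-node graph chosen to be consistent with the worked example below:
edges = {(1, 2): 7, (1, 3): 9, (1, 6): 14, (2, 3): 10, (2, 4): 15,
         (3, 4): 11, (3, 6): 2, (4, 5): 6, (5, 6): 9}
graph = {v: [] for v in range(1, 7)}
for (u, v), w in edges.items():
    graph[u].append((v, w))
    graph[v].append((u, w))

dist, parent = dijkstra(graph, 1)
print(dist)      # {1: 0, 2: 7, 3: 9, 4: 20, 5: 20, 6: 11}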

Worked example: Source = 1 (six-node graph from the figure).

For each vertex v in G, set dist[v] to infinity; set dist[source] = 0; let Q = all vertices in G.
While Q is not empty: let u = vertex in Q with smallest distance value (node 1).
dist: 1=0, 2=inf, 3=inf, 4=inf, 5=inf, 6=inf
par:  (none yet)
Q = [1,2,3,4,5,6], u = 1

Remove u (node 1) from Q. For each neighbor v of u: d = dist[u] + distance(u, v); if d < dist[v] then dist[v] = d, parent[v] = u.
v = 2: dist[2] = 7, par[2] = 1
dist: 1=0, 2=7, 3=inf, 4=inf, 5=inf, 6=inf
par:  2=1
Q = [1,2,3,4,5,6], u = 1, v = 2

v = 3: dist[3] = 9, par[3] = 1
dist: 1=0, 2=7, 3=9, 4=inf, 5=inf, 6=inf
par:  2=1, 3=1
Q = [1,2,3,4,5,6], u = 1, v = 3

v = 6: dist[6] = 14, par[6] = 1
dist: 1=0, 2=7, 3=9, 4=inf, 5=inf, 6=14
par:  2=1, 3=1, 6=1
Q = [1,2,3,4,5,6], u = 1, v = 6

Let u = vertex in Q with smallest distance value (node 2).
Q = [2,3,4,5,6], u = 2

Remove u (node 2) from Q. For each neighbor v of u: d = dist[u] + distance(u, v); if d < dist[v] then dist[v] = d, parent[v] = u.
v = 3: no improvement over dist[3] = 9, so no update
Q = [2,3,4,5,6], u = 2, v = 3

v = 4: dist[4] = 22, par[4] = 2
dist: 1=0, 2=7, 3=9, 4=22, 5=inf, 6=14
par:  2=1, 3=1, 4=2, 6=1
Q = [2,3,4,5,6], u = 2, v = 4

Let u = vertex in Q with smallest distance value (node 3).
Q = [3,4,5,6], u = 3

Remove u (node 3) from Q. For each neighbor v of u:
v = 4: dist[4] = 20, par[4] = 3 (improves on 22)
dist: 1=0, 2=7, 3=9, 4=20, 5=inf, 6=14
par:  2=1, 3=1, 4=3, 6=1
Q = [3,4,5,6], u = 3, v = 4

v = 6: dist[6] = 11, par[6] = 3 (improves on 14)
dist: 1=0, 2=7, 3=9, 4=20, 5=inf, 6=11
par:  2=1, 3=1, 4=3, 6=3
Q = [3,4,5,6], u = 3, v = 6

Let u = vertex in Q with smallest distance value (node 6).
Q = [4,5,6], u = 6

Remove u (node 6) from Q. For each neighbor v of u:
v = 5: dist[5] = 20, par[5] = 6
dist: 1=0, 2=7, 3=9, 4=20, 5=20, 6=11
par:  2=1, 3=1, 4=3, 5=6, 6=3
Q = [4,5,6], u = 6, v = 5

Let u = vertex in Q with smallest distance value (node 4).
Q = [4,5], u = 4

Remove u (node 4) from Q. For each neighbor v of u:
v = 5: no improvement over dist[5] = 20, so no update
Q = [4,5], u = 4, v = 5

Let u = vertex in Q with smallest distance value (node 5).
Q = [5], u = 5

Q = [ ], u = 5
* We now know the shortest distance and the shortest path from node 1 to every node:
dist: 1=0, 2=7, 3=9, 4=20, 5=20, 6=11
par:  2=1, 3=1, 4=3, 5=6, 6=3

Floyd-Warshall algorithm
- All-pairs shortest-path algorithm
- Tells you the path from every node to every other node in a weighted graph
- Positive or negative edge weights, but no negative cycles (cycles whose edge weights sum to a negative value)
- Incrementally improves the estimates
- O(|V|^3)
[Use Dijkstra from each starting vertex when the graph is sparse and has non-negative edges]

Given: G = (V, E)

For each edge (u, v) do:
    dist[u][v] = weight of edge (u, v), or infinity if there is no edge
    next[u][v] = v
For k = 1 to |V| do:            (k: intermediate node)
    for i = 1 to |V| do:        (i: start node)
        for j = 1 to |V| do:    (j: end node)
            if dist[i][k] + dist[k][j] < dist[i][j] then:
                dist[i][j] = dist[i][k] + dist[k][j]
                next[i][j] = next[i][k]
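A hedged Python sketch of the pseudocode above; the function name floyd_warshall and the example edge weights (chosen to be consistent with the worked trace on the next slides) are assumptions.

def floyd_warshall(n, edges):
    """edges: {(u, v): weight} over vertices 0..n-1. Returns (dist, next_hop) matrices."""
    INF = float('inf')
    dist = [[INF] * n for _ in range(n)]     # as on the slide, dist[i][i] is left at infinity
    next_hop = [[None] * n for _ in range(n)]
    for (u, v), w in edges.items():
        dist[u][v] = w
        next_hop[u][v] = v
    for k in range(n):                       # intermediate node
        for i in range(n):                   # start node
            for j in range(n):               # end node
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    next_hop[i][j] = next_hop[i][k]
    return dist, next_hop

# Directed example consistent with the worked trace below:
edges = {(0, 2): 9, (2, 3): 1, (2, 4): 6, (3, 4): 3}
dist, next_hop = floyd_warshall(5, edges)
print(dist[0][4], next_hop[0][4])            # 13 2  (path 0 -> 2 -> 3 -> 4)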

Worked example (nodes 0-4; k = intermediate node, i = start node, j = end node):

Initially, dist[u][v] is the edge weight (or infinity) and next[u][v] = v; the loops start at k = 0, i = 0, j = 0.

k = 2, i = 0, j = 3: dist[0][2] + dist[2][3] = 9 + 1 < INF, so dist[0][3] = 10, next[0][3] = 2 ("0 -> 3: dist 10, goto 2").

k = 2, i = 0, j = 4: dist[0][2] + dist[2][4] = 9 + 6 < INF, so dist[0][4] = 15, next[0][4] = 2.

k = 3, i = 0, j = 4: dist[0][3] + dist[3][4] = 10 + 3 < 15, so dist[0][4] = 13, next[0][4] = next[0][3] = 2 ("0 -> 4: dist 13, goto 2").

k = 3, i = 2, j = 4: dist[2][3] + dist[3][4] = 1 + 3 < 6, so dist[2][4] = 4, next[2][4] = next[2][3] = 3 ("2 -> 4: dist 4, goto 3").

When the loops finish, dist[][] holds the all-pairs shortest distances (e.g., dist[0][3] = 10, dist[0][4] = 13, dist[2][4] = 4) and next[][] holds the first hop of each shortest path.

Reconstructing the path
Want to go from u to v:
if next[u][v] is empty then return null path
path = (u)
while u <> v do:
    u = next[u][v]
    path.append(u)
return path
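A small sketch of this reconstruction step, assuming the next_hop table produced by the Floyd-Warshall sketch above; the inlined table is just an example so the snippet runs on its own.

def reconstruct_path(next_hop, u, v):
    """Follow next_hop from u until v; returns the node list, or None if unreachable."""
    if next_hop[u][v] is None:
        return None                          # "empty" entry: no path
    path = [u]
    while u != v:
        u = next_hop[u][v]
        path.append(u)
    return path

# Example next_hop table matching the Floyd-Warshall sketch above:
next_hop = [[None, None, 2, 2, 2],
            [None, None, None, None, None],
            [None, None, None, 3, 3],
            [None, None, None, None, 4],
            [None, None, None, None, None]]
print(reconstruct_path(next_hop, 0, 4))      # [0, 2, 3, 4]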

Dynamic environments
- Terrain can change: jumpable? kickable? too big to jump/kick?
- Typically: destructible environments
- Path network edges can be eliminated
- Path network edges can be created

Heuristic Search
- Find the shortest path from a single source to a single destination
- Heuristic function: we have some knowledge about how far any given state is from the goal, in terms of operation cost
- For navigation: Euclidean distance, Manhattan distance
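For navigation, the two heuristics mentioned above are each a one-liner; a small sketch, assuming states are 2D (x, y) points.

import math

def euclidean(a, b):
    """Straight-line distance: a reasonable h(n) when movement is unconstrained."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def manhattan(a, b):
    """Grid distance: a reasonable h(n) on 4-connected grids with unit step costs."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

print(euclidean((0, 0), (3, 4)), manhattan((0, 0), (3, 4)))   # 5.0 7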

Heuristic values h(n) for the worked example below (goal = B):
A 366, B 0, C 160, D 242, E 161, F 176, G 77, H 151, I 226, L 244, M 241, N 234, O 380, P 100, R 193, S 253, T 329, U 80, V 199, Z 374

A* Search
- Single-source, single-target graph search
- Generalization of Dijkstra
- Guaranteed to return the optimal path if the heuristic is admissible; quick and accurate
- Evaluate each state: f(n) = g(n) + h(n)
- Open list: nodes that are known and waiting to be visited
- Closed list: nodes that have been visited

A*
Given: init, goal(s), ops
ops = { }
closed = nil
open = {init}
current = init
while (NOT isgoal(current) AND open <> nil):
    closed = closed + {current}
    open = open - {current} + (successors(current, ops) - closed)
    current = first(open)
end while
if isgoal(current) then reconstruct solution else fail
* Insert into open according to the evaluation function, so first(open) is the node with the lowest f
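A hedged Python sketch of the A* loop above, with the open list kept as a priority queue ordered by f(n) = g(n) + h(n); the graph format, the name a_star, and the toy example are assumptions.

import heapq
from itertools import count

def a_star(graph, h, start, goal):
    """graph: {node: [(neighbor, step_cost), ...]}; h: heuristic. Returns a path list or None."""
    tie = count()                                    # breaks ties without comparing nodes
    open_heap = [(h(start), 0, next(tie), start, None)]   # (f, g, tie, node, parent)
    closed = {}                                      # visited nodes -> parent
    best_g = {start: 0}
    while open_heap:
        f, g, _, current, parent = heapq.heappop(open_heap)
        if current in closed:
            continue                                 # already expanded via a better entry
        closed[current] = parent
        if current == goal:                          # reconstruct solution from parents
            path = [current]
            while closed[path[-1]] is not None:
                path.append(closed[path[-1]])
            return list(reversed(path))
        for neighbor, cost in graph[current]:
            g2 = g + cost
            if neighbor not in closed and g2 < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = g2
                heapq.heappush(open_heap, (g2 + h(neighbor), g2, next(tie), neighbor, current))
    return None                                      # open list exhausted: fail

# Tiny made-up usage (h = 0 degenerates to Dijkstra):
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1)], 'C': []}
print(a_star(graph, lambda n: 0, 'A', 'C'))          # ['A', 'B', 'C']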

Evaluation function: f(n) = g(n) + h(n). (h values as in the table above; start = A, goal = B; entries shown as g + h = f.)

Open: A(366)
Closed: (empty)

Open: S(140+253=393), T(118+329=447), Z(75+374=449)
Closed: A(366)

Open: R(220+193=413), F(239+176=415), T(118+329=447), Z(75+374=449), O(291+380=671)
Closed: S(393), A(366)

Open: F(239+176=415), P(317+100=417), T(118+329=447), Z(75+374=449), C(366+160=526), O(291+380=671)
Closed: R(413), S(393), A(366)

Backtrack!
Open: P(317+100=417), T(118+329=447), Z(75+374=449), B(450+0=450), C(366+160=526), O(291+380=671)
Closed: F(415), R(413), S(393), A(366)

Open: B(418+0=418), T(118+329=447), Z(75+374=449), C(366+160=526), O(291+380=671)
Closed: P(417), F(415), R(413), S(393), A(366)

Solution: A-S-R-P-B
Open: T(118+329=447), Z(75+374=449), C(366+160=526), O(291+380=671)
Closed: B(418), P(417), F(415), R(413), S(393), A(366)

A* Search
- A* is optimal, but only if you use an admissible heuristic
- An admissible heuristic is mathematically guaranteed never to overestimate the cost of reaching a goal
- What is an admissible heuristic for pathfinding on a path network?

Non-Admissible Heuristics
What happens if you have a non-admissible heuristic?
[Figure: small example graph with nodes A, B, C, D, F, G (Goal), annotated with edge costs and heuristic values]

Non-admissible heuristics
- Discourage the agent from being in particular states
- Encourage the agent to be in particular states

Hierarchical Path Planning
- Used to reduce the CPU overhead of graph search
- Plan with coarse-grained and fine-grained maps
- People think hierarchically (more efficient)
- We can prune a large number of states
- Example: planning a trip to NYC based on states, then individual roads

Hierarchical A*
http://www.cs.ualberta.ca/~mmueller/ps/hpastar.pdf
Within 1% of optimal path length, but up to 10 times faster

How high up do you go? As high as you can without start and end being in the same node.

Preprocessing:
1. Build clusters. Can be arbitrary.
2. Find transitions: a (possibly empty) set of obstacle-free locations between adjacent clusters.
3. Inter-edges: place a node on either side of each transition and link them (cost 1).
4. Intra-edges: search between nodes inside a cluster and record the cost.
* Can keep optimal intra-cluster paths, or discard them for memory savings.
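A rough illustration of steps 1-3 on a square grid (all names assumed): it carves fixed-size clusters and yields one inter-edge per pair of walkable cells facing each other across a horizontal cluster border. Real HPA* groups border cells into entrances, handles vertical borders too, and adds the intra-edges of step 4.

def horizontal_transitions(grid, cluster_size):
    """Yield (above, below) cell pairs that cross a horizontal cluster border (True = walkable)."""
    rows, cols = len(grid), len(grid[0])
    for border in range(cluster_size, rows, cluster_size):
        for c in range(cols):
            above, below = (border - 1, c), (border, c)
            if grid[above[0]][above[1]] and grid[below[0]][below[1]]:
                yield above, below           # inter-edge: link these two nodes with cost 1

grid = [[True, True, False, True],
        [True, True, True,  True],
        [True, False, True, True],
        [True, True, True,  True]]
print(list(horizontal_transitions(grid, 2)))
# [((1, 0), (2, 0)), ((1, 2), (2, 2)), ((1, 3), (2, 3))]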

Online search:
1. Start cluster: search within the cluster to its border.
2. Search across clusters to the goal cluster.
3. Goal cluster: search from the border to the goal.
4. Path smoothing.
* Really just adds the start and goal to the hierarchy graph.

Path Smoothing in Hierarchical A*

Sticky Situations
- Dynamic environments can ruin plans
- What do we do when an agent has been pushed back through a doorway that it has already visited?
- What do we do in fog-of-war situations?
- What if we have a moving target?

Real-Time A*
- Online search: execute as you search
- Because you can't look at a state until you get there
- You can't backtrack
- No open list
- Modified cost function f(): g(n) is the actual distance from n to the current state (instead of to the initial state)
- Use a hash table to keep track of h() for nodes you have visited (because you might visit them again)
- Pick the node with the lowest f-value from the immediate successors and execute the move immediately
- After you move, update the previous location: h(prev) = second-best f-value
- The second-best f-value represents the estimated cost of returning to the previous state (and then add g)
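A hedged sketch of a single Real-Time A* decision as described above; neighbors(), the base heuristic h0, and h_table are assumed names.

def rta_star_step(current, h_table, neighbors, h0):
    """One Real-Time A* decision; assumes current has at least one successor."""
    def h(s):
        return h_table.get(s, h0(s))                 # learned h if present, else base heuristic
    # f of each immediate successor: edge cost (the g part) plus its h estimate
    scored = sorted(((cost + h(s), s) for s, cost in neighbors(current)),
                    key=lambda fs: fs[0])
    best_f, best_state = scored[0]
    # Record the SECOND-best f as h(current): the estimated cost of coming back
    # here later and leaving by the next-best route.
    h_table[current] = scored[1][0] if len(scored) > 1 else float('inf')
    return best_state                                # execute this move immediately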

RTA* with lookahead
- At every node you can see some distance
- DFS, then back up the value (think of it as minimin with alpha pruning)
- Search out to the known limit
- Pick the best, then move
- Repeat, because something might change in the environment that changes our assessment:
  - Things we discover as our horizon moves
  - Things that change behind us

D* Lite
- Incremental search: replan often, but reuse the search space if possible
- In unknown terrain, assume anything you don't know is clear (optimistic)
- Perform A*, execute the plan until a discrepancy, then replan
- D* Lite achieves a 2x speedup over A* (when replanning)
- http://idm-lab.org/bib/abstracts/papers/aaai02b.pdf
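D* Lite itself is too involved for a slide-sized sketch, but the surrounding "plan, execute until discrepancy, replan" loop described above looks roughly like this (plan() and sense() are assumed callbacks); here replanning simply reruns the planner from scratch, which is exactly the repeated cost D* Lite avoids by reusing its previous search.

def navigate(start, goal, plan, sense, max_steps=10000):
    """plan(pos, goal, blocked) -> list of next cells (or None); sense(pos) -> set of blocked cells."""
    known_blocked = set()                  # optimism: anything unobserved is assumed clear
    pos, path = start, None
    for _ in range(max_steps):
        if pos == goal:
            return True
        known_blocked |= sense(pos)        # obstacles observed from the current position
        if not path or any(step in known_blocked for step in path):
            path = plan(pos, goal, known_blocked)    # discrepancy (or no plan yet): replan
            if not path:
                return False               # no route under current knowledge
        pos = path.pop(0)                  # execute the next step of the plan
    return False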

Omniscient optimal: given complete information

Optimistic optimal: assume empty for the parts you don't know.

Heuristic Search Recap: A*
- Can't precompute (dynamic environments, memory issues)
- Optimal when the heuristic is admissible (and assuming no changes)
- Replanning can be slow on really big maps
- Hierarchical A* is the same, but faster

Heuristic Search Recap: Real-Time A*
- Stumbling in the dark: 1-step lookahead
- Replan every step, but fast!
- Realistic? For a blind agent that knows nothing
- Optimal when completely blind

Heuristic Search Recap: Real-Time A* with lookahead
- Good for fog-of-war
- Replan every step, with fast bounded lookahead to the edge of known space
- Optimality depends on the lookahead

Heuristic Search Recap: D* Lite
- Assume everything is open/clear
- Replan when necessary
- Worst case: runs like Real-Time A*
- Best case: never replan
- Optimal including changes

See also:
- AI Game Programming Wisdom 2, Ch. 2
- Buckland, Ch. 8
- Millington, Ch. 4

Next time: (KINEMATIC) MOVEMENT, STEERING, FLOCKING, FORMATIONS