Informed Search. Day 3 of Search. Chap. 4, Russell & Norvig. Material in part from http://www.cs.cmu.edu/~awm/tutorials

Uninformed Search Complexity

N = Total number of states
B = Average number of successors (branching factor)
L = Length of path from START to GOAL with the smallest number of steps
Q = Average size of the priority queue
Lmax = Length of longest path from START to any state
C = Cost of the optimal path; ε = smallest transition cost (used in the UCS bounds)

Algorithm                       | Complete        | Optimal                              | Time                  | Space
BFS (Breadth First Search)      | Y               | Y, if all transitions have same cost | O(min(N, B^L))        | O(min(N, B^L))
BIBFS (Bi-directional BFS)      | Y               | Y, if all transitions have same cost | O(min(N, 2B^(L/2)))   | O(min(N, 2B^(L/2)))
UCS (Uniform Cost Search)       | Y, if cost > 0  | Y, if cost > 0                       | O(log(Q) · B^(C/ε))   | O(min(N, B^(C/ε)))
PCDFS (Path Check DFS)          | Y               | N                                    | O(B^Lmax)             | O(B · Lmax)
MEMDFS (Memorizing DFS)         | Y               | N                                    | O(min(N, B^Lmax))     | O(min(N, B^Lmax))
DFID (Iterative Deepening)      | Y               | Y, if all transitions have same cost | O(B^L)                | O(B · L)

Search Revisited

[Figure: states expanded so far around START, and states ready to be expanded (the "fringe"): A, B, C with values f(A), f(B), f(C), and further successors D, E, F.]

1. Store a value f(s) at each state s. Low f() means this state may lie on the solution path.
2. Choose the state with lowest f to expand next.
3. Insert its successors.

If f() is chosen carefully, we will eventually find the lowest-cost sequence.

Example: for UCS (Uniform Cost Search), f(A) = g(A) = total cost of the current shortest path from START to A. [Figure: START with g = 0; a successor A with g(A) = 5; further states B, C with values f(B), f(C).] Store states awaiting expansion in a priority queue for efficient retrieval of the minimum f. Optimal: guaranteed to find the lowest-cost sequence, but...
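The UCS behaviour described above can be sketched in a few lines. This is a minimal sketch, not the slides' code; the toy graph, state names, and `successors` callback are hypothetical illustrations.

```python
import heapq

def uniform_cost_search(start, goal, successors):
    """UCS: always expand the state with the smallest g (cost of the best
    known path from start). successors(s) yields (next_state, step_cost)."""
    frontier = [(0, start, [start])]            # priority queue ordered by g
    best_g = {start: 0}
    while frontier:
        g, s, path = heapq.heappop(frontier)
        if s == goal:
            return g, path
        if g > best_g.get(s, float("inf")):     # stale queue entry, skip it
            continue
        for s2, cost in successors(s):
            g2 = g + cost
            if g2 < best_g.get(s2, float("inf")):
                best_g[s2] = g2
                heapq.heappush(frontier, (g2, s2, path + [s2]))
    return None

# Hypothetical toy graph: the direct-looking route through A is costlier.
graph = {"START": [("A", 1), ("B", 4)],
         "A": [("GOAL", 5)],
         "B": [("GOAL", 1)],
         "GOAL": []}
print(uniform_cost_search("START", "GOAL", lambda s: graph[s]))
# -> (5, ['START', 'B', 'GOAL'])
```

Note how a state can be pushed more than once (GOAL enters at cost 6 via A, then at cost 5 via B); the stale-entry check makes the cheaper copy win.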

Problem: no guidance as to how far any given state is from the goal. Solution: design a function h() that gives us an estimate of the distance between a state and the goal. [Figure: START with successors A (h(A) = 3) and B (h(B) = 6); a state C at the GOAL with h(C) = 0.] Our best guess is that A is closer to GOAL than B, so maybe it is a more promising state to expand.

Heuristic Functions

h() is a heuristic function for the search problem: h(s) = estimate of the cost of the shortest path from s to GOAL. h() cannot be computed solely from the states and transitions in the current problem (if it could, we would already know the optimal path!). h() is based on external knowledge about the problem: informed search.

Questions:
1. Typical examples of h?
2. How to use h?
3. What are desirable/necessary properties of h?

Heuristic Functions: Example

[Figure: a map with START, GOAL, and two candidate states s and s'.] h(s) = straight-line (geometric) distance to GOAL. The straight-line distance is lower from s' than from s, so maybe s' has a better chance to be on the best path.

Heuristic Functions: Example

[Figure: an 8-puzzle instance showing a scrambled state s and the GOAL configuration 1 2 3 / 8 _ 4 / 7 6 5.] How could we define h(s)?

First Attempt: Greedy Best-First Search

Simplest use of the heuristic function: always select the node with smallest h() for expansion (i.e., f(s) = h(s)).

Initialize PQ
Insert START with value h(START) in PQ
While (PQ not empty and no goal state is in PQ)
  Pop the state s with the minimum value of h from PQ
  For all s' in succs(s)
    If s' is not already in PQ and has not already been visited
      Insert s' in PQ with value h(s')

Problem

[Figure: a chain START → A → B → C → GOAL with edge costs cost(START, A) = 2, cost(A, B) = 1, cost(B, C) = 1, cost(C, GOAL) = 2, plus a shortcut edge A → C with cost 4. Heuristic values: h(START) = 4, h(A) = 3, h(B) = 2, h(C) = 1, h(GOAL) = 0.]

What solution do we find in this case? Greedy search follows the smallest h and takes the A → C shortcut, returning a path of cost 8 instead of the cheaper path through B of cost 6. Greedy search is clearly not optimal, even though the heuristic function is non-stupid.
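The failure can be reproduced with a small greedy best-first implementation. A sketch, not the slides' code; the edge costs below are reconstructed from the A* trace on a later slide, so treat them as an assumption.

```python
import heapq

# Slide's example (reconstructed): edge costs and heuristic values.
edges = {"START": [("A", 2)],
         "A": [("B", 1), ("C", 4)],
         "B": [("C", 1)],
         "C": [("GOAL", 2)],
         "GOAL": []}
h = {"START": 4, "A": 3, "B": 2, "C": 1, "GOAL": 0}

def greedy_best_first(start, goal):
    """Expand the state with smallest h; path cost is tracked but ignored."""
    pq = [(h[start], start, [start], 0)]
    visited = set()
    while pq:
        _, s, path, g = heapq.heappop(pq)
        if s == goal:
            return path, g
        if s in visited:
            continue
        visited.add(s)
        for s2, c in edges[s]:
            if s2 not in visited:
                heapq.heappush(pq, (h[s2], s2, path + [s2], g + c))
    return None

path, cost = greedy_best_first("START", "GOAL")
print(path, cost)   # takes the A -> C shortcut: cost 8, while the optimum is 6
```

From A, greedy prefers C (h = 1) over B (h = 2) and never reconsiders, so the cheap detour through B is missed.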

Trying to Fix the Problem

[Figure: START with successors A and B; g(A) = 0, h(A) = 3, so f(A) = g(A) + h(A) = 3; g(B) = 5, h(B) = 6, so f(B) = g(B) + h(B) = 11.]

g(s) is the cost from START to s only; h(s) estimates the cost from s to GOAL. Key insight: g(s) + h(s) estimates the total cost of the cheapest path from START to GOAL going through s. This gives the A* algorithm.

Can A* Fix the Problem?

[Figure: the same example graph, with edge costs 2, 1, 1, 2 along START → A → B → C → GOAL, the shortcut A → C of cost 4, and h = 4, 3, 2, 1, 0.]

{(START, 4)}
{(A, 5)}          (f(A) = h(A) + g(A) = 3 + g(START) + cost(START, A) = 3 + 0 + 2)
{(B, 5), (C, 7)}  (f(C) = h(C) + g(C) = 1 + g(A) + cost(A, C) = 1 + 2 + 4)
{(C, 5)}          (f(C) = h(C) + g(C) = 1 + g(B) + cost(B, C) = 1 + 3 + 1)
{(GOAL, 6)}

Can A* Fix the Problem?

{(START, 4)}
{(A, 5)}          (f(A) = h(A) + g(A) = 3 + g(START) + cost(START, A) = 3 + 0 + 2)
{(B, 5), (C, 7)}  C is placed in the queue with backpointers {A, START}
{(C, 5)}          A lower value of f(C) is found (f(C) = 1 + g(B) + cost(B, C) = 1 + 3 + 1), with backpointers {B, A, START}
{(GOAL, 6)}

A* Termination Condition

[Figure: S (h = 8) with two routes to G: an upper route S → A → G with h(A) = 7 and cost(A, G) = 7, and a lower route S → B → C → D → G through B (h = 3), C, and D, with cost(D, G) = 7.]

Queue: {(B, 4), (A, 8)}
{(C, 4), (A, 8)}
{(D, 4), (A, 8)}
{(A, 8), (G, 10)}

Stop when GOAL is popped from the queue!

A* Termination Condition

Queue: {(B, 4), (A, 8)} → {(C, 4), (A, 8)} → {(D, 4), (A, 8)} → {(A, 8), (G, 10)}

We have encountered G before we have had a chance to visit the branch going through A. The problem is that at each step we use only an estimate of the path cost to the goal. Stop when GOAL is popped from the queue, not when it first enters it!

Revisiting States

[Figure: a graph with h(START) = 8, h(B) = 3, h(A) = 7, h(C) = 8, h(D) = 1, a cheap edge of cost 1/2 into C, and an edge of cost 7 on the path through D to GOAL.]

A state that was already in the queue is re-visited. How is its priority updated?

Revisiting States

[Figure: the same layout, but with h(C) = 2. Careful: this is a different example.]

A state that had already been expanded is re-visited.

A* Algorithm

Pop state s with lowest f(s) in queue
If s = GOAL, return SUCCESS
Else expand s:
  For all s' in succs(s):
    f' = g(s') + h(s') = g(s) + cost(s, s') + h(s')
    If (s' not seen before, OR s' previously expanded with f(s') > f', OR s' in PQ with f(s') > f')
      Promote/Insert s' with new value f' in PQ
      previous(s') ← s
    Else
      Ignore s' (because it has been visited and its current path cost f(s') is still the lowest path cost from START to s')
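The pseudocode above can be turned into a runnable sketch. The graph at the bottom is the deck's running example with reconstructed costs; on it, the pops reproduce the queue trace from the "Can A* Fix the Problem?" slides (f values 4, 5, 5, 5, 6).

```python
import heapq

def a_star(start, goal, successors, h):
    """A* as in the slide: pop lowest f = g + h; re-open a state whenever a
    cheaper path to it is found; `previous` stores the backpointers."""
    g = {start: 0}
    previous = {start: None}
    pq = [(h(start), start)]
    while pq:
        f, s = heapq.heappop(pq)
        if f > g[s] + h(s):          # stale entry: a cheaper path was queued later
            continue
        if s == goal:                # stop when GOAL is *popped*, not when queued
            path = []
            while s is not None:
                path.append(s)
                s = previous[s]
            return g[goal], path[::-1]
        for s2, cost in successors(s):
            g2 = g[s] + cost
            if g2 < g.get(s2, float("inf")):   # new state or cheaper path found
                g[s2] = g2
                previous[s2] = s
                heapq.heappush(pq, (g2 + h(s2), s2))
    return None

# The slides' running example (reconstructed costs and h values).
edges = {"START": [("A", 2)], "A": [("B", 1), ("C", 4)],
         "B": [("C", 1)], "C": [("GOAL", 2)], "GOAL": []}
hvals = {"START": 4, "A": 3, "B": 2, "C": 1, "GOAL": 0}
print(a_star("START", "GOAL", lambda s: edges[s], lambda s: hvals[s]))
# -> (6, ['START', 'A', 'B', 'C', 'GOAL'])
```

Instead of updating priorities in place, this sketch simply pushes a duplicate entry and discards stale ones on pop, a common way to fake a decrease-key with `heapq`.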

Under what Conditions is A* Optimal?

[Figure: START (h = 6) with an edge of cost 3 directly to GOAL, and an edge of cost 1 to A (h = 7), which reaches GOAL with cost 1.]

{(START, 6)}
{(GOAL, 3), (A, 8)}

Final path: {START, GOAL} with cost = 3, although the path through A costs only 2. Problem: this h() is a poor estimate of the path cost to the goal (h(A) = 7 overestimates the true remaining cost of 1).

Admissible Heuristics

Define h*(s) = the true minimal cost to the goal from s. h is admissible if h(s) <= h*(s) for all states s. In words: an admissible heuristic never overestimates the cost to the goal; it is an optimistic estimate of the cost to the goal. A* is guaranteed to find the optimal path if h is admissible.

Consistent (Monotonic) Heuristics

[Figure: states s and s' joined by an edge of cost(s, s'); h(s) and h(s') estimate the remaining distance to GOAL.]

h(s) <= h(s') + cost(s, s')

Consistent (Monotonic) Heuristics

h(s) <= h(s') + cost(s, s')

This sort of triangle inequality implies that the path cost f always increases along a path, and hence that each node needs to be expanded only once.

A* Algorithm (again)

Pop state s with lowest f(s) in queue
If s = GOAL, return SUCCESS
Else expand s:
  For all s' in succs(s):
    f' = g(s') + h(s') = g(s) + cost(s, s') + h(s')
    If (s' not seen before, OR s' previously expanded with f(s') > f', OR s' in PQ with f(s') > f')
      Promote/Insert s' with new value f' in PQ
      previous(s') ← s
    Else
      Ignore s' (because it has been visited and its current path cost f(s') is still the lowest path cost from START to s')

Examples

For the navigation problem: the length of the shortest path is at least the straight-line distance between s and GOAL, so Euclidean distance is an admissible heuristic.

What about the puzzle? [Figure: an 8-puzzle with a scrambled state s and the GOAL configuration 1 2 3 / 8 _ 4 / 7 6 5.] h(s) = ?

[Figure: a scrambled 8-puzzle state s and the GOAL configuration 1 2 3 / 8 _ 4 / 7 6 5.]

Misplaced tiles: h1(s) = 7
Manhattan distances: h2(s) = 2 + 3 + 3 + 2 + 4 + 2 + 0 + 2 = 18
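Both heuristics can be written generically. The goal layout is the one used on the slides; the scrambled state below is an assumed example (the slide's exact state did not survive transcription), so its h values differ from the figure's.

```python
def misplaced_tiles(state, goal):
    """h1: number of tiles (excluding the blank, encoded as 0) out of place."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal, width=3):
    """h2: sum over tiles of |row - goal_row| + |col - goal_col|."""
    goal_pos = {v: (i // width, i % width) for i, v in enumerate(goal)}
    total = 0
    for i, v in enumerate(state):
        if v != 0:
            r, c = i // width, i % width
            gr, gc = goal_pos[v]
            total += abs(r - gr) + abs(c - gc)
    return total

# Goal as on the slides: 1 2 3 / 8 _ 4 / 7 6 5 (0 = blank).
goal = (1, 2, 3, 8, 0, 4, 7, 6, 5)
# Assumed example state: 2 8 3 / 1 6 4 / 7 _ 5.
state = (2, 8, 3, 1, 6, 4, 7, 0, 5)
print(misplaced_tiles(state, goal), manhattan(state, goal))  # -> 4 5
```

Both are admissible: each misplaced tile needs at least one move, and each tile needs at least its Manhattan distance in moves.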

Comparing Heuristics

h1 = misplaced tiles; h2 = Manhattan distance

                        L = 4 steps    L = 8 steps    L = 12 steps
Iterative Deepening     112            6,300          3.6 x 10^6
A* with heuristic h1    13             39             227
A* with heuristic h2    12             25             73

This overestimates A*'s advantage because of the tendency of IDS to expand states repeatedly. The number of states expanded does not include the log() time access to the queue. Example from Russell & Norvig.

Comparing Heuristics

[Figure: the same 8-puzzle example state s and GOAL.] h1(s) = 7, h2(s) = 2 + 3 + 3 + 2 + 4 + 2 + 0 + 2 = 18. h2 is larger than h1 and, at the same time, A* seems to be more efficient with h2. Is there a connection between these two observations?

h2 dominates h1 if h2(s) >= h1(s) for all s. For any two admissible heuristics h1 and h2: if h2 dominates h1, then A* is more efficient (expands fewer states) with h2. Intuition: since h <= h*, a larger h is a better approximation of the true path cost.
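The dominance claim can be checked empirically on a small 8-puzzle instance. A sketch under assumptions: the start state is a classic textbook scramble, not necessarily the slide's, and expansion counts also depend on tie-breaking in the queue.

```python
import heapq

GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)          # goal layout used on the slides
POS = {v: (i // 3, i % 3) for i, v in enumerate(GOAL)}

def h1(s):  # misplaced tiles (blank = 0 excluded)
    return sum(1 for i, v in enumerate(s) if v != 0 and v != GOAL[i])

def h2(s):  # Manhattan distance
    return sum(abs(i // 3 - POS[v][0]) + abs(i % 3 - POS[v][1])
               for i, v in enumerate(s) if v != 0)

def neighbors(s):
    """States reachable by sliding one tile into the blank."""
    i = s.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= r + dr < 3 and 0 <= c + dc < 3:
            j = (r + dr) * 3 + (c + dc)
            t = list(s)
            t[i], t[j] = t[j], t[i]
            yield tuple(t)

def a_star(start, h):
    """A* with unit step costs; returns (solution cost, states expanded)."""
    g = {start: 0}
    pq = [(h(start), start)]
    expanded = 0
    while pq:
        f, s = heapq.heappop(pq)
        if f > g[s] + h(s):                  # stale entry
            continue
        if s == GOAL:
            return g[s], expanded
        expanded += 1
        for t in neighbors(s):
            if g[s] + 1 < g.get(t, 1 << 30):
                g[t] = g[s] + 1
                heapq.heappush(pq, (g[t] + h(t), t))
    return None

start = (2, 8, 3, 1, 6, 4, 7, 0, 5)          # classic 5-move scramble
cost1, n1 = a_star(start, h1)
cost2, n2 = a_star(start, h2)
print(cost1, n1, cost2, n2)   # same optimal cost; h2 expands no more states
```

On larger scrambles the gap between n1 and n2 grows quickly, in line with the table above.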

Limitations

Computation: in the worst case, we may have to explore all of the states, O(N). The good news: A* is optimally efficient; for a given h(), no other optimal algorithm will expand fewer nodes. The bad news: storage is also potentially exponential, O(N).

IDA* (Iterative Deepening A*)

Same idea as Iterative Deepening DFS, except we use f(s) to control the depth of search instead of the number of transitions. Example, assuming integer costs:

1. Run DFS, stopping at states s such that f(s) > 0. Stop if goal reached.
2. Run DFS, stopping at states s such that f(s) > 1. Stop if goal reached.
3. Run DFS, stopping at states s such that f(s) > 2. Stop if goal reached.
... keep going, increasing the limit on f by 1 every time.

Complete (assuming we use loop-avoiding DFS). Optimal. More expensive in computation cost than A*. Memory of order L, as in DFS.
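A runnable IDA* sketch on the earlier example graph (reconstructed costs, so an assumption). One deliberate deviation from the slide's integer-cost recipe: instead of raising the limit by 1 each round, this common variant jumps straight to the smallest f value that exceeded the previous bound, which finds the same solution in fewer iterations.

```python
def ida_star(start, goal, successors, h):
    """IDA*: depth-first searches bounded by f = g + h; after each failed
    pass, raise the bound to the smallest f that overflowed it."""
    def dfs(path, g, bound):
        s = path[-1]
        f = g + h(s)
        if f > bound:
            return f                      # report the overflow value
        if s == goal:
            return path                   # success: return the path itself
        minimum = float("inf")
        for s2, cost in successors(s):
            if s2 not in path:            # loop-avoiding DFS
                result = dfs(path + [s2], g + cost, bound)
                if isinstance(result, list):
                    return result
                minimum = min(minimum, result)
        return minimum

    bound = h(start)
    while True:
        result = dfs([start], 0, bound)
        if isinstance(result, list):
            return result
        if result == float("inf"):
            return None                   # no goal reachable
        bound = result                    # next f threshold

edges = {"START": [("A", 2)], "A": [("B", 1), ("C", 4)],
         "B": [("C", 1)], "C": [("GOAL", 2)], "GOAL": []}
hvals = {"START": 4, "A": 3, "B": 2, "C": 1, "GOAL": 0}
print(ida_star("START", "GOAL", lambda s: edges[s], lambda s: hvals[s]))
# -> ['START', 'A', 'B', 'C', 'GOAL']
```

Memory stays linear in the path length, as the slide notes, because only the current DFS path is stored.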

Summary

Informed search and heuristics. First attempt: greedy best-first search. The A* algorithm: optimality, conditions on heuristic functions, completeness, limitations, space complexity issues, extensions.

References:
Nils Nilsson. Problem-Solving Methods in Artificial Intelligence. McGraw-Hill (1971).
Judea Pearl. Heuristics: Intelligent Search Strategies for Computer Problem Solving (1984).
Chapters 3 & 4, Russell & Norvig.