CS360 Homework 12 Solution

Constraint Satisfaction

1) Consider the following constraint satisfaction problem with variables x, y, and z, each with domain {1, 2, 3}, and constraints C_1 and C_2, defined as follows:

C_1 is defined between x and y and allows the pairs (1, 1), (2, 2), (3, 1), (3, 2), (3, 3). (A pair (a, b) means that assigning x = a and y = b does not violate the constraint. Any assignment that does not appear in the list violates the constraint.)

C_2 is defined between y and z and allows the pairs (1, 1), (1, 2), (3, 1), (3, 2), (3, 3).

Which values does arc consistency rule out from the domain of each variable? Suppose that we started search after establishing arc consistency, and we assign x = 1. Which values does a) forward checking and b) maintaining arc consistency rule out from the domain of each variable?

Arc consistency first rules out y = 2 because there is no value of z that satisfies C_2 if y = 2. It then rules out x = 2 because, once y = 2 is ruled out, there is no value of y that satisfies C_1 if x = 2. Arc consistency does not rule out any other values at this point.

If we assign x = 1 during search, forward checking rules out only y = 3 (y = 2 is already ruled out by arc consistency). Maintaining arc consistency rules out y = 3 and z = 3.
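
As a concrete illustration, the following is a minimal arc-consistency (AC-3-style) sketch in Python for exactly this CSP. The data layout and helper names are our own, not part of the assignment; running it reproduces the pruning described above.

    domains = {'x': {1, 2, 3}, 'y': {1, 2, 3}, 'z': {1, 2, 3}}
    c1 = {(1, 1), (2, 2), (3, 1), (3, 2), (3, 3)}  # allowed (x, y) pairs
    c2 = {(1, 1), (1, 2), (3, 1), (3, 2), (3, 3)}  # allowed (y, z) pairs
    # Store each constraint once per direction so every arc is revised uniformly.
    arcs = {('x', 'y'): c1,
            ('y', 'x'): {(b, a) for (a, b) in c1},
            ('y', 'z'): c2,
            ('z', 'y'): {(b, a) for (a, b) in c2}}
    def revise(u, v):
        # Remove values of u that have no support in v; report whether u changed.
        allowed = arcs[(u, v)]
        pruned = {a for a in domains[u]
                  if not any((a, b) in allowed for b in domains[v])}
        domains[u] -= pruned
        return bool(pruned)
    queue = list(arcs)
    while queue:
        u, v = queue.pop(0)
        if revise(u, v):
            # u's domain shrank: recheck all other arcs pointing at u.
            queue.extend(arc for arc in arcs if arc[1] == u and arc[0] != v)
    print(domains)  # {'x': {1, 3}, 'y': {1, 3}, 'z': {1, 2, 3}}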

2) Show how a single ternary constraint such as A + B = C can be turned into three binary constraints by using an auxiliary variable. You may assume finite domains. (Hint: Consider a new variable that takes on values that are pairs of other values, and consider constraints such as "X is the first element of the pair Y.") Next, show how constraints with more than three variables can be treated similarly. Finally, show how unary constraints can be eliminated by altering the domains of variables. This completes the demonstration that any CSP can be transformed into a CSP with only binary constraints.

To turn a single constraint C between n variables x_1, ..., x_n into n binary constraints, we can add a new variable y whose domain is a subset of the Cartesian product of the n variables' domains. Namely, this subset contains exactly the tuples <a_1, ..., a_n> where, for all i in {1, ..., n}, a_i is in the domain of x_i and the assignment x_1 = a_1, ..., x_n = a_n is not ruled out by C. We then add the binary constraints B_1, ..., B_n where, for all i in {1, ..., n}, B_i is a constraint between x_i and y that rules out any assignment x_i = a_i, y = <b_1, ..., b_n> if and only if a_i ≠ b_i.

A unary constraint is a constraint over a single variable, ruling out some of the values from its domain. Instead of using a unary constraint to rule out values from the domain of a variable, we can simply remove those values from the variable's domain.
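
The construction is short enough to write out. Below is a sketch for the ternary constraint A + B = C over the domain {0, ..., 4}; the auxiliary variable u and the helper agrees() are hypothetical names of ours.

    dom = range(5)
    # Domain of the auxiliary variable u: exactly the triples (a, b, c)
    # that the ternary constraint A + B = C allows.
    u_domain = [(a, b, c) for a in dom for b in dom for c in dom if a + b == c]
    # One binary constraint per original variable: the i-th component of
    # u's value must agree with that variable's value.
    def agrees(i):
        return lambda value, triple: triple[i] == value
    B_A, B_B, B_C = agrees(0), agrees(1), agrees(2)
    # The assignment A = 1, B = 2, C = 3 is supported by u = (1, 2, 3):
    print(B_A(1, (1, 2, 3)), B_B(2, (1, 2, 3)), B_C(3, (1, 2, 3)))  # True True True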

Search

3) Solve Problem 1 from Homework 11 for A* with a) the (consistent) Manhattan distance heuristic and b) the straight-line distance heuristic. The Manhattan distance between two grid cells (x_1, y_1) and (x_2, y_2) is |x_1 - x_2| + |y_1 - y_2| (the length of a shortest path between the two cells, assuming that there are no obstacles on the grid). For instance, the Manhattan distance between A1 and E3 is |1 - 5| + |1 - 3| = 6. Remember to expand every state at most once.

Manhattan distance heuristic (blank cells are blocked; g marks the goal):

        A    B    C    D    E
   1    2    3    4    5    6
   2              3         5
   3    g    1    2         4
   4    1    2    3         5
   5    2    3    4    5    6

Expansion order (with f-values): E1 (6), D1 (6), C1 (6), B1 (6), A1 (6), C2 (6), C3 (6), B3 (6), A3 (6).

Straight-line distance heuristic (approximate values):

        A    B    C    D    E
   1    2    2.2  2.8  3.6  4.5
   2              2.2       4.1
   3    g    1    2         4
   4    1    1.4  2.2       4.1
   5    2    2.2  2.8  3.6  4.5

Expansion order (with f-values): E1 (4.5), D1 (4.6), C1 (4.8), E2 (5.1), B1 (5.2), C2 (5.2), A1 (6), C3 (6), B3 (6), A3 (6).
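
Both heuristics are one-liners in code. In this sketch, grid cells are written as (column, row) pairs, so A1 = (1, 1) and E3 = (5, 3); this encoding is our own choice, not part of the homework.

    import math
    def manhattan(c1, c2):
        # Length of a shortest 4-connected path, ignoring obstacles.
        (x1, y1), (x2, y2) = c1, c2
        return abs(x1 - x2) + abs(y1 - y2)
    def straight_line(c1, c2):
        # Euclidean distance; never larger than the Manhattan distance.
        (x1, y1), (x2, y2) = c1, c2
        return math.hypot(x1 - x2, y1 - y2)
    print(manhattan((1, 1), (5, 3)))                # 6, the A1-E3 example above
    print(round(straight_line((5, 1), (1, 3)), 1))  # 4.5, the E1 entry in the second table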

4) We are given a sequence of integers and want to sort them in ascending order. The only operation available to us is to reverse the order of the elements in some prefix of the sequence. For instance, by reversing the first three elements of (1 2 3 4), we get (3 2 1 4). This problem is also known as the pancake flipping problem. We model this problem as a search problem, where each state corresponds to a different ordering of the elements in the sequence. Given an initial sequence (2 4 1 3), in which order does A* expand the states, using the breakpoint heuristic described below? Assume that ties are broken toward states with larger g-values, and, if there are still ties, they are broken in lexicographic order. That is, (2 1 4 3) is preferred to (2 4 1 3).

Breakpoint heuristic: A breakpoint exists between two consecutive integers if their difference is more than one. Additionally, a breakpoint exists after the last integer in the sequence if it is not the largest integer in the sequence. For instance, in (2 1 4 3), there are two breakpoints: one between 1 and 4 (since their difference is more than one), the other after 3 (since it is at the end of the sequence and is not the largest integer in the sequence). The breakpoint heuristic is the number of breakpoints in a given sequence. (Bonus question: Is this heuristic a) admissible and b) consistent? Why?)

We start with the sequence (2.4.1.3.), which has 4 breakpoints (the dots represent the breakpoints). We have only three actions available to us: reverse the order of the first two elements, the first three elements, or all the elements (reversing the order of only the first element is pointless). The search tree, with f-values in parentheses and children listed under their parents, is given below:

(2.4.1.3.) (4)
├── (4.2 1.3.) (4)
├── (1.4.2 3.) (4)
│   ├── (4.1 2 3.) (4)
│   │   ├── (2 1.4 3.) (5)
│   │   └── (3 2 1.4) (4)
│   │       ├── (2 3.1.4) (6)
│   │       └── (1 2 3 4) (4)
│   └── (3 2.4.1.) (5)
└── (3.1.4.2.) (5)

The order of expansions (and their f-values) is as follows:

(a) (2.4.1.3.) 4
(b) (1.4.2 3.) 4
(c) (4.1 2 3.) 4
(d) (3 2 1.4) 4
(e) (1 2 3 4) 4

The breakpoint heuristic is consistent (and therefore admissible). The goal state (1 2 3 4) has no breakpoints, so h(goal) = 0. Each action can change the number of breakpoints by at most 1 (reversing the first i elements can only affect the breakpoint after the i-th element). Therefore, for any edge (s, s'), 0 ≤ h(s) ≤ 1 + h(s') holds.
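
For reference, here is a direct transcription of the breakpoint heuristic in Python, assuming (as above) that the goal is the ascending ordering of the sequence.

    def breakpoints(seq):
        # Breakpoints between consecutive elements whose difference exceeds one...
        count = sum(1 for a, b in zip(seq, seq[1:]) if abs(a - b) > 1)
        # ...plus one after the last element if it is not the maximum.
        if seq and seq[-1] != max(seq):
            count += 1
        return count
    print(breakpoints((2, 4, 1, 3)))  # 4, the root of the search tree above
    print(breakpoints((2, 1, 4, 3)))  # 2, the worked example above
    print(breakpoints((1, 2, 3, 4)))  # 0 at the goal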

5) Does A* always terminate if a finite-cost path exists? Why?

No. If a state has an infinite number of successors (infinite branching factor), A* cannot expand that state in a finite amount of time and, therefore, does not terminate. Even if we assume that each state has a finite number of successors (finite branching factor), A* may still not terminate, because it can get stuck exploring an infinite subspace of the state space. Consider the following state space with states {s_goal, s_0, s_1, ...} and edges {(s_0, s_goal), (s_0, s_1), (s_1, s_2), ...}, where c(s_0, s_goal) = 2 and, for all i, c(s_i, s_{i+1}) = (1/2)^i. An A* search from s_0 to s_goal that uses a zero heuristic (h(s) = 0 for all states s) would have to expand s_0, s_1, ... before expanding s_goal, since, for all i ≥ 1, f(s_i) = 1 + 1/2 + 1/4 + ... + (1/2)^(i-1) < 2 = f(s_goal). Since an infinite number of expansions are required before expanding s_goal, A* does not terminate.

6) Given two consistent heuristics h_1 and h_2, we compute a new heuristic h_3 by taking the maximum of h_1 and h_2. That is, h_3(s) = max(h_1(s), h_2(s)). Is h_3 consistent? Why?

Yes. First, for any goal state s, we show that h_3(s) = 0. Since h_1 and h_2 are consistent, h_1(s) = 0 and h_2(s) = 0. Therefore, h_3(s) = max(h_1(s), h_2(s)) = 0. Then, for any edge (s, s') in the graph, we show that 0 ≤ h_3(s) ≤ c(s, s') + h_3(s'). Without loss of generality, assume that h_1(s) ≥ h_2(s) (if h_1(s) < h_2(s), we can simply swap h_1 and h_2 and continue with the proof). Then, h_3(s) = max(h_1(s), h_2(s)) = h_1(s). Since h_1 is consistent, we have

0 ≤ h_1(s) ≤ c(s, s') + h_1(s').

We get:

0 ≤ h_3(s) = max(h_1(s), h_2(s)) = h_1(s) ≤ c(s, s') + h_1(s') ≤ c(s, s') + max(h_1(s'), h_2(s')) = c(s, s') + h_3(s').

7) In the arrow puzzle, we have a series of arrows pointing up or down, and we are trying to make all the arrows point up with a minimum number of action executions. The only action available to us is to choose a pair of adjacent arrows and flip both of their directions. Using problem relaxation, come up with a good heuristic for this problem.

We can relax the problem by allowing our action to flip the directions of any two arrows, instead of two adjacent arrows. In this case, we need at least ⌈(number of arrows pointing down) / 2⌉ actions, which can be used as a consistent heuristic for the original problem.
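
A sketch of this heuristic in Python, encoding the arrows as a string of 'U' and 'D' characters (an encoding chosen here purely for illustration):

    import math
    def arrow_heuristic(arrows):
        # Each relaxed action can make at most two down-arrows point up.
        down = arrows.count('D')
        return math.ceil(down / 2)
    print(arrow_heuristic('DUDD'))  # 2: at least two flips are needed
    print(arrow_heuristic('UUUU'))  # 0 at the goal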

8) Explain why heuristics obtained via problem relaxation are not only admissible but also consistent.

Relaxing a problem means creating a supergraph of the state space by adding extra edges to the original graph of the state space, and using goal distances in this supergraph as heuristics for an A* search in the original graph. We start by showing that the goal distances in the supergraph form a consistent heuristic for the supergraph. The goal distance of the goal is 0, so h(goal) = 0. For any edge (s, s') of the supergraph, 0 ≤ h(s) ≤ c(s, s') + h(s') because, otherwise, h(s) would not be the goal distance of s: there would be a shorter path to the goal via s'. Since the edges of the original graph are a subset of the edges of the supergraph, 0 ≤ h(s) ≤ c(s, s') + h(s') also holds for all edges (s, s') of the original graph.

9) What are the advantages and disadvantages of a) uniform-cost search and b) greedy best-first search over A* search?

The only advantage of uniform-cost search over A* is that it does not have to compute a heuristic (which can be very expensive in some domains); both methods guarantee optimality (A* assuming an admissible heuristic). Greedy best-first search offers no optimality guarantees, but it typically finds a solution with many fewer node expansions, and thus much faster, than A*.

10) Give a simple example that shows that A* with inconsistent but admissible heuristic values is not guaranteed to find a cost-minimal path if it expands every state at most once.

Consider the following graph, searching from S to G:

Edges (with costs): S-A (1), S-B (2), A-C (1), B-C (1), C-G (2).
Heuristic values: h(S) = 4, h(A) = 3, h(B) = 1, h(C) = 0, h(G) = 0.

In this example (where the heuristic is admissible but not consistent), the states are expanded in the following order (parentheses show the f-values):

S (0 + 4 = 4)
B (2 + 1 = 3)
C (3 + 0 = 3)
A (1 + 3 = 4)
G (5 + 0 = 5)

Note that expanding A finds a shorter path from S to C, but this is not reflected in the solution, since C was expanded before A and we are not allowed to expand a state more than once. A* therefore returns the path S-B-C-G with cost 5, even though the cost-minimal path S-A-C-G has cost 4.
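
This failure is easy to reproduce. Below is a sketch of A* that never re-expands a closed state, run on the graph above; the graph encoding and function name are ours.

    import heapq
    graph = {'S': [('A', 1), ('B', 2)], 'A': [('C', 1)],
             'B': [('C', 1)], 'C': [('G', 2)], 'G': []}
    h = {'S': 4, 'A': 3, 'B': 1, 'C': 0, 'G': 0}
    def astar_once(start, goal):
        open_list = [(h[start], 0, start)]  # entries are (f, g, state)
        closed = set()
        while open_list:
            f, g, s = heapq.heappop(open_list)
            if s in closed:
                continue  # every state is expanded at most once
            closed.add(s)
            print(f'expand {s} (f = {f})')
            if s == goal:
                return g
            for t, cost in graph[s]:
                if t not in closed:
                    heapq.heappush(open_list, (g + cost + h[t], g + cost, t))
    print(astar_once('S', 'G'))  # expands S, B, C, A, G and returns 5, not the optimal 4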

11) Solve Problem 1 from Homework 11 for Iterative Deepening search.

In the solution below, we use * to mark nodes that are at the depth limit of the current iteration of Iterative Deepening search.

Expansion order in the 1st iteration (depth 0): E1*.
Expansion order in the 2nd iteration (depth 1): E1, D1*, E2*.
Expansion order in the 3rd iteration (depth 2): E1, D1, C1*, E2, E3*.
Expansion order in the 4th iteration (depth 3): E1, D1, C1, B1*, C2*, E2, E3, E4*.
Expansion order in the 5th iteration (depth 4): E1, D1, C1, B1, A1*, C2, C3*, E2, E3, E4, E5*.
Expansion order in the 6th iteration (depth 5): E1, D1, C1, B1, A1, C2, C3, B3*, C4*, E2, E3, E4, E5, D5*.
Expansion order in the 7th iteration (depth 6): E1, D1, C1, B1, A1, C2, C3, B3, A3*.

Note that the nodes marked with * are expanded for the first time at that point. If we list the nodes in the order that they are first expanded, we get: E1, D1, E2, C1, E3, B1, C2, E4, A1, C3, E5, B3, C4, D5, A3. This is equivalent to the BFS expansion order.
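
Finally, a compact sketch of Iterative Deepening search over an abstract successor function. The names successors and is_goal are placeholders rather than the Homework 11 grid, and this version does no duplicate checking along the current path, so on a graph it may repeat states that the trace above avoids.

    def depth_limited(state, limit, is_goal, successors, order):
        order.append(state)  # record every expansion
        if is_goal(state):
            return True
        if limit == 0:
            return False     # this state sits at the depth limit
        return any(depth_limited(t, limit - 1, is_goal, successors, order)
                   for t in successors(state))
    def iterative_deepening(start, is_goal, successors, max_depth=50):
        for limit in range(max_depth + 1):
            order = []
            if depth_limited(start, limit, is_goal, successors, order):
                return order  # expansion order of the successful iteration
        return None  # no goal found within max_depth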