
CISC681/481 AI Midterm Exam

You have from 12:30 to 1:45pm to complete the following five questions. Use the back of the page if you need more space. Good luck!

1. Definitions (3 points each)

Define the following terms or phrases in a few sentences.

(a) PEAS (define it and also explain what each letter stands for)
PEAS is a specification of the task environment for an agent designed to perform a task: P = performance measure, E = environment, A = actuators, and S = sensors.

(b) rational agent
A rational agent is one that, given a percept sequence, takes the action expected to maximize its performance measure, given its knowledge about the world.

(c) goal-based agent
A goal-based agent is one that acts to achieve some pre-defined goal. It is usually able to estimate how far away it is from that goal.

(d) logical entailment
α ⊨ β iff M(α) ⊆ M(β), i.e., α entails β iff every model of α is also a model of β.

(e) local maximum
A local maximum is a state n whose objective function value f(n) is greater than the objective function values of all its successor states.
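To make the definition of entailment in 1(d) concrete, here is a minimal sketch (not part of the original exam) that checks α ⊨ β by enumerating every truth assignment; the example sentences and symbol names are illustrative assumptions.

from itertools import product

# alpha entails beta iff every truth assignment that satisfies alpha
# also satisfies beta, i.e., M(alpha) is a subset of M(beta).
def entails(alpha, beta, symbols):
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if alpha(model) and not beta(model):
            return False   # a model of alpha that is not a model of beta
    return True

alpha = lambda m: m["P"] and m["Q"]      # P ∧ Q
beta  = lambda m: m["P"] or m["Q"]       # P ∨ Q
print(entails(alpha, beta, ["P", "Q"]))  # True:  P ∧ Q entails P ∨ Q
print(entails(beta, alpha, ["P", "Q"]))  # False: P ∨ Q does not entail P ∧ Q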

2. Short Answer (30 points)

Answer the following questions in a few sentences.

(a) (5 points) A traveling salesman needs to plan a route through 50 U.S. state capitals, starting and ending in Tallahassee, Florida. The salesman (a failed AI researcher) uses a search algorithm to find the shortest route. How much space will a breadth-first search require? How about a depth-first search? For extra credit, describe an admissible heuristic for this problem.

The solution is at depth 50 and the branching factor is 49, so breadth-first search requires O(49^50) space while depth-first search requires only O(49 · 50) space.

(b) (5 points) We have discussed the n-queens puzzle in many different contexts: as a search problem with an unrestricted state space, as a search problem with a restricted state space, as a constraint satisfaction problem, and as a local search problem. Which of these formulations do you think is best suited to solving the puzzle and why?

CSP and local search are both acceptable answers. The CSP formulation provides many good heuristics for solving the problem efficiently but is harder to implement. The local search formulation is very easy to implement and works well in practice. The search formulations are less suitable because they keep track of path information, which is not necessary for this problem.

(c) (5 points) I use a propositional logic inference rule called "affirming the consequent": given α ⇒ β and β, I conclude α. Is this rule sound? If so, explain why. If not, explain why not and give an example (in English) of sentences α ⇒ β and β that are true while α is false.

It is not sound. The truth of β and (α ⇒ β) does not allow any conclusion about the truth of α. As an example, if α = "Socrates is a man" and β = "Socrates is mortal", the mortality of Socrates does not mean Socrates is definitely a man.
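As a concrete companion to the local search formulation favored in 2(b), here is a minimal min-conflicts sketch for n-queens (not part of the original exam); the one-queen-per-column representation and the simple step limit are assumptions made for illustration.

import random

# Min-conflicts local search for n-queens: queens[c] is the row of the
# queen in column c, so column conflicts are impossible by construction.
def conflicts(queens, col, row):
    return sum(1 for c, r in enumerate(queens)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts(n, max_steps=10000):
    queens = [random.randrange(n) for _ in range(n)]
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(queens, c, queens[c]) > 0]
        if not conflicted:
            return queens                     # no attacking pairs: solved
        col = random.choice(conflicted)
        # move that queen to the row minimizing its conflicts
        queens[col] = min(range(n), key=lambda r: conflicts(queens, col, r))
    return None                               # may stall at a local minimum

print(min_conflicts(8))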

(d) (10 points) Translate the following paragraph describing the rules of succession of the British crown into a knowledge base of sentences in first-order logic that will allow inferences about who will inherit the crown: If the king or queen has male children, the crown will be inherited by the eldest male child. If there are no male children, the crown will be inherited by the eldest female child. If the king or queen has no children, the crown will pass to his or her closest living relation. Roman Catholics may not hold the crown. Use the following constants, predicates, and functions to define new predicates and write sentences in first-order logic. Aim to define an Heir(x, M) predicate that is true if and only if x is the heir to the crown.

symbol: interpretation
M: the current king or queen
Monarch(M): True
Female(x): True iff x is female (you can assume Female(x) ⇔ ¬Male(x))
ChildOf(x, M): True iff x is a child of M
ClosestRelation(x, M): True iff x is the closest relation to M that is not a child of M
Catholic(x): True iff x is or ever has been Roman Catholic
Age(x): function mapping x to the integer age of x

Here's one possible answer:

∀x EldestMaleChild(x, M) ⇔ Male(x) ∧ ChildOf(x, M) ∧ (∀y (Male(y) ∧ ChildOf(y, M) ∧ y ≠ x) ⇒ Age(x) > Age(y))

∀x EldestFemaleChild(x, M) ⇔ Female(x) ∧ ChildOf(x, M) ∧ (∀y (Female(y) ∧ ChildOf(y, M) ∧ y ≠ x) ⇒ Age(x) > Age(y))

∀x IsMaleHeir(x, M) ⇔ ChildOf(x, M) ∧ Male(x) ∧ EldestMaleChild(x, M)

∀x IsFemaleHeir(x, M) ⇔ ChildOf(x, M) ∧ Female(x) ∧ EldestFemaleChild(x, M)

∀x Heir(x, M) ⇔ ¬Catholic(x) ∧ (IsMaleHeir(x, M)
    ∨ (¬∃y IsMaleHeir(y, M) ∧ IsFemaleHeir(x, M))
    ∨ (¬∃y IsMaleHeir(y, M) ∧ ¬∃y IsFemaleHeir(y, M) ∧ ClosestRelation(x, M)))

(e) (5 points) Explain why a logical agent equipped with a knowledge base, a fixed set of actions, routines to add new sentences to the base, and inference algorithms that it uses to deduce things about the world is considered a model-based agent and not a learning agent. For extra credit, speculate on how a knowledge-based agent with a fixed set of actions and routines to add new sentences to the base, but no built-in inference routines, might learn how to perform inference.

It is a model-based agent because it only has fixed, pre-programmed ways to update its model. A learning agent would be able to learn new ways to update its model.
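To complement the first-order formulation in 2(d), the sketch below (not part of the original exam) encodes the same succession rules procedurally; the record layout for children and the closest_relation argument are assumptions made for illustration.

# Procedural reading of the succession rules in 2(d). Each child of the
# monarch is a dict with name, female (bool), age (int), catholic (bool);
# closest_relation is the nearest living relation who is not a child.
def heir(children, closest_relation):
    males = [c for c in children if not c["female"]]
    females = [c for c in children if c["female"]]
    if males:
        candidate = max(males, key=lambda c: c["age"])     # eldest male child
    elif females:
        candidate = max(females, key=lambda c: c["age"])   # eldest female child
    else:
        candidate = closest_relation                       # no children at all
    # mirroring Heir(x, M): a Roman Catholic candidate cannot hold the crown
    return None if candidate["catholic"] else candidate["name"]

children = [
    {"name": "Anne",  "female": True,  "age": 30, "catholic": False},
    {"name": "Brian", "female": False, "age": 25, "catholic": False},
]
relation = {"name": "Carol", "female": True, "age": 41, "catholic": False}
print(heir(children, relation))   # Brian: the eldest (only) male child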

3. Robot Navigation (20 points)

A robot needs to move from one room to another via a hallway, as shown in the map below. Solid lines indicate impassable walls. The dashed line shows the straight-line (Euclidean) distance from the robot to its goal, which is marked GOAL. Assume the robot has been programmed with a complete map of the environment, and nothing in the environment changes while the robot is deciding what to do or executing its actions.

(a) Formulate the problem as a search task. In particular, carefully describe the state space and the successor function.

(b) The Euclidean distance shown on the map is one possible heuristic function. Show that Euclidean distance is an admissible heuristic, i.e., it never overestimates the true distance the robot has to travel. Shade in the areas on the map the robot will consider when using this heuristic function to decide on a route to the goal.

(c) Break the robot's problem down into subproblems and formulate a heuristic for each subproblem. Show that the sum of those heuristics is still admissible and is a better heuristic than the Euclidean distance.

(a) A reasonable state space is obtained by drawing a grid on the map and defining a state to be the (x, y) location of the robot in the grid. The successor function gives the positions the robot can move to from its current position. The start state is the (x, y) location the robot currently occupies, and the goal test simply determines whether the current position equals the goal position. The path cost of each move can be defined as 1, since every move has equal cost.

(b) It is admissible because the number of moves required can never be less than the straight-line distance: in the best case the agent can take the straight-line path to the goal; in the worst case it will have to navigate around walls and therefore require more steps.

(c) We can break the problem down into four subproblems: get out of the first room, get to the hallway intersection, get into the second room, and get to the goal. Each of these can use a Euclidean distance heuristic. The sum is still admissible, as it cannot overestimate the total distance the robot will have to travel. It is better than the original heuristic because it dominates it: the new heuristic is always greater than or equal to the original one.
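The following sketch (not part of the original exam) makes the grid formulation in 3(a) and the Euclidean heuristic in 3(b) concrete with a small A* search; the particular grid size and wall layout are stand-ins, since the exam's map is not reproduced here.

import heapq, math

# States are (x, y) grid cells; the successor function yields the four
# adjacent non-wall cells, each move costing 1; the heuristic is the
# straight-line distance, which never exceeds the number of moves needed.
walls = {(2, y) for y in range(4)}    # a partial wall the robot must skirt

def successors(state):
    x, y = state
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 6 and 0 <= ny < 6 and (nx, ny) not in walls:
            yield (nx, ny)

def h(a, b):
    return math.dist(a, b)            # Euclidean distance (admissible)

def astar(start, goal):
    frontier = [(h(start, goal), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for s in successors(state):
            if g + 1 < best_g.get(s, float("inf")):
                best_g[s] = g + 1
                heapq.heappush(frontier, (g + 1 + h(s, goal), g + 1, s, path + [s]))
    return None

print(astar((0, 0), (5, 0)))          # route around the wall to the goal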

4. Constraint Satisfaction (20 points)

The objective of Sudoku is to place the numbers 1..n on an n×n grid such that each row, column, and √n×√n box contains each number only once. Below is a simple 4×4 puzzle (the grid, with given values 3, 3, 2, and 4, is not reproduced in this transcription).

(a) Formulate the grid above as a constraint satisfaction problem. You may use the AllDiff constraint.

(b) Suppose an agent implementing backtracking search began its search by testing a 2 in the lower left box, then a 1 in the box above it. Is the agent performing forward checking? Why or why not? Is it using heuristics such as MRV (minimum remaining values), degree (most constrained variable), or LCV (least constraining value)? Why or why not?

(c) For extra credit, apply forward checking and arc consistency to the given grid and show the remaining values in the domains of each variable.

(a) Define 16 variables X_{i,j}, one for each square in the grid. The domain of each variable is the set of numbers {1, 2, 3, 4}. There are four row constraints AllDiff(X_{r,1}, X_{r,2}, X_{r,3}, X_{r,4}), four column constraints AllDiff(X_{1,c}, X_{2,c}, X_{3,c}, X_{4,c}), and four 2×2 box constraints AllDiff(X_{i,j}, X_{i+1,j}, X_{i,j+1}, X_{i+1,j+1}) for (i, j) ∈ {(1, 1), (1, 3), (3, 1), (3, 3)}.

(b) It is not performing forward checking. If it were, it would have seen that putting a 2 in the lower left box cannot result in a consistent solution: it eliminates all ways to place a 2 in the lower right 2×2 box. It is not using MRV: the lower left box has three possible values, but there is another box with only one possible value. It may or may not be using degree; not enough information is provided to say. It may or may not be using LCV; not enough information is provided to say.
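A minimal sketch (not part of the original exam) of the CSP variables from 4(a) together with the forward checking discussed in 4(b); because the transcription does not preserve where the four given values sit in the grid, the clue positions below are assumptions made for illustration.

from itertools import product

# 4x4 Sudoku as a CSP: variables are cells (r, c), domains are {1,2,3,4},
# and AllDiff holds over every row, column, and 2x2 box.
cells = list(product(range(4), repeat=2))

def neighbors(r, c):
    """Cells that share a row, column, or 2x2 box with (r, c)."""
    return {(r2, c2) for r2, c2 in cells if (r2, c2) != (r, c) and
            (r2 == r or c2 == c or (r2 // 2 == r // 2 and c2 // 2 == c // 2))}

# Clue positions are assumed, not taken from the exam's (unreproduced) grid.
clues = {(0, 1): 3, (1, 3): 3, (2, 0): 2, (3, 2): 4}

# Forward checking: each assignment removes its value from every neighbor.
domains = {cell: {clues[cell]} if cell in clues else {1, 2, 3, 4} for cell in cells}
for cell, value in clues.items():
    for nb in neighbors(*cell):
        domains[nb].discard(value)

for r in range(4):
    print([sorted(domains[(r, c)]) for c in range(4)])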

5. Logical Inference (15 points)

The following propositions are given: Q, M1, M2, C1Q, C2Q, O12, O21, R1, R2. Additionally, the following propositional sentences are true:

E1 ⇔ M1 ∧ C1Q ∧ ¬(M2 ∧ C2Q ∧ O12)   (1)
E2 ⇔ M2 ∧ C2Q ∧ ¬(M1 ∧ C1Q ∧ O21)   (2)
H1 ⇔ C1Q ∧ M1 ∧ E1 ∧ ¬H2   (3)
H2 ⇔ C2Q ∧ M2 ∧ E2 ∧ ¬H1   (4)
I1 ⇔ H1 ∧ R1   (5)
I2 ⇔ H2 ∧ R2   (6)

Is I1 true? Is I2 true? Explain your argument clearly, but you do not have to show every step of the inference process, nor are you required to use a particular inference algorithm.

First, E1 is true because M1 and C1Q are given, and ¬(M2 ∧ C2Q ∧ O12) is true because M2 ∧ C2Q is false. E2 is not true because C2Q is false. From that we can conclude that H2 is not true, and then, since C1Q and M1 are given, E1 is true, and H2 is false, it follows that H1 is true. Finally, since R1 is given and H1 is true, I1 is true. I2 is false because H2 is false. QED.
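To back up the argument above, here is a small model-enumeration check (not part of the original exam). The constraint sentences are (1)-(6) as reconstructed; following the written solution, M1, C1Q, and R1 are taken as true and C2Q as false, while Q, M2, O12, O21, and R2 are left unconstrained. The script confirms that I1 holds and I2 fails in every model satisfying those constraints.

from itertools import product

atoms = ["Q", "M1", "M2", "C1Q", "C2Q", "O12", "O21", "R1", "R2",
         "E1", "E2", "H1", "H2", "I1", "I2"]

def satisfies(m):
    # Sentences (1)-(6) plus the facts used in the written solution.
    return (m["E1"] == (m["M1"] and m["C1Q"] and not (m["M2"] and m["C2Q"] and m["O12"]))
            and m["E2"] == (m["M2"] and m["C2Q"] and not (m["M1"] and m["C1Q"] and m["O21"]))
            and m["H1"] == (m["C1Q"] and m["M1"] and m["E1"] and not m["H2"])
            and m["H2"] == (m["C2Q"] and m["M2"] and m["E2"] and not m["H1"])
            and m["I1"] == (m["H1"] and m["R1"])
            and m["I2"] == (m["H2"] and m["R2"])
            and m["M1"] and m["C1Q"] and m["R1"] and not m["C2Q"])

models = [m for m in (dict(zip(atoms, v)) for v in product([False, True], repeat=len(atoms)))
          if satisfies(m)]
print(all(m["I1"] for m in models))      # True: I1 holds in every model
print(any(m["I2"] for m in models))      # False: I2 holds in no model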