Introduction to Intelligent Systems


Logical Agents

Objectives
- Inference and entailment
- Sound and complete inference algorithms
- Inference by model checking
- Inference by proof: resolution, forward and backward chaining
Reference: Russell & Norvig, Chapter 7
Y. Xiang, CIS 3700, Introduction to Intelligent Systems

Knowledge-Based Agents

When the environment is partially observable, how can an agent find out things that are important but unobservable? This is the role of knowledge.
How can knowledge about the task environment be maintained? A knowledge base (KB) contains a set of sentences formatted according to a representation language. Each sentence represents an assertion about the environment.
How does the agent access the KB?
1) Add new sentences by Tell.
2) Query what the agent needs to know by Ask.

Inference

A. Ask is not a simple database retrieval: the answer to a query may not be a sentence added earlier.
B. Answering a query often requires inference that derives new sentences from existing ones.
C. An inference algorithm is associated with the KB. The answer to a query may be added to the KB.
D. The fundamental requirement of inference is that an answer should follow from the sentences in the KB.

Ex: Wumpus World
1) A wumpus lives in the cave and eats anyone who enters its room.
2) Pits trap the explorer, but not the wumpus.
3) The agent has one arrow and can shoot the wumpus.
4) Rooms next to the wumpus are smelly.
5) Rooms next to a pit are breezy.
6) Walking into a wall produces a bump.
7) One room contains a heap of gold.
8) Gold glitters.
9) A dying wumpus screams.
10) Room (1,1) is safe.
[The cave is a 4x4 grid of rooms, (1,1) at the lower left through (4,4).]

Exploring Wumpus World

A. Sensors, actuators, and goal
1) Can perceive smell, breeze, glitter, bump, and scream, denoted by 5 variables (s,b,g,p,m).
2) Can turn, go forward, shoot, and grab.
3) Get the gold without being eaten or falling into a pit.
B. How will a knowledge-based agent explore WW?
a) What is the agent's knowledge state before entering WW?
b) What is the knowledge state at (1,1)?
c) What does the agent know after moving to (2,1)?
d) What does the agent know after moving to (1,2)?
e) What should the agent do next?

Reflection on Wumpus World

The agent draws conclusions about unobservable events from background knowledge and observations. In deterministic environments, the conclusions are guaranteed to be correct if the available information is correct. This is a fundamental property of logical reasoning. How can such logical agents be implemented?

Syntax and Semantics of Language

Components of a representation language:
Syntax specifies which sentences are well-formed.
Semantics determines the truth of each sentence in each model (possible world).
1. Ex A language for sets
2. Ex S1: X ∩ Y = Z. S2: X ∪ Y = Z. Model M1: X = {2,5,7}, Y = {3,7,8}, Z = {7}.
3. Ex S3: There is smell in (4,3). Model M2
4. Ex M3 is similar to M2, but the wumpus is in (4,2).

Entailment

Sentence α entails sentence β iff, in every model where α is true, β is true. This is denoted α ⊨ β.
Ex For sets X and Y, we have X = Y ⊨ X ⊆ Y.
Entailment captures the intuitive notion "follows from".
When α ⊨ β holds, does β ⊨ α hold?

KB and Entailment

A KB can be viewed as a statement that asserts each sentence in the KB. KB ⊨ α is well-formed, where α is a sentence.
Ex Agent Ag is in a WW with at most one pit.
1. It perceives nothing unusual at (1,1) and a breeze at (2,1).
2. What should Ag's KB contain at the moment?
3. For S1 = "no pit in (1,2)", does KB ⊨ S1 hold?
4. For S2 = "no pit in (2,2)", does KB ⊨ S2 hold?
5. For S3 = "pit in (2,2)", does KB ⊨ S3 hold?

Inference Algorithm

Inference derives a new sentence from those in the KB. It is performed by an algorithm. If inference algorithm inf can derive sentence α from KB, this is denoted KB ⊢_inf α.
How does the agent utilize inference?
[Diagram: sentences in the KB derive (⊢) and entail (⊨) new sentences; via semantics, the sentences correspond to observable events of the task environment, which the unobservable events of the environment follow.]

Inference by Model Checking

Aim: Decide whether KB ⊨ α for a sentence α.
Inference by model checking:
A. Enumerate all models. Check whether α is true in each model where KB is true.
B. Algorithm isentailed()
C. Example on WW
1. Is the algorithm sound?
2. Is the algorithm complete?
3. What is the time complexity?

Algorithm isentailed

isentailed(kb, α) {
  symbols = list of symbols in kb and α;
  for each model m (truth assignment over all symbols),
    compute the truth value of kb in m;
    if kb is true in m,
      get the truth value of α in m;
      if α is false in m, return false;
  return true;
}

[Truth table: all 64 truth assignments to P1,1, B1,2, B2,1, P1,3, P2,2, P3,1, with a column giving the truth value of kb in each model, where
kb = { ¬P1,1, ¬B1,2, B2,1, B1,2 ⇔ (P1,1 ∨ P2,2 ∨ P1,3), B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1) }]

Theorems on Entailment

1. Equivalence theorem For sentences α and β, α ⊨ β and β ⊨ α hold iff α ≡ β.
2. Deduction theorem For sentences α and β, α ⊨ β holds iff the sentence α ⇒ β is valid.
3. Contradiction theorem For sentences α and β, α ⊨ β holds iff the sentence α ∧ ¬β is unsatisfiable.
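The isentailed() pseudocode above can be sketched in Python. The encoding below, with kb and the query as Boolean functions over a model dictionary, is an illustrative assumption, not the lecture's notation; the example KB is the pit-detection KB used later in the slides.

```python
from itertools import product

def is_entailed(kb, alpha, symbols):
    """Return True iff kb |= alpha, by enumerating every model.

    kb and alpha are predicates over a model (a dict symbol -> bool);
    symbols lists every proposition symbol they mention."""
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False  # found a model of kb in which alpha is false
    return True

# KB = { ~P11, B11 <=> (P12 v P21), ~B11 }; query: "no pit in (1,2)"
symbols = ["P11", "P12", "P21", "B11"]
kb = lambda m: (not m["P11"]
                and m["B11"] == (m["P12"] or m["P21"])
                and not m["B11"])
alpha = lambda m: not m["P12"]
print(is_entailed(kb, alpha, symbols))  # True
```

The enumeration visits 2^n models for n symbols, which is exactly the exponential time complexity the slide asks about.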

Using Inference Rules

Inference by proof: to confirm KB ⊨ α, derive entailed sentences from KB until α is derived. Proof is an alternative to model checking.
Approach: equip algorithm inf with one or more inference rules.
1. How do we ensure that inf is sound?
2. How do we ensure that each rule is sound?
3. How do we obtain sound rules?

Inference by Proof

Ex From KB = {S1, S2, S3}, decide whether rooms (1,2) and (2,1) are pit-free.
S1: B1,1 ⇔ (P1,2 ∨ P2,1)
S2: B1,2 ⇔ (P1,1 ∨ P2,2 ∨ P1,3)
S3: ¬B1,1
1. What is α? Derive α by inference rules.
2. Proof may be more efficient than model checking. Why?
3. Will proof always be more efficient than model checking?

The Resolution Rule

1. Ex WW agent Ag perceives smell in (1,2).
2. What can Ag infer about the wumpus?
3. If Ag also knows that (1,1) is wumpus-free, what can it conclude?
Unit resolution:
  l1 ∨ … ∨ lk,  mi
  --------------------------------
  l1 ∨ … ∨ li-1 ∨ li+1 ∨ … ∨ lk
where l1, …, lk, mi are literals and mi = ¬li.
a) A clause is a disjunction of literals.
b) Two literals are complementary if one is the negation of the other.
c) The resulting clause is called the resolvent.

Full Resolution

  l1 ∨ … ∨ lk,  m1 ∨ … ∨ mn
  ----------------------------------------------------------------
  l1 ∨ … ∨ li-1 ∨ li+1 ∨ … ∨ lk ∨ m1 ∨ … ∨ mj-1 ∨ mj+1 ∨ … ∨ mn
where mj = ¬li. The resolvent contains one copy of each literal, with duplicates removed.
Theorem The full resolution inference rule is sound.
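Both resolution rules above can be captured in a few lines of Python. Representing a clause as a frozenset of literal strings, with '~' marking negation, is an assumption of this sketch; the frozenset automatically keeps one copy of each literal, as the full resolution rule requires.

```python
def negate(lit):
    # ~P -> P, and P -> ~P
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return all resolvents of clauses c1 and c2 (frozensets of literals)."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:  # complementary pair found
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

# Unit resolution: from (~B11 v P12 v P21) and ~P21, derive (~B11 v P12)
print(resolve(frozenset({"~B11", "P12", "P21"}), frozenset({"~P21"})))
```

When the second clause has a single literal, this is exactly unit resolution; with two multi-literal clauses it performs full resolution on each complementary pair.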

Conjunctive Normal Form

1. When sentences in the KB are not clauses, how can resolution be applied?
2. A sentence expressed as a conjunction of clauses is in conjunctive normal form (CNF).
3. A KB is a conjunction of its sentences.
4. If each sentence can be converted into CNF, the KB will be in CNF and resolution is applicable.

Converting Sentences to CNF

Algorithm getcnf(s) returns the CNF of a sentence s, where s is built from literals, ¬, ∧, ∨, ⇒, ⇔, and parentheses.
1. If s is a literal, return s.
2. a) If s = ¬(¬p), return getcnf(p).
   b) If s = ¬(p ∧ q), return getcnf(¬p ∨ ¬q).
   c) If s = ¬(p ∨ q), return getcnf(¬p ∧ ¬q).
3. If s = p ∧ q, return getcnf(p) ∧ getcnf(q).
4. If s = p ⇒ q, return getcnf(¬p ∨ q).
5. If s = p ⇔ q, return getcnf((¬p ∨ q) ∧ (p ∨ ¬q)).
6. If s = p ∨ q, call getcnf(p), which returns p1 ∧ … ∧ pm, and call getcnf(q), which returns q1 ∧ … ∧ qn. Return (p1 ∨ q1) ∧ … ∧ (p1 ∨ qn) ∧ … ∧ (pm ∨ q1) ∧ … ∧ (pm ∨ qn).
Ex Convert R ⇔ (P ∨ Q) to CNF.

Idea of Resolution Algorithm

1. Determine KB ⊨ α by showing that KB ∧ ¬α is unsatisfiable. Why is this correct?
2. How can resolution determine unsatisfiability?
3. Theorem A sentence is unsatisfiable iff both P and ¬P can be derived from it, for some symbol P.
4. Resolution algorithm
5. Ex Determine "no pit in (2,1)" from KB = {S1, S2}.
S1: B1,1 ⇔ (P1,2 ∨ P2,1)
S2: ¬B1,1
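The getcnf recursion can be transcribed almost directly into Python. The tuple encoding of sentences ('not', 'and', 'or', 'imp', 'iff') and the frozenset clause representation are illustrative assumptions; the biconditional is expanded via the equivalent (p ⇒ q) ∧ (q ⇒ p) form.

```python
def getcnf(s):
    """Return the CNF of s as a list of clauses, each clause a frozenset
    of literal strings ('~' marks negation).  Sentences are nested tuples:
    ('not', p), ('and', p, q), ('or', p, q), ('imp', p, q), ('iff', p, q);
    an atom is a plain string."""
    if isinstance(s, str):                       # case 1: positive literal
        return [frozenset({s})]
    op = s[0]
    if op == "not":
        p = s[1]
        if isinstance(p, str):                   # case 1: negative literal
            return [frozenset({"~" + p})]
        if p[0] == "not":                        # case 2a: ~~p
            return getcnf(p[1])
        if p[0] == "and":                        # case 2b: ~(p ^ q)
            return getcnf(("or", ("not", p[1]), ("not", p[2])))
        if p[0] == "or":                         # case 2c: ~(p v q)
            return getcnf(("and", ("not", p[1]), ("not", p[2])))
        if p[0] == "imp":                        # ~(p => q)  ==  p ^ ~q
            return getcnf(("and", p[1], ("not", p[2])))
        if p[0] == "iff":                        # ~(p <=> q)
            return getcnf(("not", ("and", ("imp", p[1], p[2]),
                                          ("imp", p[2], p[1]))))
    if op == "and":                              # case 3
        return getcnf(s[1]) + getcnf(s[2])
    if op == "imp":                              # case 4
        return getcnf(("or", ("not", s[1]), s[2]))
    if op == "iff":                              # case 5, via two implications
        return getcnf(("and", ("imp", s[1], s[2]), ("imp", s[2], s[1])))
    if op == "or":                               # case 6: distribute over ^
        return [ci | cj for ci in getcnf(s[1]) for cj in getcnf(s[2])]

# B11 <=> (P12 v P21) yields three clauses
for clause in getcnf(("iff", "B11", ("or", "P12", "P21"))):
    print(sorted(clause))
```

Note that case 6 is where CNF conversion can blow up: distributing ∨ over ∧ multiplies the clause counts of the two subformulas.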

Resolution Algorithm

Resolution(kb, α) {
  clau = clauses of kb ∧ ¬α in CNF;
  new = {};
  do {
    for each pair p, q in clau with complementary literals,
      resolvent = Resolve(p, q);
      if resolvent is the empty clause, return "kb ⊨ α";
      new = new ∪ {resolvent};
    if new ⊆ clau, return "kb ⊭ α";
    clau = clau ∪ new;
    new = {};
  }
}

Properties of Resolution Algorithm

A. Does Resolution(kb, α) always terminate?
1. The resolution closure is the set of all clauses derivable by repeated resolution.
2. Is the resolution closure finite?
B. Is Resolution(kb, α) complete?
C. Theorem If a set of clauses is unsatisfiable, its resolution closure contains the empty clause.
D. What is the time complexity of Resolution()?

Horn Clauses

1. Motivation: Trade expressiveness in knowledge representation for inference efficiency.
2. A Horn clause is a disjunction of literals of which at most one is positive.
3. Definite clause: one positive literal and at least one negative literal
   A. Implicative form
   B. Body and head
4. Fact: only a positive literal
5. Integrity constraint: no positive literal

Forward Chaining

Inference with Horn clauses can be performed by forward chaining and backward chaining.
1. Both are intuitive to humans.
2. Deciding entailment takes time linear in the size of the KB.
Forward chaining:
A. Derive sentences by applying Horn clauses along the direction from body to head.
B. Algorithm for forward chaining
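The Resolution(kb, α) loop can be sketched in Python, again with clauses as frozensets of '~'-prefixed literal strings (an assumption of this sketch). The caller supplies kb and ¬α already in clause form; the loop saturates them under resolution, looking for the empty clause.

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolution_entails(kb_clauses, neg_alpha_clauses):
    """Return True iff kb |= alpha, by refutation: kb ^ ~alpha is
    unsatisfiable iff its resolution closure contains the empty clause."""
    clauses = set(kb_clauses) | set(neg_alpha_clauses)
    while True:
        new = set()
        for p in clauses:
            for q in clauses:
                if p == q:
                    continue
                for lit in p:
                    if negate(lit) in q:
                        resolvent = frozenset((p - {lit}) | (q - {negate(lit)}))
                        if not resolvent:        # empty clause: contradiction
                            return True
                        new.add(resolvent)
        if new <= clauses:                       # closure reached, no empty clause
            return False
        clauses |= new

# KB = { B11 <=> (P12 v P21), ~B11 } in CNF; alpha = ~P21, so ~alpha = P21
kb = [frozenset({"~B11", "P12", "P21"}),
      frozenset({"B11", "~P12"}),
      frozenset({"B11", "~P21"}),
      frozenset({"~B11"})]
print(resolution_entails(kb, [frozenset({"P21"})]))  # True
```

Termination is guaranteed because only finitely many clauses can be built from a finite set of symbols, so the closure is finite, which answers question A above.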

FC(kb, q) {  // kb: Horn clauses; q: positive literal
  agenda = facts in kb, as a list of symbols;
  for each symbol s in kb, inferred[s] = false;
  for each clause c in kb, count[c] = number of literals in body(c);
  if q is in agenda, return true;
  while agenda is not empty, do
    p = pop(agenda);
    if inferred[p] == false, do
      inferred[p] = true;
      for each clause c with p in body(c), do
        count[c]--;
        if count[c] == 0, do
          if head(c) == q, return true;
          push(head(c), agenda);
  return false;
}

Example on Forward Chaining

Ex For the KB below and query Q, decide if KB ⊨ Q.
C1: E ∧ X ⇒ P
C2: E ∧ Y ⇒ Q
C3: D ∧ X ⇒ Y
C4: B ∧ D ⇒ X
C5: A ∧ B ⇒ E
C6: A
C7: B
C8: D

Properties of Forward Chaining

1. Is forward chaining sound?
2. Is forward chaining complete?
3. Modify FC(kb, q) to FC(kb) as follows.
   A. Delete argument q and all statements involving q.
   B. Change the last statement to "return".
4. FC(kb) derives every atomic sentence entailed.
5. Implication for FC(kb, q)
6. What is the time complexity of forward chaining?
7. What is the price paid for efficiency?

Backward Chaining

1. Find implications in the KB that conclude the query literal q, and try to derive their premises.
2. Ex Backward chaining
3. The cost is often much less than that of forward chaining.
4. Ex Forward chaining derived P before Q. Is P relevant to query Q?
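The FC(kb, q) pseudocode and the C1-C8 example can be sketched in Python; encoding each Horn clause as a (body, head) pair, with a fact as an empty body, is an assumption of this sketch.

```python
from collections import deque

def fc_entails(kb, q):
    """Forward chaining for a Horn KB, following the FC(kb, q) pseudocode.
    kb is a list of (body, head) pairs; a fact is ([], head)."""
    count = {i: len(body) for i, (body, head) in enumerate(kb)}
    inferred = {}
    agenda = deque(head for body, head in kb if not body)  # the facts
    while agenda:
        p = agenda.popleft()
        if p == q:
            return True
        if not inferred.get(p, False):
            inferred[p] = True
            for i, (body, head) in enumerate(kb):
                if p in body:
                    count[i] -= 1
                    if count[i] == 0:            # whole body derived
                        agenda.append(head)
    return False

# C1..C8 from the example above
kb = [(["E", "X"], "P"), (["E", "Y"], "Q"), (["D", "X"], "Y"),
      (["B", "D"], "X"), (["A", "B"], "E"),
      ([], "A"), ([], "B"), ([], "D")]
print(fc_entails(kb, "Q"))  # True
```

Each symbol is processed at most once and each clause body literal decremented at most once, which is the linear-time behavior the slide claims.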

Summary

1. In a partially observable environment, an agent can infer the unobservable from knowledge and percepts.
2. Knowledge representation (KR) and inference are fundamental issues of AI.
3. Focus: deterministic environments, propositional logic for KR, and inference to decide entailment
4. Inference: model checking, resolution, and chaining
   A. They are sound and complete.
   B. Trade-off between expressiveness and efficiency