Logical Agents (I) Instructor: Tsung-Che Chiang


Logical Agents (I)
Instructor: Tsung-Che Chiang (tcchiang@ieee.org)
Department of Computer Science and Information Engineering, National Taiwan Normal University
Artificial Intelligence, Spring 2010

Outline
Knowledge-based Agents
The Wumpus World
Logic
Propositional Logic
Reasoning Patterns in Propositional Logic
Effective Propositional Inference
Agents based on Propositional Logic
Summary

Knowledge-based Agents
Humans know things and do reasoning. Knowledge and reasoning play a crucial role in dealing with partially observable environments, e.g. diagnosing a patient. Understanding natural language also requires reasoning.

Knowledge-based Agents
Knowledge-based agents can benefit from knowledge expressed in very general forms to suit many purposes. They are able to accept new tasks explicitly described by goals, achieve competence by being told new knowledge about the environment, and adapt to changes by updating the relevant knowledge.

Knowledge-based Agents
The central component of a knowledge-based agent is its knowledge base (KB). A KB is a set of sentences. Each sentence is expressed in a knowledge representation language and represents some assertion about the world.

Knowledge-based Agents
TELL: add new sentences to the KB. ASK: query what is known. Both tasks may involve inference: deriving new sentences from old ones. When one ASKs a question of the KB, the answer should follow from what has been TELLed to the KB.

Knowledge-based Agents
The KB may initially contain some background knowledge. The details of the representation language are hidden inside three sentence-construction functions. The details of the inference mechanism are hidden inside TELL and ASK. (Artificial Intelligence: A Modern Approach, 2nd ed., Figure 7.1)
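The generic agent program just described can be summarized in code. The following is a minimal sketch, not AIMA's implementation: the KnowledgeBase class and the three make_* helpers are hypothetical stand-ins for whatever representation language the designer chooses.

# A minimal sketch of the generic knowledge-based agent (cf. AIMA Figure 7.1).
# KnowledgeBase and the make_* helpers are hypothetical stand-ins; a concrete
# agent would supply a real representation language and inference procedure.

class KnowledgeBase:
    def __init__(self, background=()):
        self.sentences = list(background)   # background knowledge, if any

    def tell(self, sentence):
        self.sentences.append(sentence)     # TELL: add a sentence to the KB

    def ask(self, query):
        raise NotImplementedError           # ASK: answer by inference (later slides)

def make_percept_sentence(percept, t):
    return ('percept', percept, t)          # assert what was perceived at time t

def make_action_query(t):
    return ('action?', t)                   # ask which action to take at time t

def make_action_sentence(action, t):
    return ('did', action, t)               # record the action actually taken

def kb_agent_step(kb, percept, t):
    """One cycle: TELL the percept, ASK for an action, TELL the chosen action."""
    kb.tell(make_percept_sentence(percept, t))
    action = kb.ask(make_action_query(t))
    kb.tell(make_action_sentence(action, t))
    return action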

Knowledge-based Agents
One can build a knowledge-based agent by TELLing it what it needs to know. But how?
Declarative approach: add, one by one, the sentences that represent the designer's knowledge. The design of the representation language is important.
Procedural approach: encode desired behaviors directly as program code. Minimizing the role of explicit representation and reasoning can result in a much more efficient system.

Knowledge-based Agents
We will see both declarative and procedural approaches later. A successful agent must combine both elements in its design.

The Wumpus World: PEAS description
Environment: a 4x4 grid of rooms. The agent always starts in [1,1], facing right. The locations of the gold and the wumpus are chosen randomly. Each square other than the start can be a pit with probability 0.2. The agent has only one arrow. (Artificial Intelligence: A Modern Approach, 2nd ed., Figure 7.2)

The Wumpus World: PEAS description
Performance measure: +1000 for picking up the gold; -1000 for falling into a pit or being eaten by the wumpus; -1 for each action; -10 for shooting the (only) arrow.

The Wumpus World: PEAS description
Actuators: move forward, turn left/right by 90 degrees, Grab, Shoot.
Sensors: Stench, Breeze, Glitter, Bump, Scream. (Artificial Intelligence: A Modern Approach, 2nd ed., Figure 7.2)

The Wumpus World
(Example exploration of the grid, showing Breeze, Stench, and Glitter percepts.)

The Wumpus World
Fundamental property of reasoning: in each case where the agent draws a conclusion from the available information, that conclusion is guaranteed to be correct if the available information is correct.

Logic
Sentences in the KB are expressed according to the syntax of the representation language, e.g. x+y=4 is a well-formed sentence, whereas x4y+= is not. A logic must also define the semantics of the language: it defines the truth of each sentence w.r.t. each possible world.

Logic
Model: we will use the term "model" in place of "possible world". We say m is a model of α to mean that sentence α is true in model m.
Entailment: we write α ⊨ β to mean that the sentence α entails the sentence β. α ⊨ β if and only if, in every model in which α is true, β is also true. The truth of β is contained in the truth of α.

Logic
Example: the agent has detected nothing in [1,1] and a breeze in [2,1]. It is interested in whether the adjacent squares [1,2], [2,2], and [3,1] contain pits. There are 2³ = 8 possible models. The percepts and the rules of the wumpus world constitute the KB. The KB is true in the models that agree with what the agent knows.

Logic
The 8 possible models, given nothing in [1,1] and a breeze in [2,1].

Logic
α₁ = "There is no pit in [1,2]." KB ⊨ α₁. (Artificial Intelligence: A Modern Approach, 2nd ed., Figure 7.5)

Logic
α₂ = "There is no pit in [2,2]." KB ⊭ α₂. (Artificial Intelligence: A Modern Approach, 2nd ed., Figure 7.5)

Logic
The previous example shows how an inference algorithm called model checking works: it enumerates all possible models to check that α is true in all models in which KB is true.

Logic
If an inference algorithm i can derive α from KB, we write KB ⊢ᵢ α. An inference algorithm is called sound (or truth-preserving) if it derives only entailed sentences. Model checking is a sound algorithm (when it is applicable). An inference algorithm is complete if it can derive any sentence that is entailed.

Logic
If KB is true in the real world, then any sentence derived from KB by a sound inference procedure is also true in the real world. (Artificial Intelligence: A Modern Approach, 2nd ed., Figure 7.6)

Logic
Grounding: how do we know that KB is true in the real world? The simple answer is that the agent's sensors create the connection. What about the rest of the agent's knowledge? Knowledge that is not a direct representation of a single percept could be produced by a sentence-construction procedure called learning. The KB may not be true in the real world, but with good learning procedures there is reason for optimism.

Tea Time
Wumpus World: http://www.youtube.com/watch?v=tgrxla1ey4a
Wumpus World Game: http://www.inthe70s.com/games/wumpus/index.shtml# and http://www.funzac.com/play/wumpus%20world.html

Propositional Logic
Syntax: a proposition symbol stands for a proposition that can be true or false. There are two special symbols: True and False. The atomic sentences are indivisible syntactic elements; they consist of a single proposition symbol. Complex sentences are constructed from simpler sentences using logical connectives. A literal is either an atomic sentence or a negated atomic sentence.

Propositional Logic
A BNF grammar:
Sentence → AtomicSentence | ComplexSentence
AtomicSentence → True | False | Symbol
Symbol → P | Q | R | ...
ComplexSentence → ¬Sentence (negation)
  | (Sentence ∧ Sentence) (conjunction)
  | (Sentence ∨ Sentence) (disjunction)
  | (Sentence ⇒ Sentence) (implication)
  | (Sentence ⇔ Sentence) (biconditional)

Propositional Logic
Semantics: in propositional logic, a model simply fixes the truth value for every proposition symbol. The semantics must specify how to compute the truth value of any sentence, given a model; this is done with a truth table. (Artificial Intelligence: A Modern Approach, 2nd ed., Figure 7.8)

Propositional Logic
P ⇒ Q says: "If P is true, then I am claiming that Q is true; otherwise, I am making no claim."
P ⇔ Q is true whenever both P ⇒ Q and Q ⇒ P are true.
e.g. B₁,₁ ⇔ (P₁,₂ ∨ P₂,₁); by contrast, B₁,₁ ⇒ (P₁,₂ ∨ P₂,₁) is true but incomplete.
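The truth-table semantics is straightforward to render in code. Below is a small illustrative sketch (everything in it, including the connective tags '~', '&', '|', '=>', '<=>', is an encoding chosen here, not part of the slides): sentences are nested tuples, and a model is a dict fixing every symbol's truth value.

# A sketch of truth-table semantics. Sentences are nested tuples such as
# ('<=>', 'B11', ('|', 'P12', 'P21')); a model maps each symbol to True/False.

def pl_true(sentence, model):
    """Truth value of a sentence in a model, by the truth-table rules."""
    if isinstance(sentence, str):                # atomic sentence: a symbol
        return model[sentence]
    op, *args = sentence
    if op == '~':
        return not pl_true(args[0], model)
    a, b = (pl_true(s, model) for s in args)
    if op == '&':   return a and b
    if op == '|':   return a or b
    if op == '=>':  return (not a) or b          # false only when a true and b false
    if op == '<=>': return a == b
    raise ValueError('unknown connective: ' + op)

# B11 <=> (P12 | P21) holds in the model with no breeze and no pits:
model = {'B11': False, 'P12': False, 'P21': False}
print(pl_true(('<=>', 'B11', ('|', 'P12', 'P21')), model))   # True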

Propositional Logic
We often omit parentheses by obeying the order of precedence (from highest to lowest): ¬, ∧, ∨, ⇒, ⇔.
¬P ∧ Q ∨ R ⇒ S is equivalent to ((¬P) ∧ (Q ∨ R)) ⇒ S.
We allow A ∧ B ∧ C, A ∨ B ∨ C, and A ⇔ B ⇔ C. However, we do not allow A ⇒ B ⇒ C, since it is ambiguous: A ⇒ (B ⇒ C) and (A ⇒ B) ⇒ C have different meanings.

Propositional Logic
A logical knowledge base is a conjunction of sentences. If we start with an empty KB and do TELL(KB, S₁), ..., TELL(KB, Sₙ), then we have KB = S₁ ∧ ... ∧ Sₙ.

Propositional Logic
A simple knowledge base for the wumpus world (considering only the pits):
R₁: ¬P₁,₁ (there is no pit in [1,1])
R₂: B₁,₁ ⇔ (P₁,₂ ∨ P₂,₁)
R₃: B₂,₁ ⇔ (P₁,₁ ∨ P₂,₂ ∨ P₃,₁)
(a square is breezy if and only if there is a pit in a neighboring square; true in all wumpus worlds)
Agent percepts: R₄: ¬B₁,₁, R₅: B₂,₁

Inference
The aim of logical inference is to decide whether KB ⊨ α for some sentence α. Our first algorithm will enumerate the models and check that α is true in every model in which KB is true. e.g. the KB above has seven relevant proposition symbols, so there are 2⁷ = 128 possible models, and in three of them KB is true.
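This enumeration is the TT-ENTAILS algorithm of the coming slides. A sketch, reusing pl_true and the tuple encoding from the earlier sketch (symbols_in and tt_entails are names chosen here):

from itertools import product

# A sketch of model checking: enumerate all 2^n truth assignments and verify
# that alpha holds in every model where the KB holds (reuses pl_true above).

def symbols_in(sentence, acc=None):
    """Collect the proposition symbols occurring in a tuple-encoded sentence."""
    acc = set() if acc is None else acc
    if isinstance(sentence, str):
        acc.add(sentence)
    else:
        for part in sentence[1:]:
            symbols_in(part, acc)
    return acc

def tt_entails(kb, alpha):
    syms = sorted(symbols_in(('&', kb, alpha)))
    for values in product([True, False], repeat=len(syms)):
        model = dict(zip(syms, values))
        if pl_true(kb, model) and not pl_true(alpha, model):
            return False              # a model of KB in which alpha fails
    return True                       # alpha is true in every model of KB

# The wumpus KB R1..R5 (conjunction built from nested binary '&'):
R1 = ('~', 'P11')
R2 = ('<=>', 'B11', ('|', 'P12', 'P21'))
R3 = ('<=>', 'B21', ('|', 'P11', ('|', 'P22', 'P31')))
R4 = ('~', 'B11')
R5 = 'B21'
KB = ('&', R1, ('&', R2, ('&', R3, ('&', R4, R5))))
print(tt_entails(KB, ('~', 'P12')))   # True: checks all 2^7 = 128 models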

Inference
α₁ = ¬P₁,₂. With R₁: ¬P₁,₁, R₂: B₁,₁ ⇔ (P₁,₂ ∨ P₂,₁), R₃: B₂,₁ ⇔ (P₁,₁ ∨ P₂,₂ ∨ P₃,₁), R₄: ¬B₁,₁, R₅: B₂,₁, there are 128 models to check. (Artificial Intelligence: A Modern Approach, 2nd ed., Figure 7.9)

Inference
Exercise: write down the related rules and apply the truth table to do model checking.
Wumpus world: is there a breeze in [2,2]?
Minesweeper: where is the mine? (grid figure with cells A, B, C, D and "1" clues)

Inference
The TT-ENTAILS algorithm. (Artificial Intelligence: A Modern Approach, 2nd ed., Figure 7.10)

Inference
Analysis: TT-ENTAILS is sound and complete. However, if KB and α contain n symbols, then there are 2ⁿ possible models; the time complexity is O(2ⁿ) and the space complexity is O(n). We will see more efficient algorithms later, but every known inference algorithm for propositional logic has a worst-case complexity that is exponential in the size of the input. (Propositional entailment is co-NP-complete.)

Inference
Before we plunge into the details of logical inference algorithms, we need some additional concepts related to entailment: equivalence, validity, and satisfiability.

Inference
Logical equivalence: two sentences α and β are logically equivalent if they are true in the same set of models. We write this as α ≡ β. An alternative definition: α ≡ β if and only if α ⊨ β and β ⊨ α.

Inference
Standard logical equivalences. (Artificial Intelligence: A Modern Approach, 2nd ed., Figure 7.11)

Inference
Validity (tautology): a sentence is valid if it is true in all models. The deduction theorem: for any sentences α and β, α ⊨ β if and only if the sentence (α ⇒ β) is valid. We can think of the TT-ENTAILS algorithm as checking the validity of (KB ⇒ α). Conversely, every valid implication sentence describes a legitimate inference.

Inference
Satisfiability: a sentence is satisfiable if it is true in some model. If a sentence α is true in a model m, we say that m satisfies α, or that m is a model of α. Determining the satisfiability of sentences in propositional logic was the first problem proved to be NP-complete.

Inference
Validity and satisfiability are connected: α ⊨ β if and only if the sentence (α ∧ ¬β) is unsatisfiable. Proving β from α by checking the unsatisfiability of (α ∧ ¬β) is called proof by contradiction: one assumes the sentence β to be false and shows that this leads to a contradiction with known axioms.
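On top of the enumeration sketch above, these connections become one-liners (satisfiable, valid, and entails are names chosen here; pl_true and symbols_in are reused):

from itertools import product

# Validity and satisfiability via enumeration (reuses pl_true, symbols_in).

def satisfiable(sentence):
    """True iff the sentence holds in at least one model."""
    syms = sorted(symbols_in(sentence))
    return any(pl_true(sentence, dict(zip(syms, v)))
               for v in product([True, False], repeat=len(syms)))

def valid(sentence):
    return not satisfiable(('~', sentence))     # true in all models

def entails(alpha, beta):
    """Proof by contradiction: alpha |= beta iff alpha & ~beta is unsatisfiable."""
    return not satisfiable(('&', alpha, ('~', beta)))

print(valid(('|', 'P', ('~', 'P'))))            # True: P | ~P is a tautology
print(entails('P', ('|', 'P', 'Q')))            # True: P |= P | Q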

Reasoning Patterns in PL
Inference rules:
Modus Ponens: from α ⇒ β and α, infer β.
And-Elimination: from α ∧ β, infer α.
And-Introduction: from α and β, infer α ∧ β.
Or-Introduction: from α, infer α ∨ β.
All of the logical equivalences (Figure 7.11) can also be used as inference rules.

Reasoning Patterns in PL
Example:
R₁: ¬P₁,₁, R₂: B₁,₁ ⇔ (P₁,₂ ∨ P₂,₁), R₃: B₂,₁ ⇔ (P₁,₁ ∨ P₂,₂ ∨ P₃,₁), R₄: ¬B₁,₁, R₅: B₂,₁
R₆: (B₁,₁ ⇒ (P₁,₂ ∨ P₂,₁)) ∧ ((P₁,₂ ∨ P₂,₁) ⇒ B₁,₁) (biconditional elimination of R₂)
R₇: (P₁,₂ ∨ P₂,₁) ⇒ B₁,₁ (and-elimination of R₆)
R₈: ¬B₁,₁ ⇒ ¬(P₁,₂ ∨ P₂,₁) (contraposition of R₇)
R₉: ¬(P₁,₂ ∨ P₂,₁) (Modus Ponens with R₄ and R₈)
R₁₀: ¬P₁,₂ ∧ ¬P₂,₁ (De Morgan's rule on R₉)

Reasoning Patterns in PL
The sequence of applications of inference rules is called a proof. Finding proofs is exactly like finding solutions to search problems: the successor function can be defined to generate all possible applications of inference rules.

Reasoning Patterns in PL
Searching for proofs is an alternative to enumerating models. Although inference in propositional logic is NP-complete, finding a proof can be highly efficient; e.g. the previous proof ignores B₂,₁, P₁,₁, P₂,₂, etc. The simple truth-table algorithm, on the other hand, would be overwhelmed by the exponential explosion of models.

Reasoning Patterns in PL
Monotonicity: the set of entailed sentences can only increase as information is added to the KB. For any sentences α and β, if KB ⊨ α, then (KB ∧ β) ⊨ α. It means that inference rules can be applied whenever suitable premises are found in the KB: the conclusion of the rule must follow regardless of what else is in the KB.

Conjunctive Normal Form
A sentence expressed as a conjunction of disjunctions of literals is said to be in conjunctive normal form (CNF). A sentence in k-CNF has exactly k literals per clause. Every sentence can be transformed into a CNF sentence. Exercise: convert B₁,₁ ⇔ (P₁,₂ ∨ P₂,₁). (A code sketch follows; the worked steps are on the next slide.)
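The standard conversion has three steps: eliminate ⇔ and ⇒, push ¬ inward with double-negation elimination and De Morgan's rules, then distribute ∨ over ∧. A sketch over the tuple encoding used earlier (function names chosen here for illustration):

# A sketch of CNF conversion on tuple-encoded sentences.

def eliminate_implications(s):
    if isinstance(s, str):
        return s
    op, *args = s
    args = [eliminate_implications(a) for a in args]
    if op == '=>':
        return ('|', ('~', args[0]), args[1])
    if op == '<=>':
        return ('&', ('|', ('~', args[0]), args[1]),
                     ('|', ('~', args[1]), args[0]))
    return (op, *args)

def move_not_inwards(s):
    if isinstance(s, str):
        return s
    op, *args = s
    if op == '~':
        a = args[0]
        if isinstance(a, str):
            return s                          # negated atom: already a literal
        if a[0] == '~':
            return move_not_inwards(a[1])     # double negation
        if a[0] == '&':                       # De Morgan over a conjunction
            return ('|', move_not_inwards(('~', a[1])),
                         move_not_inwards(('~', a[2])))
        if a[0] == '|':                       # De Morgan over a disjunction
            return ('&', move_not_inwards(('~', a[1])),
                         move_not_inwards(('~', a[2])))
    return (op, *[move_not_inwards(a) for a in args])

def distribute_or(s):
    if isinstance(s, str) or s[0] == '~':
        return s
    op, a, b = s
    a, b = distribute_or(a), distribute_or(b)
    if op == '|':
        if not isinstance(a, str) and a[0] == '&':   # (x & y) | b
            return ('&', distribute_or(('|', a[1], b)),
                         distribute_or(('|', a[2], b)))
        if not isinstance(b, str) and b[0] == '&':   # a | (x & y)
            return ('&', distribute_or(('|', a, b[1])),
                         distribute_or(('|', a, b[2])))
    return (op, a, b)

def to_cnf(s):
    return distribute_or(move_not_inwards(eliminate_implications(s)))

# The slide's exercise, with nested binary connectives in the output:
print(to_cnf(('<=>', 'B11', ('|', 'P12', 'P21'))))
# ('&', ('|', ('~', 'B11'), ('|', 'P12', 'P21')),
#       ('&', ('|', ('~', 'P12'), 'B11'), ('|', ('~', 'P21'), 'B11')))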

Conjunctive Normal Form
Example:
B₁,₁ ⇔ (P₁,₂ ∨ P₂,₁)
≡ (B₁,₁ ⇒ (P₁,₂ ∨ P₂,₁)) ∧ ((P₁,₂ ∨ P₂,₁) ⇒ B₁,₁)
≡ (¬B₁,₁ ∨ P₁,₂ ∨ P₂,₁) ∧ (¬(P₁,₂ ∨ P₂,₁) ∨ B₁,₁)
≡ (¬B₁,₁ ∨ P₁,₂ ∨ P₂,₁) ∧ ((¬P₁,₂ ∧ ¬P₂,₁) ∨ B₁,₁)
≡ (¬B₁,₁ ∨ P₁,₂ ∨ P₂,₁) ∧ (¬P₁,₂ ∨ B₁,₁) ∧ (¬P₂,₁ ∨ B₁,₁)

Resolution
Unit resolution: from l₁ ∨ ... ∨ lₖ and m, where lᵢ and m are complementary literals, infer l₁ ∨ ... ∨ lᵢ₋₁ ∨ lᵢ₊₁ ∨ ... ∨ lₖ.
Full resolution: from l₁ ∨ ... ∨ lₖ and m₁ ∨ ... ∨ mₙ, where lᵢ and mⱼ are complementary literals, infer the disjunction of all the other literals.

Resolution
The resulting clause should contain only one copy of each literal. (The removal of multiple copies of literals is called factoring.) e.g. resolving A ∨ B with A ∨ ¬B yields A, not A ∨ A.
The resolution rule applies only to disjunctions of literals. (But recall that every sentence can be transformed into a 3-CNF sentence.)

Resolution
Any complete search algorithm, applying only the resolution rule, can derive any conclusion entailed by any knowledge base. Given that A is true, we cannot generate the consequence A ∨ B; but we can answer whether A ∨ B is true. This is called refutation completeness: resolution can always be used to either confirm or refute a sentence.
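A single resolution step is easy to sketch when clauses are represented as sets of literals, since the set representation performs factoring automatically (the encoding, with '~P' standing for ¬P, is a choice made here):

# A sketch of one resolution step on clauses as frozensets of string literals.

def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolvents(c1, c2):
    """All clauses obtainable by resolving c1 with c2 on a complementary pair."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

# Resolving (A | B) with (A | ~B) yields A, not A | A, because sets factor:
print(resolvents(frozenset({'A', 'B'}), frozenset({'A', '~B'})))
# [frozenset({'A'})]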

A Resolution Algorithm
Resolution-based inference procedures follow the principle of proof by contradiction: to show that KB ⊨ α, we show that (KB ∧ ¬α) is unsatisfiable. First, (KB ∧ ¬α) is converted into CNF. Then each pair of clauses that contain complementary literals is resolved to produce a new clause.

A Resolution Algorithm
The process continues until one of two things happens: there are no new clauses that can be added, in which case KB does not entail α; or two clauses (e.g. P and ¬P) resolve to yield the empty clause, in which case KB entails α.
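A sketch of the full loop (cf. PL-RESOLUTION, Figure 7.12), reusing resolvents and negate from the sketch above; pl_resolution is a name chosen here:

from itertools import combinations

# Saturate the clauses of CNF(KB & ~alpha) under resolution: the empty clause
# means KB |= alpha; closure without it means KB does not entail alpha.

def pl_resolution(clauses):
    """clauses: a set of frozensets of literals for CNF(KB & ~alpha)."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolvents(c1, c2):
                if not r:
                    return True            # empty clause: contradiction found
                new.add(r)
        if new <= clauses:
            return False                   # closure reached, no empty clause
        clauses |= new

# KB = R2 & R4 from the slides, alpha = ~P12, so we add ~alpha = P12:
kb_clauses = {frozenset({'~B11', 'P12', 'P21'}),
              frozenset({'~P12', 'B11'}),
              frozenset({'~P21', 'B11'}),
              frozenset({'~B11'}),         # R4
              frozenset({'P12'})}          # ~alpha
print(pl_resolution(kb_clauses))           # True: KB entails ~P12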

A Resolution Algorithm
The PL-RESOLUTION algorithm. (Artificial Intelligence: A Modern Approach, 2nd ed., Figure 7.12)

A Resolution Algorithm
Example: KB = R₂ ∧ R₄, where R₂: B₁,₁ ⇔ (P₁,₂ ∨ P₂,₁), which in CNF is (¬B₁,₁ ∨ P₁,₂ ∨ P₂,₁) ∧ (¬P₁,₂ ∨ B₁,₁) ∧ (¬P₂,₁ ∨ B₁,₁), and R₄: ¬B₁,₁. We wish to prove α = ¬P₁,₂, so KB ∧ ¬α is converted into CNF. Any clause containing two complementary literals can be discarded. (Artificial Intelligence: A Modern Approach, 2nd ed., Figure 7.13)

A Resolution Algorithm
Analysis: the resolution closure RC(S) of a set of clauses S is the set of all clauses derivable by repeated application of the resolution rule to clauses in S or their derivatives. RC(S) must be finite, because only finitely many distinct clauses can be constructed from the symbols P₁, ..., Pₖ appearing in S. Hence PL-RESOLUTION always terminates. Note that the last sentence might not be true without the factoring step that removes multiple copies of literals.

A Resolution Algorithm
Analysis: the Ground Resolution Theorem states that if a set of clauses is unsatisfiable, then the resolution closure of those clauses contains the empty clause. We can prove this theorem by demonstrating its contrapositive: if the closure RC(S) does not contain the empty clause, then S is satisfiable.

A Resolution Algorithm
Analysis (cont'd): we can construct a model for S with suitable truth values for P₁, ..., Pₖ. For i from 1 to k: if there is a clause containing the literal ¬Pᵢ such that all its other literals are false under the assignment chosen for P₁, ..., Pᵢ₋₁, then assign false to Pᵢ; otherwise, assign true to Pᵢ.

Forward & Backward Chaining
The completeness of resolution makes it a very important inference method. In many practical situations, however, the full power of resolution is not needed. Real-world KBs often contain only clauses of a restricted kind called Horn clauses.

Forward & Backward Chaining
Horn clauses: a Horn clause is a disjunction of literals of which at most one is positive. (In the following algorithm, we assume for simplicity that each clause contains exactly one positive literal.)
Exactly one positive literal: a definite clause.
A definite clause with no negative literals: a fact.
No positive literal: an integrity constraint.
Every Horn clause can be written as an implication, e.g. ¬L ∨ ¬M ∨ P is (L ∧ M) ⇒ P.

Forward & Backward Chaining
Horn clauses: inference with Horn clauses can be done through the forward chaining and backward chaining algorithms. Deciding entailment with Horn clauses can be done in time that is linear in the size of the KB.

Forward & Backward Chaining
Forward chaining: it begins from the known facts (the clauses with only a positive literal) in the KB. If all the premises of an implication are known, then its conclusion is added to the set of known facts. The process continues until the query is added or until no further inferences can be made.

Forward & Backward Chaining
The PL-FC-ENTAILS algorithm; HEAD[c] is the positive literal of clause c. (Artificial Intelligence: A Modern Approach, 2nd ed., Figure 7.14)
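A sketch of the procedure under an assumed encoding of definite clauses as (premises, conclusion) pairs, with facts carrying an empty premise tuple (pl_fc_entails is a name chosen here; the sample KB is the textbook's forward-chaining example):

from collections import deque

# A sketch of forward chaining for definite clauses (cf. AIMA Figure 7.14).

def pl_fc_entails(clauses, query):
    count = {i: len(prem) for i, (prem, _) in enumerate(clauses)}   # unmet premises
    inferred = set()
    agenda = deque(concl for prem, concl in clauses if not prem)    # the known facts
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (prem, concl) in enumerate(clauses):
            if p in prem:
                count[i] -= 1
                if count[i] == 0:
                    agenda.append(concl)    # all premises known: conclude
    return False

# P => Q, (L & M) => P, (B & L) => M, (A & P) => L, (A & B) => L, A, B:
clauses = [(('P',), 'Q'), (('L', 'M'), 'P'), (('B', 'L'), 'M'),
           (('A', 'P'), 'L'), (('A', 'B'), 'L'), ((), 'A'), ((), 'B')]
print(pl_fc_entails(clauses, 'Q'))   # True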

Forward & Backward Chaining
A forward-chaining trace. (Artificial Intelligence: A Modern Approach, 2nd ed., Figure 7.15)

Forward & Backward Chaining
Analysis: forward chaining is sound, since every inference is essentially an application of Modus Ponens. Forward chaining is also complete: consider the final state of the inferred table after the algorithm terminates. The table contains true for each symbol inferred during the process, and false for all other symbols. We can view this table as a model of the KB. Any atomic sentence entailed by the KB must be true in this model, and thus it is inferred by the algorithm.

Forward & Backward Chaining
Backward chaining: as its name suggests, it works backwards from the query. If the query q is known to be true, no work is needed. Otherwise, the algorithm finds those implications in the KB that conclude q; if all the premises of one of those implications can be proved true (by backward chaining), then q is true.
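A recursive sketch over the same (premises, conclusion) encoding; the visited set is an addition made here to guard against looping on cyclic clauses:

# A sketch of backward chaining: try to prove every premise of some clause
# that concludes the query; facts succeed immediately (empty premise tuple).

def pl_bc_entails(clauses, query, visited=frozenset()):
    if query in visited:
        return False                        # avoid infinite regress on cycles
    for prem, concl in clauses:
        if concl == query and all(
                pl_bc_entails(clauses, p, visited | {query}) for p in prem):
            return True
    return False

print(pl_bc_entails(clauses, 'Q'))          # True, touching only relevant clauses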

Forward & Backward Chaining
Forward chaining is an example of data-driven reasoning: it can be used within an agent to derive conclusions from incoming percepts. Backward chaining is a form of goal-directed reasoning: it is useful for answering specific questions such as "What should I do now?" Its cost is often lower, since it touches only relevant facts. An agent should share the work between forward and backward reasoning.