Plan: Knowledge-Based Agents; Logics; Propositional Logic

KB Agents and Propositional Logic

Announcements: Assignment 2 mailed out last week. Questions?

Knowledge-Based Agents

So far, what we've done is look at the basic approach for problem-solving as search in a state space, requiring a specification of a state space and a description of actions as transformations among states. This can be very powerful and useful. However, sometimes it can be inefficient. For example, what about cases where an agent has only partial knowledge about the world?

For example, consider a simple vacuum world, comprised of two connected rooms and a vacuuming agent. Each of the rooms might have dirt on the floor. The agent is capable of moving between the rooms, and of vacuuming the room that it currently is in. Given this description, how many possible states of the world might there be? Now, what if the agent knows where it is and that there is dirt on the floor, but not which room(s) the dirt is in? What would the initial state of the world be?

We can capture this partial knowledge by a set of states, where the set includes all those deemed possible by the agent. We can solve the problem by finding a single sequence of operators that leads from any of the initial states to the goal states. For example: vacuum the current room, move to the other room, and vacuum the other room. Had the agent known more specifically which state it was in, it might have avoided one of those vacuuming operations.

This manner of solving the problem is possible, but rather unwieldy. If the agent doesn't know which of the n rooms has dirt, then there are 2^n world states making up the initial knowledge state. Searching the space of sets of states can blow the search space up enormously. This seems a waste in this case, as a
simple bit of reasoning tells us that if we just go to each room and vacuum, we will end up with a clean house whatever state it was in previously.

As in this example, it is very often the case that it is feasible to characterize a partial state of knowledge more concisely than to list all the possible physical states. Think about the example: it is very concise to say that the agent knows that some of the rooms are dirty. It is another matter to list all the ways in which some of the rooms can be dirty. If it knows some of the rooms are dirty, this enumeration should be possible in a very mechanical way; but perhaps it can avoid much of the enumeration (which is what leads to the combinatorics) by just manipulating its knowledge rather than its models of the states.

This goes to the heart of developing an agent based on knowledge. It frees us up from having to talk about specifics of world states, and instead lets us talk simply about what an agent knows. Note, though, that knowing is more than just a matter of retrieving facts from some kind of database. What an agent knows can be well beyond what is explicitly represented. In the vacuum example, if the agent knows that some of the rooms are dirty, it can answer all kinds of questions: Might room A be dirty? Must room A be dirty? Even without storing all of the answers in its database, it can derive them from what it knows.

Thus, a knowledge-based agent is one that it makes sense to ASK questions of, and to TELL knowledge to. Such an agent can be usefully described as having knowledge, where this knowledge can be changed by TELLing it other knowledge, and the agent's current knowledge state (and its consequences) can be probed by ASKing. The knowledge state of such an agent is called its knowledge base, or KB. The idea of programming an agent by TELLing it things is called the declarative approach.

For example, consider our vacuum cleaner agent. First, we TELL it that the house has exactly two rooms, A and B.
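The universal plan just described (vacuum the current room, move, vacuum again) can be verified mechanically over the whole set of possible initial states. Below is a minimal sketch; the state representation, action names, and plan are invented for illustration and are not from the lecture.

```python
# A world state is (agent_room, dirt), where dirt is a frozenset of dirty
# rooms. A belief state is a set of world states the agent considers possible.

def apply(action, state):
    room, dirt = state
    if action == "Suck":
        return (room, dirt - {room})        # cleaning removes dirt here
    if action == "Move":
        return ("B" if room == "A" else "A", dirt)
    return state

def apply_to_belief(action, belief):
    # Acting in a belief state means acting in every state it contains.
    return {apply(action, s) for s in belief}

# The agent knows it starts in room A and that at least one room is dirty,
# but not which; that is three possible worlds.
belief = {("A", frozenset(d)) for d in [{"A"}, {"B"}, {"A", "B"}]}

for action in ["Suck", "Move", "Suck"]:
    belief = apply_to_belief(action, belief)

# Whatever the true initial state was, the house is now clean.
assert all(dirt == frozenset() for _, dirt in belief)
print(belief)  # {('B', frozenset())}
```

Note how the three possible worlds collapse to a single clean state, which is exactly why the fixed plan works without the agent ever learning where the dirt was.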
Then we TELL it that some room or rooms have dirt in them. We could ask it whether room A has dirt in it... what should it answer? We could ask it whether room A might have dirt in it... what should it answer? Then we TELL it that room A has no dirt in it. What should it answer if we ask whether room A might have dirt in it? How about whether room B
might have dirt in it? How about whether room B has dirt in it?

Note that here we were able to talk about interacting with the agent by TELLing and ASKing, and we possess some notion of how changing the knowledge of the agent should change what it believes based on that knowledge. In essence, the contents of its knowledge base lead (or should lead) to new conclusions about the world. This is the notion of entailment, which we'll soon return to.

First, though, note that the TELL/ASK interface is just a higher-level abstraction of the agent, no different from the other abstractions we typically use in computer science. In describing a machine, we have a choice about whether to think of it at the programming language level, the machine language level, or the digital level, down to the transistors. Here we are adding a new, higher level, called the knowledge level. Just as with the other abstractions, to the extent that the higher level is an accurate characterization, it can dramatically simplify our model of the agent and help us develop more complex behaviors.

Logics

Of course, to make the knowledge-level abstraction work, it needs to be grounded in a computational form. We need several things. We need a notation to write down the KB. This is called the knowledge representation language. A KR language consists of two parts:

syntax: the legal sentences
semantics: the facts in the world to which the sentences correspond

Note that the syntax defines a structure, while the semantics ascribes meaning to the symbols embedded in the syntax. We are used to that as computer scientists: what we name a variable isn't important; rather, it is how we expect it to be used (what it is bound to) and where it fits into the program that is crucial.

Interpretations. The semantics thus defines an interpretation for symbols. For example, the interpretation of a sentence DirtyA might be the fact that there is dirt in room A.
The sentence is true if it is the case in the world that this fact holds.
A KR language with a precisely defined syntax and semantics is called a logic. In addition, we typically associate with the formal language a reasoning method or inference mechanism, by which some sentences are derived from others. It is useful to consider some properties of an inference mechanism.

First, we need the notion of entailment. A fact is entailed by a state of knowledge (a KB) if, for every world possible in that state of knowledge, the fact is true (notation: KB ⊨ α). For example, in our dirty room case, the KB claims that it is true that either room A or room B is dirty, and that room A isn't dirty. The fact that room B is dirty is entailed by the KB, at least in what we would usually consider a normal world. But just because something is entailed doesn't mean that we can build a mechanism to find it.

Next we need the notion of derivation. If we can derive α from the KB using our inference mechanism, we write KB ⊢ α, and call the record of the inference procedure a proof.

So now the key is the degree to which our inference mechanism derives things that are entailed. We will say that our inference mechanism (or the logic system that includes it) is sound if it only derives things that are truly entailed. We will say that it is complete if it can derive everything that is truly entailed. Soundness: if derived, then entailed. Completeness: if entailed, then derivable.

Sentences can be true, valid, satisfiable, or unsatisfiable. The first requires a specific interpretation; the others are properties with respect to the set of all interpretations. For example, the sentence Dx ∨ ¬Dx is valid: it is true in all interpretations, no matter what the symbol Dx happens to mean. The sentence Dx ∨ Dy is not valid but is satisfiable, because under some interpretations it is true, and in others it is false. But the sentence Dx ∧ ¬Dx is neither valid nor satisfiable.

Note that we have been working with a very broad definition of logic.
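The dirty-rooms entailment above (from DirtyA ∨ DirtyB and ¬DirtyA, conclude DirtyB) can be checked directly from the definition, by enumerating every interpretation. A minimal sketch, with sentences encoded as Python predicates over a truth assignment:

```python
from itertools import product

symbols = ["DirtyA", "DirtyB"]

# The KB: some room is dirty, and room A is not dirty.
kb = [
    lambda m: m["DirtyA"] or m["DirtyB"],
    lambda m: not m["DirtyA"],
]
alpha = lambda m: m["DirtyB"]

def entails(kb, alpha, symbols):
    """KB |= alpha iff alpha is true in every model of the KB."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(s(model) for s in kb) and not alpha(model):
            return False   # found a model of the KB where alpha fails
    return True

print(entails(kb, alpha, symbols))  # True
print(entails([], alpha, symbols))  # False: an empty KB entails nothing new
```

This is exactly the "check every possible world" reading of ⊨, and it foreshadows the O(2^n) cost of the general truth-table method discussed below.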
In fact, any language that is operated on by formal rules is a logic. For example, we are all familiar with the logic of mathematics. We can have sentences, such as equations: 2 + x**2 = y + 2. We have inference rules. For
example, if we know our equation above is true, then we also know that x**2 = y, based on our inference mechanism that says that if we subtract (add) equal amounts from (to) both sides of an equation, then the equation we get out has the same truth value.

Note that this inference rule leads to derivations more generally: if X+C = Y+C then X = Y. We could test to see if this rule itself is valid, or satisfiable, or unsatisfiable. We could substitute different values for X, Y, and C. If we stumble on the case, for example, where X = Y = C = 2, then the first equality holds and so does the second one, so the rule is satisfiable. If we have X = 2 and Y = 3, then X+C is not equal to Y+C, so our inference rule doesn't apply anyway. In fact, if we had enough time, we might conclude that the inference rule is valid: every time we find an X, Y, and C that make the test part true, the other part is also true.

As an aside, sometimes we might equate valid with vacuous. For example, the statement "Either it is raining outside or it isn't" is valid, but not especially enlightening. But these kinds of tautologies just scratch the surface; in fact, valid sets of sentences are very powerful: to know something must be true even though you don't know all the details is very useful.

Of course, we probably wouldn't want to establish validity through an exhaustive enumeration. For mathematics, if we want to verify that something like the above is valid, we might appeal to other proof procedures that might involve yet other inference mechanisms. But there might be other logics where we could verify the validity of a set of sentences in more of a brute-force manner. This is the case for some of the more specific usages of the term logic. Often the term logic is reserved for certain general-purpose languages, one of which, propositional logic, we discuss today; the other, first-order logic, we discuss next time.
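The substitution experiment described above can be run mechanically. The sketch below tries every integer substitution in a small range; of course, a finite enumeration can only fail to find a counterexample, it cannot prove the rule valid over all numbers.

```python
# Brute-force check of the rule "if X+C == Y+C then X == Y" over a small
# range of integer substitutions, mirroring the enumeration in the text.
valid_so_far = all(
    x == y
    for x in range(-5, 6)
    for y in range(-5, 6)
    for c in range(-5, 6)
    if x + c == y + c       # only cases where the test part holds matter
)
print(valid_so_far)  # True: no counterexample in this range
```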
Propositional Logic

Propositional Logic has a syntax that defines the legal sentences. Give the BNF; explain the terminals.
Sentence → True | False | PropositionalSymbol | (Sentence) | ¬Sentence | Sentence ∧ Sentence | Sentence ∨ Sentence | Sentence ⇒ Sentence | Sentence ⇔ Sentence

What are the semantics of these? Well, we can really get at them by looking at truth tables, where we compose truth values for complex expressions. For example, the implication case looks like this...

We can test validity by generating all interpretations. Validity is the right property for an agent because its results are invariant with respect to interpretations of the primitive symbols. We can test entailment by determining whether the corresponding implication is valid. This requires time O(2^n) for a sentence with n propositional symbols. A sentence is satisfiable iff its negation is not valid.

Some of the rooms are dirty: DirtyA V DirtyB
If A has been vacuumed, then it isn't dirty: VacuumedA => ~DirtyA

Now, given what we know, we might suppose that if A has been vacuumed, then it must be B that is dirty: VacuumedA => DirtyB. This sounds reasonable. We can check it with a truth table.

DA DB VA | DAvDB | VA=>~DA | VA=>DB
T  T  T  |   T   |    F    |   T
T  T  F  |   T   |    T    |   T
T  F  T  |   T   |    F    |   F
T  F  F  |   T   |    T    |   T
F  T  T  |   T   |    T    |   T
F  T  F  |   T   |    T    |   T
F  F  T  |   F   |    T    |   F
F  F  F  |   F   |    T    |   T
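The truth-table check can also be done mechanically: enumerate all 2^3 assignments and confirm the conclusion wherever both premises hold. A minimal sketch:

```python
from itertools import product

def implies(p, q):
    # Truth-functional implication: false only when p is true and q is false.
    return (not p) or q

# In every row where DirtyA v DirtyB and VacuumedA => ~DirtyA both hold,
# check that VacuumedA => DirtyB holds as well.
entailed = all(
    implies(va, db)
    for da, db, va in product([True, False], repeat=3)
    if (da or db) and implies(va, not da)
)
print(entailed)  # True
```

This is the O(2^n) procedure from above specialized to n = 3 symbols.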
In short, whenever the propositions are such that the two sentences we've been told are true are indeed true, it is also the case that the third sentence is true. This illustrates how we can show that two (or in general more) sentences entail a third one, based on truth tables.

In fact, these patterns allow us to establish inference rules for making derivations that are sound. For example:

Modus Ponens (Implication-Elimination): from α ⇒ β and α, infer β
And-Elimination: from α1 ∧ α2 ∧ ... ∧ αn, infer αi
And-Introduction: from α1, α2, ..., αn, infer α1 ∧ α2 ∧ ... ∧ αn
Or-Introduction: from αi, infer α1 ∨ α2 ∨ ... ∨ αn
Double-Negation Elimination: from ¬¬α, infer α
Unit Resolution (same as MP): from α ∨ β and ¬β, infer α
Resolution: from α ∨ β and ¬β ∨ γ, infer α ∨ γ

For example, we can verify the resolution rule using truth tables. We can use these inference rules to construct proofs. Example:

Some of the rooms are dirty: DirtyA V DirtyB
Vacuuming cleans them: VacuumA => ~DirtyA
Convert the latter to clause form: ~VacuumA V ~DirtyA
Now use resolution to get: ~VacuumA V DirtyB
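The resolution step in the proof above can be coded directly. A minimal sketch, representing a clause as a frozenset of signed literals (the representation is invented for illustration, not a full theorem prover):

```python
# A literal is (symbol, sign): ("DirtyA", True) means DirtyA,
# ("DirtyA", False) means ~DirtyA. A clause is a frozenset of literals.

def resolve(c1, c2):
    """Return all resolvents of two clauses (one per complementary pair)."""
    out = []
    for (sym, sign) in c1:
        if (sym, not sign) in c2:
            # Drop the complementary pair, union the remaining literals.
            out.append((c1 - {(sym, sign)}) | (c2 - {(sym, not sign)}))
    return out

dirty = frozenset({("DirtyA", True), ("DirtyB", True)})     # DirtyA V DirtyB
vac   = frozenset({("VacuumA", False), ("DirtyA", False)})  # ~VacuumA V ~DirtyA

res = resolve(dirty, vac)
print(res)  # one resolvent: ~VacuumA V DirtyB
```

The DirtyA literal and its negation cancel, leaving exactly the clause ~VacuumA V DirtyB derived in the text.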
Of course, this could be converted back to an implication: VacuumA => DirtyB. This is what we proved with truth tables. What if we are told that A has been vacuumed? Then using Modus Ponens we can conclude DirtyB. And of course, with Or-Introduction, we can infer from DirtyB that DirtyA V DirtyB is true as well.

Inference is Search!

Finding a proof can be viewed as a search procedure, where inference rules are operators that transform one set of sentences into another, augmented set. The branching factor depends on how many inference rules are applicable. However, there is no need to ever back up (remove sentences generated along an abandoned path), since propositional logic is monotonic: if KB entails α, then (KB, S) entails α, for any sentence S.

Note that we can answer any entailment question in time O(2^n), using the validity procedure. It does not seem that we can do better in the general case. In some special cases, however, more efficient procedures are available.

1. If all we use is conjunction (no disjunction or negation), then essentially our KB is a database, and we can answer any question via lookup.
2. A sentence of the form P1 ∧ P2 ∧ ... ∧ Pn ⇒ Q is called a Horn sentence. If all sentences are Horn, then we can simply apply Modus Ponens until there are no more conclusions, and answer any entailment question in polynomial time.
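The Horn-clause procedure (apply Modus Ponens until nothing new is concluded) is easy to sketch. The example KB below is invented for illustration; each rule is a list of premise symbols plus a conclusion symbol, and a fact is a rule with no premises.

```python
# Forward chaining over Horn sentences: repeatedly fire any rule whose
# premises are all known, until a full pass adds nothing new.

rules = [
    ([], "VacuumedA"),                   # fact: room A has been vacuumed
    (["VacuumedA"], "CleanA"),           # VacuumedA => CleanA
    (["CleanA", "VacuumedA"], "Done"),   # CleanA ^ VacuumedA => Done
]

known = set()
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if conclusion not in known and all(p in known for p in premises):
            known.add(conclusion)        # one Modus Ponens step
            changed = True

print(sorted(known))  # ['CleanA', 'Done', 'VacuumedA']
```

Each pass over the rules either adds a new symbol or terminates, so the number of passes is bounded by the number of symbols, giving the polynomial-time behavior claimed above.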