
Possibilistic Logic

Damien Peelman, Antoine Coulon, Amadou Sylla, Antoine Dessaigne, Loïc Cerf, Narges Hadji-Hosseini

November 21, 2005

1 Introduction

In real life there are situations where only incomplete information is available. This information can be seen as incomplete evidence. Yet it is important to be able to reason with the help of an incomplete knowledge base and partially inconsistent knowledge. Possibilistic logic offers an approach in this direction by respecting the fuzziness of the available information.

2 Relationship of possibility theory and fuzzy sets

In order to reason with an incomplete knowledge base it is important to introduce at least some basic mathematical background and to explain the relationship of possibilistic logic with the concept of fuzzy sets, which emerged as a mathematical tool for describing the kind of model people usually use when reasoning about systems. Roughly, fuzzy sets allow us to model sentences like "John is tall" and "the tomato is red", where what counts as tall or red is fuzzy and not exactly defined. The degree of membership is expressed by a membership function and takes values between 0 and 1. However, possibilistic logic is not fuzzy logic in the sense of being a multi-valued logic. It is rather a technique for reasoning in systems where possibility- and certainty-qualified statements appear: it expresses a fuzzy restriction on the truth value of a sentence rather than an intermediate truth value.

The notion of a possibility distribution plays a major role in possibility theory. A possibility distribution π_x(u), attached to a variable x whose value is unknown, expresses the possibility of the equality x = u:

- π_x(u) = 0 means x = u is impossible
- π_x(u) = 1 means x = u is completely allowed
- π_x(u) > π_x(u') means x = u is preferred to x = u'

A distribution π_x is said to be more specific than π'_x when π_x ≤ π'_x pointwise, i.e. when it rules out more values. The principle of minimal specificity selects the greatest (least specific) possibility distribution compatible with the available constraints, so every value that is not explicitly ruled out keeps the maximal degree of possibility.

A possibility distribution allows us to express two kinds of statements about a sentence: certainty-qualified (necessity-valued) statements and possibility-qualified statements. The degree of necessity expresses to what extent the available evidence entails the truth of the sentence, whereas the degree of possibility expresses to what extent the truth of the sentence is not incompatible with the available evidence. For reasoning in a knowledge base consisting of necessity-valued statements it is sufficient to know, or to compute, the possibility distribution fulfilling the principle of minimal specificity. The set of models of the base (the interpretations in which its sentences are true) is then itself a fuzzy set, meaning that it has no clear boundaries: the possibility distribution is exactly the membership function of the interpretations in the fuzzy set of models.
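To make these definitions concrete, here is a minimal Python sketch (the domain, the distribution values and the helper names are invented for illustration) of a possibility distribution over a finite domain, together with the induced possibility measure Π(A) = max of π_x(u) over u in A, and the necessity measure N(A) = 1 − Π of the complement of A.

    # A possibility distribution over a finite domain and the induced
    # possibility/necessity measures. Domain, values and names are illustrative.

    def possibility(pi, event):
        """Π(A) = max of π(u) over u in A (0 for the empty event)."""
        return max((pi[u] for u in event), default=0.0)

    def necessity(pi, event, domain):
        """N(A) = 1 − Π(complement of A): how certain it is that x lies in A."""
        complement = set(domain) - set(event)
        return 1.0 - possibility(pi, complement)

    # Possibility distribution for an unknown temperature x (invented values);
    # its maximum is 1, i.e. the distribution is normalized.
    domain = {"cold", "mild", "warm", "hot"}
    pi_x = {"cold": 0.2, "mild": 1.0, "warm": 1.0, "hot": 0.4}

    moderate = {"mild", "warm"}                    # "the temperature is moderate"
    print(possibility(pi_x, moderate))             # 1.0: fully possible
    print(necessity(pi_x, moderate, domain))       # 0.6: somewhat certain
    print(necessity(pi_x, {"hot"}, domain))        # 0.0: no certainty that it is hot

Note that with a normalized distribution, N(A) > 0 forces Π(A) = 1, which matches the intuition that an event we are somewhat certain of must be fully possible.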

3 Automated deduction

Dubois and Prade conceived in 1991 a formal system (that is to say, a set of axioms and inference rules) which can be used to deduce automatically the certainty of any formula given a knowledge base. This system only uses necessity-qualified statements: the knowledge base is a set of formulas weighted with a degree of certainty. Usually this degree is expressed as a number ranging from 0 (a totally uncertain statement, which does not mean that it is false!) to 1 (a totally certain statement). In fact, only an order relation on the formulas is required. For instance, automatic deduction can be carried out on the following knowledge base (inspired by the French reality-TV show Super Senior):

Φ1 = ((Eliminated(Loretta) ∨ Eliminated(Georgette)) ∧ (¬Eliminated(Loretta) ∨ ¬Eliminated(Georgette)), certain)
Φ2 = (∀x LosesHerDentures(x) → Eliminated(x), moderately certain)
Φ3 = (LosesHerDentures(Loretta), quite uncertain)
Φ4 = (∀x SwimsWith(x, Jean-Edouard) → Eliminated(x), almost certain)

Dubois and Prade also proved that if you repeatedly apply the inference rule called the resolution rule,

(c1 α1), (c2 α2) ⊢ (R(c1, c2) min(α1, α2))

where R(c1, c2) is any classical resolvent of c1 and c2, two clauses from the knowledge base, and αi is the degree of certainty of ci, you generate only clauses that can actually be inferred in possibilistic logic; moreover, you can generate all of them.

To compute the degree of certainty of the formula ϕ = Eliminated(Loretta) we can use what is called refutation by resolution. The principle of this method is to add (¬ϕ certain) = (¬Eliminated(Loretta) certain) to the knowledge base and then to apply the resolution rule repeatedly in order to compute the inconsistency degree of this new knowledge base. This inconsistency degree has been proved to be the degree of certainty of ϕ.

With this example, we can prove that Eliminated(Loretta) is quite uncertain. To do so we apply the resolution rule with c1 = (¬Eliminated(Loretta) certain) and c2 = Φ2 and deduce (¬LosesHerDentures(Loretta) moderately certain), since min(certain, moderately certain) = moderately certain. Then, by applying the resolution rule again with c1 = (¬LosesHerDentures(Loretta) moderately certain) and c2 = Φ3, we obtain a contradiction with degree of certainty quite uncertain, since min(moderately certain, quite uncertain) = quite uncertain.

We can also prove that Eliminated(Georgette) is totally uncertain, since we cannot derive any contradiction after adding (¬Eliminated(Georgette) certain) to the knowledge base: ¬Eliminated(Georgette) is said to be consistent with the knowledge base.
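The refutation above is mechanical enough to sketch in a few lines of Python. The clause encoding, the numeric scale chosen for the linguistic weights (certain = 1.0, almost certain = 0.8, moderately certain = 0.6, quite uncertain = 0.3) and the helper names are our own illustrative choices, since the report only requires an ordering of the weights; the sketch merely saturates the base with the rule (R(c1, c2), min(α1, α2)) and reads off the weight of the empty clause.

    # Possibilistic resolution by refutation. The clause encoding, the numeric
    # scale for the linguistic weights and all names are illustrative choices;
    # the report itself only needs an ordering of the weights.

    from itertools import combinations

    def neg(lit):
        return lit[1:] if lit.startswith("~") else "~" + lit

    def resolvents(c1, c2):
        """All classical resolvents of two clauses (frozensets of literals)."""
        for lit in c1:
            if neg(lit) in c2:
                yield frozenset((c1 - {lit}) | (c2 - {neg(lit)}))

    def inconsistency_degree(weighted_clauses):
        """Saturate with (R(c1, c2), min(a1, a2)); return the best weight of the empty clause."""
        best = {}                                   # clause -> highest known weight
        for c, a in weighted_clauses:
            best[c] = max(a, best.get(c, 0.0))
        changed = True
        while changed:
            changed = False
            for (c1, a1), (c2, a2) in combinations(list(best.items()), 2):
                for r in resolvents(c1, c2):
                    w = min(a1, a2)
                    if w > best.get(r, 0.0):
                        best[r] = w
                        changed = True
        return best.get(frozenset(), 0.0)           # 0 if no contradiction was found

    # The knowledge base of this section, instantiated for Loretta as propositional clauses.
    KB = [
        (frozenset({"EliminatedL", "EliminatedG"}), 1.0),        # Φ1, first clause
        (frozenset({"~EliminatedL", "~EliminatedG"}), 1.0),      # Φ1, second clause
        (frozenset({"~LosesDenturesL", "EliminatedL"}), 0.6),    # Φ2 instantiated for Loretta
        (frozenset({"LosesDenturesL"}), 0.3),                    # Φ3
    ]

    # Refutation: add (~Eliminated(Loretta), certain) and measure the inconsistency degree.
    print(inconsistency_degree(KB + [(frozenset({"~EliminatedL"}), 1.0)]))   # 0.3

On the Loretta example this prints 0.3, the value we chose for "quite uncertain", matching the hand derivation above; Φ4 is omitted because no clause about swimming with Jean-Edouard takes part in the refutation.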

Possibilistic logic is linked to the theory of belief revision. In this theory, a knowledge base must always remain consistent when it is updated by the addition of new beliefs. To achieve this, the weakest beliefs (that is to say, the ones with the lowest degrees of certainty) are dropped whenever their degrees of certainty fall below the inconsistency degree of the updated knowledge base.

4 Generalisation

The basic version of possibilistic logic that we have discussed so far may not be sufficient to model some kinds of incomplete information we may wish to handle, such as:

- possibility-qualified sentences, for instance "it is possible that John comes"
- conditional sentences whose condition depends on a fuzzy predicate, for instance "the later John arrives, the more certain the meeting will not be quiet"
- sentences involving vague predicates, for instance "if the temperature is high then there will be only a few participants"

Two kinds of weighted formulas can be distinguished:

- necessity-valued formulas, denoted (ϕ (N α)), expressing that N(ϕ) ≥ α
- possibility-valued formulas, denoted (ϕ (Π α)), expressing that Π(ϕ) ≥ α

Previously only the first kind appeared in our knowledge bases; the language is now extended to handle both. Thus a knowledge base F (a set of weighted formulas) can combine the two forms, e.g. F = {(p (N 0.7)), (p → q (Π 0.8))}. Knowing the relation between the two forms, all the previous notions extend in a simple way. The relation is the duality

Π(ϕ) = 1 − N(¬ϕ)

that is, the less possible an event is, the more certain it is that it will not happen. Thus we can:

- express ignorance, as distinguished from what we actually know
- express inconsistency in a finer way
- ...

All these generalisations allow real problems to be handled more accurately, because we can now take into account the precise aspects of one particular problem.
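As a small illustration of the duality (again with weights placed on the [0, 1] scale purely for illustration), the pair (N(ϕ), N(¬ϕ)) is enough to distinguish certainty, total ignorance and partial inconsistency, something a single probability value cannot do since it forces P(ϕ) + P(¬ϕ) = 1.

    # The duality Π(ϕ) = 1 − N(¬ϕ), with weights on [0, 1] purely for illustration.

    def possibility_of(n_not_phi):
        """Π(ϕ) = 1 − N(¬ϕ)."""
        return 1.0 - n_not_phi

    def describe(n_phi, n_not_phi):
        """Read the epistemic state off the pair (N(ϕ), N(¬ϕ))."""
        if n_phi > 0 and n_not_phi > 0:
            # Both ϕ and ¬ϕ somewhat certain: a partially inconsistent base.
            return f"partially inconsistent, inconsistency degree {min(n_phi, n_not_phi)}"
        if n_phi == 0 and n_not_phi == 0:
            # Nothing is certain, everything stays possible: total ignorance.
            return f"total ignorance: Π(ϕ) = {possibility_of(n_not_phi)}, Π(¬ϕ) = {possibility_of(n_phi)}"
        side = "ϕ" if n_phi > 0 else "¬ϕ"
        return f"certainty {max(n_phi, n_not_phi)} in favour of {side}"

    print(describe(0.7, 0.0))   # some certainty in favour of ϕ
    print(describe(0.0, 0.0))   # ignorance: both ϕ and ¬ϕ remain fully possible
    print(describe(0.4, 0.3))   # a partially inconsistent state (degree 0.3)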

5 Application to negotiation

In multi-agent systems, the agents sometimes need to share resources or information, and to obtain them they may need to negotiate. This means that they have to find a compromise that is satisfactory for each of them. For instance, imagine a seller and a buyer discussing the price of an object: the seller wants to sell the object at the highest possible price and the buyer wants to buy it at the lowest one, so they negotiate. There are two ways to handle a negotiation: a third-party agent or program can compute the best price, or the agents involved in the negotiation can discuss among themselves. Possibilistic logic can help us model the second case, because it allows us to manipulate uncertain values and, in our case, unstable mental states.

Three possibilistic bases are used for each agent:

- a knowledge base K = {(k_i, α_i), i = 1,...,n},
- a set of goals G = {(g_i, β_i), i = 1,...,m},
- a set of goals ascribed to the other agent GO = {(go_i, δ_i), i = 1,...,p}.

A possibility distribution is associated with each base: π_K, π_G, π_GO. Let X be the set of all the offers that can be made by an agent; in our example an offer x ∈ X is a price for the object. During the negotiation the agents play a dialogue composed of actions. Each action is chosen according to the possibilistic model of beliefs (possibilistic knowledge) and goals. The basic actions are:

- Offer(x): the agent makes an offer of value x ∈ X,
- Defy(x): the agent asks for an explanation of the offer x,
- Argue(S): the agent gives a new set of knowledge S to explain its last offer,
- GiveUp(): the agent stops negotiating,
- Accept(x): the offer x is accepted and the negotiation ends,
- Accept(S): the knowledge set S is accepted,
- Refuse(x): the agent considers the offer x unacceptable.

It can be proved that every negotiation terminates, and that it terminates with an Accept(x) or a GiveUp() action.

Example: Mary and Peter are looking for a country to spend their vacation in. Peter wants a country that is not expensive and, if possible, sunny. Mary wants a country that is sunny, not very hot and not expensive. Peter believes that Tunisia is not expensive and that Italy is expensive. Mary believes that Tunisia is sunny and that Italy is sunny, not hot, and should not be expensive.
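The following sketch shows one way the three bases and the dialogue moves could be represented in Python; the numeric weights, the propositional atoms and the rule by which the agent selects its opening offer (maximising the min of its own goal satisfaction and the satisfaction it ascribes to the other agent) are illustrative simplifications, not the protocol of the work summarised above.

    # One possible encoding of a negotiating agent's state: the three possibilistic
    # bases K, G, GO and the dialogue moves. Weights, atoms and the offer-selection
    # rule (min-combination of goal satisfaction) are illustrative simplifications.

    from dataclasses import dataclass
    from enum import Enum, auto

    class Move(Enum):
        OFFER = auto()
        DEFY = auto()
        ARGUE = auto()
        GIVE_UP = auto()
        ACCEPT = auto()
        REFUSE = auto()

    @dataclass
    class Agent:
        name: str
        K: dict    # knowledge:           formula -> certainty degree
        G: dict    # own goals:           offer   -> satisfaction degree (π_G)
        GO: dict   # other agent's goals: offer   -> assumed satisfaction degree (π_GO)

        def best_offer(self, offers):
            # Propose the offer that is jointly most possible for both parties,
            # combining the two goal distributions by min.
            return max(offers, key=lambda x: min(self.G.get(x, 0.0), self.GO.get(x, 0.0)))

    offers = ["Tunisia", "Italy"]
    peter = Agent(
        name="Peter",
        K={"not_expensive(Tunisia)": 0.9, "expensive(Italy)": 0.8},
        G={"Tunisia": 1.0, "Italy": 0.3},    # cheap matters most, sunny is only a preference
        GO={"Tunisia": 0.8, "Italy": 0.6},   # what Peter assumes about Mary's goals
    )
    print(Move.OFFER, peter.best_offer(offers))   # Peter opens with Offer(Tunisia)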

Beyond negotiation, possibilistic logic also applies to truth maintenance and constraint-based reasoning. Efficiently searching a complex search space is a common need in AI problem solvers. This efficiency has often been achieved by building into the problem solver complex control structures that implicitly represent knowledge about the domain, but such designs are error-prone and inflexible. Instead, the Assumption-based Truth Maintenance System (ATMS) provides a general mechanism for controlling problem solvers by explicitly representing the structure of the search space and the dependencies between reasoning steps. The basic principle of the possibilistic ATMS is to associate with each clause a weight α, its necessity degree, in order to handle more or less uncertain information. A possibilistic ATMS is capable of answering the following questions:

- Under what configuration of assumptions is the proposition p certain to the degree α?
- What is the inconsistency degree of a given configuration of assumptions?
- In a given configuration of assumptions, to what degree is each proposition certain?

In a possibilistic ATMS, each piece of information is represented by a propositional clause, which enables:

- a uniform representation for all pieces of knowledge, with no differentiated storage and treatment of justifications and disjunctions of assumptions,
- the capability of handling negated assumptions as assumptions, so that environments and nogoods may contain negations of assumptions,
- a simple and uniform algorithm for the computation of labels and nogoods.

So far, possibility and necessity degrees have been considered as degrees of uncertainty linked to incomplete information. We can also interpret them differently, in the scope of constraint-based reasoning. When the degree α of a clause is equal to one, the clause must not be violated; when α is equal to zero, the clause can be dropped. In this framework:

- tautologies are imperatives,
- contradictions are tolerated,
- violating one of two constraints can be allowed,
- the possibility distribution induced by a set of necessity-valued constraints represents the fuzzy feasibility domain, subnormalization indicating that not all constraints can be satisfied simultaneously.

Also, it is clear that for a given problem there generally exists a specific algorithm whose complexity is better than, or at least as good as, the complexity of the necessity-valued semantic evaluation. However, the translation into necessity-valued logic can still be useful because:

- the search method is independent of the problem,
- the pruning properties of the semantic evaluation procedure can give the algorithm a good average-case complexity,
- necessity-valued logic offers a richer representation capability in the formulation of a problem.

A typical example of such a problem is the min-max assignment, also called the bottleneck assignment problem: n tasks must be assigned to n machines, with one and only one task per machine. The cost of the global assignment is not the sum but the maximum of the costs of the elementary assignments.
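For concreteness, here is a tiny brute-force Python sketch of the bottleneck objective on an invented 3 × 3 cost matrix; enumerating all permutations is of course only feasible for very small n, but it shows how the min-max criterion differs from the usual min-sum one.

    # Brute-force illustration of the bottleneck (min-max) assignment problem.
    # The 3x3 cost matrix is invented; enumerating all permutations only scales
    # to very small n and is not a practical solution method.

    from itertools import permutations

    costs = [           # costs[task][machine]
        [4, 2, 8],
        [4, 3, 7],
        [3, 1, 6],
    ]

    def bottleneck_cost(assignment):
        """Min-max objective: the worst elementary cost of the assignment."""
        return max(costs[task][machine] for task, machine in enumerate(assignment))

    def sum_cost(assignment):
        """Classical objective: the sum of the elementary costs."""
        return sum(costs[task][machine] for task, machine in enumerate(assignment))

    n = len(costs)
    best_minmax = min(permutations(range(n)), key=bottleneck_cost)
    best_minsum = min(permutations(range(n)), key=sum_cost)
    print(best_minmax, bottleneck_cost(best_minmax))   # assignment minimising the worst cost
    print(best_minsum, sum_cost(best_minsum))          # assignment minimising the total cost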

6 Conclusion

Possibilistic logic is a logic of uncertainty. It takes into account incomplete evidence and partially inconsistent knowledge. Possibilistic logic is different from fuzzy logic and from probabilistic logic. It is well adapted to human reasoning and is a useful tool for artificial intelligence.

7 References

Didier Dubois, Jérôme Lang, Henri Prade. Possibilistic logic. I.R.I.T., Toulouse.
Didier Dubois, Sébastien Konieczny, Henri Prade. Quasi-possibilistic logic and measures of conflict. I.R.I.T., Toulouse.