
INSTITUT FÜR INFORMATIK der Technischen Universität München
Lehrstuhl VIII Rechnerstruktur/-architektur, Prof. Dr. E. Jessen
Forschungsgruppe "Automated Reasoning"

Probabilistic Reasoning with Maximum Entropy - The System PIT

Manfred Schramm, Volker Fischer

Keywords: Nonmonotonic Reasoning, Reasoning under Incomplete Knowledge, Reasoning with Maximum Entropy, Default Reasoning, Probabilistic Expert Systems, Binary Decision Diagrams

CR-Classification: I.2.1, I.2.3, I.2.4, I.2.5

1 Abstract

We present a system for common sense reasoning based on propositional logic, the probability calculus and the concept of model-quantification. The task of this system PIT (for Probability Induction Tool) is to deliver decisions under incomplete knowledge while keeping the necessary additional assumptions as small as possible. Following this task it shows non-monotonic behavior in two ways: non-monotonic decisions can be the result of reasoning in a single probability model (via conditionalization) or in a set of probability models (via additional principles of rational decisions, justified by model-quantification). As the concept of model-quantification delivers a precise semantics, we know the corresponding decisions to make sense in many problems of common sense reasoning. We will show this with an example from default reasoning and an example of medical diagnosis.

2 Introduction

Propositional logic is a well-researched area of science and allows the specification of many kinds of exact knowledge. However, it lacks the features necessary for modeling most real-world situations. In particular, three features are missing from propositional logic for modeling common sense reasoning: the ability to describe uncertain knowledge, the ability to support plausible reasoning under incomplete knowledge, and an appropriate modeling of the common sense if-then. On the other hand, reasoning with probability models is also well-researched, incorporates reasoning with uncertainty and has a mature concept for the common sense conditional (by "conditionalization"). But as this language is very fine-grained (context-sensitive), it lacks the ability to deal with incomplete knowledge. We therefore enrich the probability calculus with additional (context-sensitive, global) constraints (resp. principles) which are able to support rational decisions based on incomplete knowledge. These principles have their common source in the concept of model-quantification (see section 4.1) and find their dense representation in the (well-known) principle of Maximum Entropy (see [10]). As these principles form the theoretical base of our system, we continue by
- describing the properties of our language in connection with non-monotonic reasoning (section 3),
- sketching the different principles of model-quantification, their relation and their connection to efficient implementations (section 4),
- explaining the parts of the system and their tasks (section 5),
- giving a set of examples for working with the system (section 6),
- describing the necessary resources and some URLs for additional information (section 8).

3 The Language and its Relation to Nonmonotonic Reasoning

Combining propositional logic with probability theory, we specify a range of certainty by allowing each propositional assumption to be accompanied by a probability interval. We additionally use the classical conditional probability statement to model the common sense conjunctive if-then.
All (conditional or unconditional) probability statements representing information are called (linear) constraints. A specification therefore consists of a set of constraints. For example the constraint (in the syntax of PIT)

P( a | b ) = (0.5, 1.0];

may represent the information "if I know b (and only b), I decide for a", or "normally b's are a", or "more than half of the patients with symptom b suffer from the symptom a", where of course a and b could be arbitrary propositional expressions and the interval (0.5, 1] may be any contiguous subset of [0, 1].
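Such a conditional interval statement is indeed linear in the probabilities of the elementary events: for P(b) > 0, the lower bound P(a | b) >= l is equivalent to P(a * b) - l * P(b) >= 0 (and analogously for the upper bound). The following sketch (Python; the helper names are our own illustration, not part of PIT) builds such a coefficient row over the 2^n possible worlds:

# Minimal sketch (not PIT code): encode the lower bound of P(a | b) = (l, u]
# as a linear constraint  c . p >= 0  over the probabilities p of the
# 2^n elementary events (possible worlds).
import itertools

def constraint_row(n_vars, a, b, lower):
    """One coefficient per world:  [a and b hold] - lower * [b holds]."""
    row = []
    for world in itertools.product([False, True], repeat=n_vars):
        in_b = 1.0 if b(world) else 0.0
        in_ab = 1.0 if (a(world) and b(world)) else 0.0
        row.append(in_ab - lower * in_b)
    return row

# Two variables (a, b): the constraint P(a | b) = (0.5, 1.0] contributes
# the row for P(a * b) - 0.5 * P(b) >= 0 (strictness is handled via an eps).
print(constraint_row(2, a=lambda w: w[0], b=lambda w: w[1], lower=0.5))
# -> [0.0, -0.5, 0.0, 0.5]  for the worlds (-a-b, -a b, a -b, a b)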

For any specification we get a corresponding set of probability models (P-Models), which can either be empty (i.e. the constraints are inconsistent), contain a single P-Model (i.e. the information is consistent and complete), or contain an infinite number of P-Models (i.e. the information is incomplete). A P-Model is a function which assigns to each interpretation (footnote 1) a number in [0, 1]; the sum of all assignments equals one. (For a formal description we refer to [21] or any introduction to reasoning with probabilities.)

Given the case of the single P-Model, we can compute the probability of an arbitrary propositional sentence by summing the probabilities of the interpretations which evaluate the sentence to true. Similarly we can calculate the probability of a conditional statement P(a | b) by comparing the total probability of a ∧ b to the probability of b. Consequently we evaluate the truth value of an arbitrary probability statement (called a query) by checking whether the given P-Model fulfills the statement. This type of reasoning already reveals non-monotonic behavior: in a single P-Model the following statements (we use "*" for the propositional "∧", "+" for the propositional "∨" and "-" for the propositional "¬") can be simultaneously true.

Example 1:
P( a | b ) = (0.5,1];
P(-a | b * c ) = (0.5,1];
P( a | b * c * d) = (0.5,1];
describing information like: If I (only) know b, I decide for a. If I (only) know b and c, I decide for not-a. If I (only) know b, c and d, I decide for a.

Given the case of the infinite set and calculating the probabilities of the queries for all P-Models, only very few decisions are supported by all P-Models.

Example 2: The following query is, surprisingly (footnote 2), not true in all P-Models:
Constraint-1: P( c | a ) = (0.5, 1];
Constraint-2: P( c | b ) = (0.5, 1];
Query: Q( c | a + b ) = (0.5, 1];
describing information like: If I (only) know a, I decide for c. If I (only) know b, I decide for c. And a query like: Should I decide for c if I only know (a or b)? The infinite set therefore contains very "unintuitive" probability models.

Footnote 1: An interpretation is, as usual, an assignment of truth values to the propositional variables; such an assignment is also called a possible world, resp., in the context of probabilities, an elementary event.
Footnote 2: For real-world situations in which this conclusion is also not true, refer to Simpson's Paradox in [1, 14, 24].
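The "unintuitive" P-Models behind Example 2 can be exhibited concretely. The following sketch (Python; the concrete numbers are our own illustration in the spirit of the Simpson's-paradox references of footnote 2, not taken from the paper) builds one P-Model over the eight worlds of a, b, c that satisfies both constraints but violates the query:

# Minimal sketch: one P-Model satisfying Constraint-1 and Constraint-2 of
# Example 2 while violating the query Q( c | a + b ) = (0.5, 1].
# The numbers are an illustrative choice, not taken from the paper.

def prob(model, formula):
    """Probability of a formula: sum over the worlds that satisfy it."""
    return sum(p for world, p in model.items() if formula(*world))

def cond(model, consequent, antecedent):
    """Conditional probability P(consequent | antecedent) via conditionalization."""
    both = prob(model, lambda a, b, c: consequent(a, b, c) and antecedent(a, b, c))
    return both / prob(model, antecedent)

# Worlds are triples (a, b, c); unlisted worlds get probability 0.
model = {
    (True,  True,  True):  0.4,   # a and b together strongly suggest c
    (True,  False, False): 0.3,   # a alone suggests not-c
    (False, True,  False): 0.3,   # b alone suggests not-c
}

print(cond(model, lambda a, b, c: c, lambda a, b, c: a))        # 4/7 > 0.5: Constraint-1 holds
print(cond(model, lambda a, b, c: c, lambda a, b, c: b))        # 4/7 > 0.5: Constraint-2 holds
print(cond(model, lambda a, b, c: c, lambda a, b, c: a or b))   # 0.4 <= 0.5: the query fails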
Stated more generally, calculating the query in the different P-Models we receive the full range of values between [0, 1] for nearly every query (which is clearly useless for our goal of supporting decisions). Therefore the remaining degrees of freedom have to be fixed or, in other words, the infinite set has to be reduced to one "best" P-Model without introducing subjective biases. Given such a "best" P-Model, a precise probability for any probability statement can be obtained. We solve this problem using the concept of model-quantification. The term model-quantification refers to the idea that we should choose the P-Model (macro state) which is supported by the most micro states (see section 4.1). Principles like Indifference, Independence and Maximum Entropy choose P-Models which are justified in the light of model-quantification (see [9, 15, 21]). Using these principles reveals another type of non-monotonic behavior.

Example 3: Given the only constraint
Constraint-1: P(fly | birds) = [0.7, 1];
we obtain the conclusion
Conclusion-1: Q(fly | birds * lives-in-antarctic) = [0.7, 1];
But given the following two constraints
Constraint-1: P(fly | birds) = [0.7, 1];
Constraint-2: P(fly | birds * lives-in-antarctic) = [0, 0.2];
Conclusion-1 can no longer be obtained.

4 The additional principles and their justification

4.1 The concept of model-quantification

Given an urn U having N numbered balls in n different colors (U := (N_1, ..., N_n) with N_1 + ... + N_n = N), the corresponding probability vector v is defined as v := (v_1, ..., v_n) with v_i := N_i / N. If our knowledge is not sufficient to determine a unique composition (N_1, ..., N_n) (resp. a distribution v), for which of the compositions should we decide? The classical answer is to calculate the number F_N(v) of possible fillings (where a possible filling is a function which maps each ball to a specific color) for each composition by the multinomial formula

F_N(v) = N! / ( (v_1 N)! * ... * (v_n N)! ) = N! / ( N_1! * ... * N_n! )     (1)

and to choose that composition which maximizes F_N(v).

Example: Comparing the compositions C1 = (9, 4, 3), giving F_16(9/16, 4/16, 3/16) = 400400, and C2 = (7, 7, 2), giving F_16(7/16, 7/16, 2/16) = 411840, we would decide for C2.

Defining the entropy of a probability distribution v as H(v) = - Σ_i v_i ln(v_i) and comparing the decision F_N(v) > F_N(ṽ) with the decision H(v) > H(ṽ), we get: there exists M1 such that for all M2 > M1, F_{M2}(ṽ) > F_{M2}(v) ⟺ H(ṽ) > H(v) (by using the Stirling formula for N!, see [10]). Thus the concept of maximizing entropy is a generalization of counting the number of possible fillings of an urn if N is large (footnote 3) or unknown.

4.2 The principle of Maximum Entropy and its efficient application

Justified by the argument of model-quantification, the principle of Maximum Entropy (MaxEnt) selects the P-Model with maximal entropy (the MaxEnt-Model). If the set of P-Models is generated by linear (inequality) constraints as defined above, this method is known to lead to a unique P-Model (see [11]). Calculating the MaxEnt-Model is not a new idea (see [3, 7, 9, 11, 15, 8]) but very expensive in the worst case. The main problem is that the number of interpretations grows exponentially with the number of propositional variables. To avoid this effect in the average case, some weaker principles (see figure) are used to reduce the complexity of the calculations.

[Figure: a hierarchy of logics - Propositional Logic; Logic on P-Models; Logic on P-Models with Indifference and Independence; Logic on P-Models with Maximum Entropy - fewer P-Models, more conclusions.]

The most prominent concept for this task is the principle of Independence: taking the propositional variables as nodes and connecting them according to their presence in the same constraint yields an undirected graph G. Applying the principle of Independence means reading G as an independence map (see [16]) or, in other words, demanding the global Markov property for our P-Models relative to G (see [12]). The use of this principle is not only consistent with MaxEnt (i.e. its application does not change the result); furthermore the graphical representation can be used to calculate the MaxEnt-Model by local (footnote 4) computations (see [12, 18]).

Another concept which is completely consistent with MaxEnt (see [5] for a proof) and which also leads to an efficient calculation of the MaxEnt-Model in the average case is the principle of Indifference. The history of this famous principle goes back to Laplace and Keynes. Let us quote Jaynes (see [9]) for a short and informal definition of this principle: "If the available evidence gives us no reason to consider proposition a1 either more or less likely than a2, then the only honest way we can describe that state of knowledge is to assign them equal probabilities: P(a1) = P(a2)." Certainly this qualitative description leaves open many questions about its precise meaning, but it is much stronger than just demanding every elementary event to be equally probable if no constraints are present. In [20, 19] we give a formal definition of how to decide whether the available evidence (i.e. a set of constraints) gives us reason to consider a proposition more or less likely than another. By applying this principle we obtain equivalence classes of elementary events which need not be distinguished by the optimization process.
Joining the elementary events into equivalence classes can make some of the constraints identical, with the consequence that these constraints can be removed for the optimization.

Footnote 3: For a small N (N = 32, yielding 85 different compositions) we compared the decisions based on H with those based on F_N. In only 9 out of 3570 (pairwise) cases were they not equivalent, which should support the trust in this method even for small N.
Footnote 4: Depending on the density of links in the graph.
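The urn comparison of section 4.1 is easy to reproduce numerically. The following sketch (Python; our own illustration) computes F_N(v) and H(v) for the two compositions above; note that at this small N the two criteria need not order the compositions identically, consistent with the remark in footnote 3 that the equivalence with counting is an asymptotic one.

# Minimal sketch: the urn comparison of section 4.1.
from math import factorial, log

def fillings(counts):
    """F_N(v): number of possible fillings of the composition (N_1, ..., N_n)."""
    result = factorial(sum(counts))
    for n_i in counts:
        result //= factorial(n_i)
    return result

def entropy(counts):
    """H(v) = -sum_i v_i ln v_i with v_i = N_i / N."""
    n = sum(counts)
    return -sum((n_i / n) * log(n_i / n) for n_i in counts if n_i > 0)

c1, c2 = (9, 4, 3), (7, 7, 2)
print(fillings(c1), fillings(c2))   # 400400, 411840 -> counting decides for C2
print(entropy(c1), entropy(c2))     # ~0.9841, ~0.9833 -> at N = 16, H alone would
                                    # slightly prefer C1; the criteria agree for large N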

The two principles (footnote 5) are independent in a logical sense (i.e. neither principle subsumes the other). Consequently there are cases where just one of the principles can be applied (resp. leads to a reduction of the search space).

Example 4:
P(b | a) = [0.8, 1];
P(c | b) = [0.8, 1];
P(a | c) = [0.8, 1];
The principle of Indifference yields P( a * b * -c ) = P( a * -b * c ) = P( -a * b * c ) and P( a * -b * -c ) = P( -a * b * -c ) = P( -a * -b * c ). With these equalities the three constraints become equivalent and can be reduced to one. (The corresponding graph, in contrast, is fully connected and does not lead to any independence properties resp. simplifications.)

Our last medical application in the ESPRIT project MIX contains highly connected concepts. As the use of Indifference was more likely to lead to significant simplifications than the use of Independence, PIT mainly works with the principle of Indifference (the implementation of the principle of Independence is part of our current work).

5 Description of the PIT System

After the knowledge and the queries have been modeled as probability statements, two steps follow.

Step 1: Compilation. The task of the compilation, using the program tabl, is to transform the (user-friendly) symbolic input into tables of linear constraints which form a compact mathematical representation for use in the optimization. In doing so, the compiler performs consistency checks and proves propositional subproblems. For this task we use Binary Decision Diagrams (see [2]), which already include some forms of indifference. Additionally, tabl simplifies the table of linear constraints using the principle of Indifference. Our experiments show that Indifference is indeed a very powerful principle for reducing the complexity of the representation.

Step 2: Optimization. The task of the optimization, using the program maxent, is the calculation of the MaxEnt-Model. For this task we use an algorithm described in [18] which we extended for the use with indifference and probability intervals. This iterative update technique runs efficiently on large problems (with e.g. more than 10^7 variables and 10^2 constraints). In the case of consistent constraints, the probabilities of the queries are calculated for the MaxEnt-Model and printed in symbolic and numerical form. If the constraints are inconsistent, no queries are answered.

6 Applications & Examples

Now we demonstrate the use of PIT in two important applications.

6.1 An Example for Default Reasoning

For default reasoning with our system we chose an example from Lifschitz's collection of non-monotonic benchmarks (which can be found as B4 in [13]). Consider the following colloquial specification:
(a1) Quakers are normally pacifists.
(a2) Republicans are normally hawks.
(a3) Pacifists are normally politically active.
(a4) Hawks are normally politically active.
(a5) Pacifists are not hawks.
We will show that the following common sense conclusions hold under our system:
(c1) Quakers who are not Republicans are [normally] pacifists.
(c2) Republicans who are not Quakers are [normally] not pacifists.
(c3) Quakers, Republicans, Pacifists and Hawks are [normally] politically active.
This example can of course be formalized in many ways. However, at least to our understanding, it requires that for an arbitrary Quaker the probability that he is a pacifist is greater than 0.5. We will therefore choose the weakest possible formalization for the term "normally" and only require that the corresponding probability ranges in the interval (0.5, 1] (footnote 6).
So we express defaults by very weak conditional probability constraints using the largest plausible intervals. This representation is strong enough to solve most (footnote 7) benchmarks of non-monotonic reasoning (e.g. the benchmarks from [13]).

Footnote 5: These principles also help to understand the properties of the P-Model chosen by MaxEnt (see [19, 20]), similar to axiomatic approaches (see [15, 22, 23]).
Footnote 6: Regardless of the exact subinterval of (0.5, 1] we choose for modeling the constraints, we can show that the conclusions hold within our approach. One (mathematical) argument for using this interval (especially the lower border of 0.5) is that this is the only possibility to express assumptions and conclusions within the same interval. In most examples of this type of inductive reasoning the conclusions show less certainty than the premises, so increasing the lower border of the interval of the assumptions does not increase the lower border of the conclusion by the same amount.
Footnote 7: See the footnote in the Conclusion (section 7).

As stated above, we choose conditional probabilities to model the conditional statements in both assumptions and conclusions. This is a well-known and widely used convention (compare [4]). To represent the five concepts Quaker, Pacifist, Republican, Hawk and Politically active we choose the set of variables V = {Qu, Pa, Re, Ha, Po}. With these decisions we restate the problem in our language in a file (here named B4; comment lines begin with "#"):

# Specification of the output file
%B4.dat
# setting the epsilon for the open intervals
%eps = 0.001
# In the implemented system a half-open
# interval (a,b] is realized as the
# closed interval [a+eps,b], where
# eps is of the same order of magnitude
# as the desired overall accuracy.
# Declaration of the variables
var Qu, Pa, Re, Ha, Po;
# Declaration of the defaults
P( Pa | Qu) = (0.5, 1];
P( Ha | Re) = (0.5, 1];
P( Po | Pa) = (0.5, 1];
P( Po | Ha) = (0.5, 1];
# Declaration of the logical constraint
P( -Ha | Pa) = 1;
# Declaration of the queries
Q( Pa | Qu * -Re ) = (0.5,1];
Q( -Pa | -Qu * Re ) = (0.5,1];
# The following disjunction
# represents the colloquial ''and''
Q( Po | Qu + Re + Pa + Ha ) = (0.5,1];
# sign for the end of the file.

As we have 5 propositional variables, the set of elementary events contains 2^5 = 32 possible worlds. The logical constraint (a5) eliminates 8 of these interpretations as impossible (i.e. their probability is 0). By applying the principle of Indifference, the number of equivalence classes is restricted to 11 and the number of constraints is reduced to 2. After having reached these reductions by compiling our input file with the command

tabl B4

which results in a compiled file (here named B4.dat), we call maxent by the command

maxent B4.dat

From the calculated MaxEnt-Model we gain the following data about the assumptions:
(a1) P(Pa | Qu) = 0.501 (= 0.5 + eps). The probability of assumption (a1) in the MaxEnt-Model is the lowest one compatible with the original constraint.
(a2) P(Ha | Re) = 0.501 (= 0.5 + eps).
(a3) P(Po | Pa) = 0.501 (= 0.5 + eps).
(a4) P(Po | Ha) = 0.501 (= 0.5 + eps).
(a5) P(¬Ha | Pa) = 1. Remember that (a5) is a logical constraint and there is no freedom to choose another probability here.
As one can see, the MaxEnt-Model tries to be as uncommitted about the assumptions as the specification allows (i.e. it chooses a probability for the assumptions that provides the minimum amount of information). In this case, as is to be expected, the model is found at the (lower) border of the intervals. Nevertheless all of the desired conclusions are supported (footnote 8):
(c1) P(Pa | Qu ∧ ¬Re) = 0.…
(c2) P(¬Pa | ¬Qu ∧ Re) = 0.…
(c3) P(Po | Qu ∨ Re ∨ Pa ∨ Ha) = 0.…
The probability of all three conclusions falls into the interval (0.5, 1], ergo all three conclusions hold under our modeling assumptions (footnote 9).

Footnote 8: As the real printout of maxent does not contain any symbolic information (the queries and constraints are just numbered), we give the result in a more convenient format.
Footnote 9: Similar results could be obtained for other choices of the intervals; this holds especially for the so-called worst-case analysis (see [21]), where we choose the values in the intervals first and apply MaxEnt afterwards.
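PIT's maxent program uses the iterative update technique of [18]; purely as an illustration (and not as PIT's algorithm), the MaxEnt-Model of the B4 specification can also be approximated with a generic convex-optimization routine. The following sketch (Python with NumPy/SciPy; the eps value, solver choice and helper names are our own assumptions) maximizes entropy over the 32 possible worlds subject to the constraints above and evaluates the three queries, whose values should fall into (0.5, 1] as reported.

# Minimal sketch (not PIT's algorithm): approximate the MaxEnt-Model of B4
# with a generic solver and evaluate the queries (c1)-(c3).
import itertools
import numpy as np
from scipy.optimize import minimize
from scipy.special import xlogy

VARS = ["Qu", "Pa", "Re", "Ha", "Po"]
WORLDS = [dict(zip(VARS, bits)) for bits in itertools.product([False, True], repeat=len(VARS))]
EPS = 1e-3   # stand-in for the %eps setting (0.5 + eps = 0.501 above)

def prob(p, formula):
    """Probability of a formula: sum over the worlds that satisfy it."""
    return sum(pi for pi, w in zip(p, WORLDS) if formula(w))

def cond_ge(cons, ante, lower):
    """Encode P(cons | ante) >= lower as  P(cons & ante) - lower * P(ante) >= 0."""
    return {"type": "ineq",
            "fun": lambda p: prob(p, lambda w: cons(w) and ante(w)) - lower * prob(p, ante)}

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},                         # normalisation
    {"type": "eq", "fun": lambda p: prob(p, lambda w: w["Ha"] and w["Pa"])},  # (a5)
    cond_ge(lambda w: w["Pa"], lambda w: w["Qu"], 0.5 + EPS),                 # (a1)
    cond_ge(lambda w: w["Ha"], lambda w: w["Re"], 0.5 + EPS),                 # (a2)
    cond_ge(lambda w: w["Po"], lambda w: w["Pa"], 0.5 + EPS),                 # (a3)
    cond_ge(lambda w: w["Po"], lambda w: w["Ha"], 0.5 + EPS),                 # (a4)
]

def neg_entropy(p):
    return float(np.sum(xlogy(p, p)))   # minimise sum p ln p == maximise H(p)

p0 = np.full(len(WORLDS), 1.0 / len(WORLDS))
res = minimize(neg_entropy, p0, method="SLSQP",
               bounds=[(0.0, 1.0)] * len(WORLDS), constraints=constraints)
p = res.x

def query(cons, ante):
    """P(cons | ante) in the computed model."""
    return prob(p, lambda w: cons(w) and ante(w)) / prob(p, ante)

# Queries (c1)-(c3): each value should exceed 0.5 (the last digits depend
# on the eps choice and the solver tolerance).
print(query(lambda w: w["Pa"], lambda w: w["Qu"] and not w["Re"]))
print(query(lambda w: not w["Pa"], lambda w: not w["Qu"] and w["Re"]))
print(query(lambda w: w["Po"], lambda w: w["Qu"] or w["Re"] or w["Pa"] or w["Ha"]))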
6.2 An Example for a Medical Database

A typical application of PIT is medical diagnosis. The semantics of reasoning with probabilities (resp. frequencies [6]) is easy to explain, and the specification of probability intervals demands less effort than the specification of exact probabilities (usually necessary when reasoning in probabilistic networks). In the ESPRIT project MIX we successfully applied PIT to a medical database. This database consists of more than 100 rules (determined by a physician) using 7 qualitative steps (from 'almost never' up to 'almost always'). Symptoms with 3 possible values are mapped onto the combinations of two two-valued variables, where the 4th combination is known to have probability zero; and instead of P( b | a ) we use the more convenient notation P( a -> b ).
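To make the three-valued encoding concrete, here is a small sketch (Python; the value names are our own illustration, not taken from the database) of how one such symptom maps onto two Boolean variables:

# Hypothetical mapping of one three-valued symptom (e.g. temperature) onto
# the two Boolean variables temp1, temp2 used in the database below.  Only
# the exclusion of the fourth combination, p( -temp1 * -temp2 ) = 0, is
# taken from the specification; the value names are illustrative.
TEMP_ENCODING = {
    "low":    {"temp1": False, "temp2": True},
    "normal": {"temp1": True,  "temp2": False},
    "high":   {"temp1": True,  "temp2": True},
    # (temp1, temp2) = (False, False) is the unused fourth combination and
    # gets probability zero via the adjustment rule.
}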

From these statements, the following file was generated. The questions are e.g. whether a patient with a number of symptoms is intoxicated with the poison 'adt' or with 'carbamate'. The answers were obtained within a few minutes (on average, varying with the available hardware and its actual load).

% database.dat
var FC1, FC2, PAS1, PAS2, QRSIsnormal, QTIsnormal, ROTIsbrisk, adt, alc, bar, ben, car, patientiscalm, pupils1, pupils2, regardisnormal, temp1, temp2, tonusishypertonia, urineisyes, phe;
# adjusting 3-valued variables
p( -temp1 * -temp2 ) = 0;
... (more rules follow here)
# specific rules
P( tonusIshypertonia * -ROTIsbrisk * PAS1 * PAS2 -> adt ) = [0.65, 1.0];
... (more rules follow here)
# Rules under the condition of only adt
p( adt*-alc*-bar*-ben*-car*-phe -> tonusishypertonia ) = [0.75, 1.0];
p( adt*-alc*-bar*-ben*-car*-phe -> urineisyes ) = [0.75, 1.0];
... (more rules follow here)
# Rules under the condition of only carbamate
p( -adt*-alc*-bar*-ben*car*-phe -> patientiscalm ) = [0.95, 1.0];
p( -adt*-alc*-bar*-ben*car*-phe -> -ROTIsbrisk ) = [0.75, 1.0];
... (more rules follow here)
# Queries
# Is the patient with the given symptoms
# intoxicated with adt?
q( temp1 * temp2 * patientiscalm * pupils1 * pupils2 * -tonusishypertonia * -ROTIsbrisk * PAS1 * -PAS2 * FC1 * -FC2 * QRSIsnormal * -QTIsnormal * urineisyes -> adt );
# Is the patient with the given symptoms
# intoxicated with carbamate?
q( temp1 * temp2 * patientiscalm * pupils1 * -pupils2 * -tonusishypertonia * ROTIsbrisk * -PAS1 * PAS2 * FC1 * -FC2 * QRSIsnormal * QTIsnormal * urineisyes -> car );

7 Conclusion and Ongoing Work

In this paper we have briefly described our concept and implementation for reasoning with incomplete and uncertain (probabilistic) knowledge. Based on propositional logic, the specification can contain logical constraints, constraints with fixed probabilities, or constraints with probabilities ranging in intervals. These very general features can also be used for qualitative non-monotonic reasoning. Most (footnote 10) non-monotonic benchmarks (e.g. those from [13] using unary predicates) can be modeled within the (logically weak) formalization we described above. In our latest work we showed the concept to work with an application of medium size (a medical diagnosis problem within the ESPRIT Basic Research Project 9119 MIX, based on 24 propositional variables and more than 100 probabilistic constraints; see section 6.2).

Further improvements of PIT in current projects include:
- a user interface to make the program more convenient to use;
- the use of the principle of Independence to reduce the complexity of the representation.

8 Additionals

Necessary system resources: LINUX / SUNOS / HPUX.
WWW and user support: You will find an introduction with further literature and our binaries at ...; a more elaborated example with user interface has been developed in the ESPRIT project MIX and is available under query.html.

Footnote 10: An exception to this is the well-known proposal of the And-rule, but it is known that this rule causes problems (consider e.g. the Lottery Paradox as described in [17]). Furthermore, as MaxEnt is based on combinatorial calculations, the result of MaxEnt has some meaning independent from proposed rules of non-monotonic reasoning.
If it contradicts a proposed rule, it also calls this rule into question; or, to be more concrete, the rule will not be useful for reasoning on proportions.

References

[1] Blyth, C. On Simpson's Paradox and the Sure-Thing Principle. Journal of the American Statistical Association 67, 338 (June 1972), 363–381.
[2] Bryant, R. Symbolic Boolean Manipulation with Ordered Binary Decision Diagrams. Tech. Rep. CMU-CS, School of Computer Science, Carnegie Mellon University.
[3] Cheeseman, P. A Method of Computing Generalized Bayesian Probability Values for Expert Systems. In Proc. of the 8th International Joint Conference on Artificial Intelligence (IJCAI-83), Karlsruhe (San Mateo, CA, 1983), vol. 1, Morgan Kaufmann, pp. 198–202.
[4] Adams, E. W. Probability and the Logic of Conditionals. In Aspects of Inductive Logic, J. Hintikka and P. Suppes, Eds. North Holland, Amsterdam.
[5] Fischer, V., and Schramm, M. Efficient Compilation of Probabilistic Expressions for Use in MaxEnt Optimization Problems. Tech. Rep. TUM-I9636, Institut für Informatik, Technische Universität München.
[6] Gigerenzer, G. The psychology of good judgment: Frequency formats and simple algorithms. Medical Decision Making (1996).
[7] Goldman, S., and Rivest, R. A Non-Iterative Maximum Entropy Algorithm. In Uncertainty in Artificial Intelligence, J. Lemmer and L. Kanal, Eds., vol. 2. Elsevier Science, 1988, pp. 133–148.
[8] Goldszmidt, M., Morris, P., and Pearl, J. A Maximum Entropy Approach to Non-Monotonic Reasoning. In Proc. of the 8th National Conference on Artificial Intelligence, Boston (1990), AAAI Press/MIT Press, pp. 646–652.
[9] Jaynes, E. Where do we stand on Maximum Entropy? In Papers on Probability, Statistics and Statistical Physics, R. Rosenkrantz, Ed. Kluwer Academic Publishers, 1978, pp. 210–314.
[10] Jaynes, E. Probability Theory: The Logic of Science. Continually updated, fragmentary edition of approximately 700 pages, available from ftp://bayes.wustl.edu//pub/jaynes.
[11] Kapur, J., and Kesavan, H. Entropy Optimization Principles with Applications. Academic Press.
[12] Lauritzen, S. L. Graphical Models. Oxford Science Publications.
[13] Lifschitz, V. Benchmark Problems for Formal Non-Monotonic Reasoning. In Non-Monotonic Reasoning: 2nd International Workshop (1989), M. Reinfrank et al., Eds., vol. 346 of LNAI, Springer, pp. 202–219.
[14] Neufeld, E., and Horton, J. Conditioning on Disjunctive Knowledge: Simpson's Paradox in Default Logic. In Uncertainty in Artificial Intelligence, M. Henrion, R. Shachter, L. Kanal, and J. Lemmer, Eds., vol. 5. Elsevier Science, 1990, pp. 117–125.
[15] Paris, J., and Vencovska, A. A Note on the Inevitability of Maximum Entropy. International Journal of Approximate Reasoning 3 (1990), 183–223.
[16] Pearl, J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo, CA.
[17] Poole, D. What the Lottery Paradox Tells Us About Default Reasoning. In Proc. of the 1st International Conference on Principles of Knowledge Representation and Reasoning (San Mateo, CA, 1989), R. Brachman, H. Levesque, and R. Reiter, Eds., Morgan Kaufmann, pp. 333–340.
[18] Rödder, W., and Meyer, C.-H. Coherent Knowledge Processing at Maximum Entropy by SPIRIT. KI-96 (Dresden), 1996.
[19] Schramm, M. Indifferenz, Unabhängigkeit und maximale Entropie: Eine wahrscheinlichkeitstheoretische Semantik für nichtmonotones Schließen. No. 4 in Dissertationen zur Informatik. CS Press, München. Submitted and accepted as a Ph.D. thesis at the Fakultät für Philosophie, Logik und Wissenschaftstheorie, Ludwig-Maximilians-Universität München (in German).
[20] Schramm, M., and Greiner, M. Foundations: Indifference, Independence & MaxEnt. In Maximum Entropy and Bayesian Methods in Science and Engineering (Proc. of MaxEnt'94) (1995), J. Skilling, Ed., Kluwer Academic Publishers.

[21] Schramm, M., and Schulz, S. SETHEO/NN and PIT – Implementation Descriptions. MIX Deliverable S4.2.
[22] Shore, J., and Johnson, R. Axiomatic Derivation of the Principle of Maximum Entropy and the Principle of Minimum Cross-Entropy. IEEE Transactions on Information Theory IT-26, 1 (1980), 26–37.
[23] Skilling, J. The Axioms of Maximum Entropy. In Maximum Entropy and Bayesian Methods in Science and Engineering, G. Erickson and C. Smith, Eds., vol. 1 – Foundations. Kluwer Academic Publishers.
[24] Whittaker, J. Graphical Models in Applied Multivariate Statistics. John Wiley.


Undecidable Problems. Z. Sawa (TU Ostrava) Introd. to Theoretical Computer Science May 12, / 65 Undecidable Problems Z. Sawa (TU Ostrava) Introd. to Theoretical Computer Science May 12, 2018 1/ 65 Algorithmically Solvable Problems Let us assume we have a problem P. If there is an algorithm solving

More information

2.2 Lowenheim-Skolem-Tarski theorems

2.2 Lowenheim-Skolem-Tarski theorems Logic SEP: Day 1 July 15, 2013 1 Some references Syllabus: http://www.math.wisc.edu/graduate/guide-qe Previous years qualifying exams: http://www.math.wisc.edu/ miller/old/qual/index.html Miller s Moore

More information

Logical Agents (I) Instructor: Tsung-Che Chiang

Logical Agents (I) Instructor: Tsung-Che Chiang Logical Agents (I) Instructor: Tsung-Che Chiang tcchiang@ieee.org Department of Computer Science and Information Engineering National Taiwan Normal University Artificial Intelligence, Spring, 2010 編譯有誤

More information

7 LOGICAL AGENTS. OHJ-2556 Artificial Intelligence, Spring OHJ-2556 Artificial Intelligence, Spring

7 LOGICAL AGENTS. OHJ-2556 Artificial Intelligence, Spring OHJ-2556 Artificial Intelligence, Spring 109 7 LOGICAL AGENS We now turn to knowledge-based agents that have a knowledge base KB at their disposal With the help of the KB the agent aims at maintaining knowledge of its partially-observable environment

More information

Finite-Delay Strategies In Infinite Games

Finite-Delay Strategies In Infinite Games Finite-Delay Strategies In Infinite Games von Wenyun Quan Matrikelnummer: 25389 Diplomarbeit im Studiengang Informatik Betreuer: Prof. Dr. Dr.h.c. Wolfgang Thomas Lehrstuhl für Informatik 7 Logik und Theorie

More information

Propositional Logic: Models and Proofs

Propositional Logic: Models and Proofs Propositional Logic: Models and Proofs C. R. Ramakrishnan CSE 505 1 Syntax 2 Model Theory 3 Proof Theory and Resolution Compiled at 11:51 on 2016/11/02 Computing with Logic Propositional Logic CSE 505

More information

BETWEEN THE LOGIC OF PARMENIDES AND THE LOGIC OF LIAR

BETWEEN THE LOGIC OF PARMENIDES AND THE LOGIC OF LIAR Bulletin of the Section of Logic Volume 38:3/4 (2009), pp. 123 133 Kordula Świȩtorzecka BETWEEN THE LOGIC OF PARMENIDES AND THE LOGIC OF LIAR Abstract In the presented text we shall focus on some specific

More information

On Markov Properties in Evidence Theory

On Markov Properties in Evidence Theory On Markov Properties in Evidence Theory 131 On Markov Properties in Evidence Theory Jiřina Vejnarová Institute of Information Theory and Automation of the ASCR & University of Economics, Prague vejnar@utia.cas.cz

More information

185.A09 Advanced Mathematical Logic

185.A09 Advanced Mathematical Logic 185.A09 Advanced Mathematical Logic www.volny.cz/behounek/logic/teaching/mathlog13 Libor Běhounek, behounek@cs.cas.cz Lecture #1, October 15, 2013 Organizational matters Study materials will be posted

More information

In a second part, we concentrate on interval models similar to the traditional ITL models presented in [, 5]. By making various assumptions about time

In a second part, we concentrate on interval models similar to the traditional ITL models presented in [, 5]. By making various assumptions about time Complete Proof Systems for First Order Interval Temporal Logic Bruno Dutertre Department of Computer Science Royal Holloway, University of London Egham, Surrey TW0 0EX, United Kingdom Abstract Dierent

More information

A Proof-Theoretic Approach to Irrelevance: Richard E. Fikes. KSL, Stanford University. et al., 1994b].

A Proof-Theoretic Approach to Irrelevance: Richard E. Fikes. KSL, Stanford University. et al., 1994b]. A Proof-Theoretic Approach to Irrelevance: Foundations and Applications Alon Y. Levy AT&T Bell Laboratories Murray Hill, NJ, 7974 levy@research.att.com Richard E. Fikes KSL, Stanford University Palo Alto,

More information

4.1 Notation and probability review

4.1 Notation and probability review Directed and undirected graphical models Fall 2015 Lecture 4 October 21st Lecturer: Simon Lacoste-Julien Scribe: Jaime Roquero, JieYing Wu 4.1 Notation and probability review 4.1.1 Notations Let us recall

More information

Foundations of Artificial Intelligence

Foundations of Artificial Intelligence Foundations of Artificial Intelligence 7. Propositional Logic Rational Thinking, Logic, Resolution Joschka Boedecker and Wolfram Burgard and Frank Hutter and Bernhard Nebel Albert-Ludwigs-Universität Freiburg

More information

A Stochastic l-calculus

A Stochastic l-calculus A Stochastic l-calculus Content Areas: probabilistic reasoning, knowledge representation, causality Tracking Number: 775 Abstract There is an increasing interest within the research community in the design

More information

An Algorithm for Automatic Demonstration of Logical Theorems

An Algorithm for Automatic Demonstration of Logical Theorems An Algorithm for Automatic Demonstration of Logical Theorems Orlando Zaldivar-Zamorategui Jorge Carrera-Bolaños Abstract. The automatic demonstration of theorems (ADT) is and has been an area of intensive

More information