Automatic Proofs in Equational Logic


Automatic Proofs in Equational Logic

Master thesis in computer science

by
Thomas Sternagel

submitted to the Faculty of Mathematics, Computer Science and Physics of the University of Innsbruck in partial fulfillment of the requirements for the degree of Master of Science

Supervisor: Dr. Harald Zankl, Institute of Computer Science

Innsbruck, 16 October 2012


Master Thesis

Automatic Proofs in Equational Logic

Thomas Sternagel
16 October 2012

Supervisor: Dr. Harald Zankl


Eidesstattliche Erklärung (Sworn Declaration)

I hereby declare under oath, by my own signature, that I have written the present thesis independently and have used no sources or aids other than those indicated. All passages taken literally or in content from the cited sources are marked as such. The present work has not previously been submitted in the same or a similar form as a Magister/Master/Diploma thesis or dissertation.

Date                    Signature


Abstract

Proving whether an equation follows from a set of given equations is a fundamental problem in computer science. In the positive case this problem can be solved with equational logic by creating proof trees using an inference system. Alternatively, Knuth-Bendix completion (if successful) yields a decision procedure for this problem. The main part of this thesis is to study the relationships between completion and equational logic and to compute proof trees automatically with the help of completion. The results were integrated into the completion tool KBCV. Furthermore, if completion fails, we try to find counter models with the help of matrix interpretations and an SMT solver.


Acknowledgments

I want to thank the usual suspects at the coffee break for providing me with the needed time to get things back into perspective. I also want to thank my supervisor for his constant encouragement and understanding. I want to express my gratitude towards my girlfriend, for without her unshakable belief in me this master thesis would not have come to pass. Finally I thank the Hypo Tirol Bank for partially supporting the research conducted within this thesis.


Contents

1. Introduction
   Chapter Notes
2. Preliminaries
   Equational Logic
     Inference Rules
   Term Rewriting
     Termination
   Completion
   Chapter Notes
3. Recording Completion
   Record Phase
   Compare Phase
   Recall Phase
   Plant & Grow Phase
   Soundness of Recording Completion
   Optimization
     Compare on Demand
     Proof Tree Size
   Chapter Notes
4. Model Finding
   Matrix Interpretations
   Chapter Notes
5. Automation
   Implementation
     Input Formats
     Installation
     Termination Checks
   Features
     Completion
     Equational Logic Proofs
     Counter Models
     Certification
   Chapter Notes
6. Experiments
   6.1. Test Environment
   6.2. Results
   Chapter Notes
7. Conclusion
   Summary
   Future Work
Bibliography
A. Command Reference
B. Graphical User Interface
   B.1. Main Menu
     B.1.1. File Menu
     B.1.2. Edit Menu
     B.1.3. View Menu
     B.1.4. Settings Menu
     B.1.5. Help Menu
   B.2. Operations Panel
   B.3. Equations List
   B.4. Rules List
   B.5. LPO Precedence
   B.6. Undo/Redo-Stack
   B.7. Status Messages Field
   B.8. Dialogs and Windows
     B.8.1. Add Equations
     B.8.2. LPO Settings
     B.8.3. Termination Prover Settings
     B.8.4. Automatic Completion Settings
Index

1. Introduction

The word problem is the task to determine if two terms are equivalent with respect to a given set of equations. It is a fundamental question in mathematics and computer science, and in the positive case we may make use of the inference system for equational logic (see Figure 2.1) to establish a proof tree as evidence for a given conjecture. To understand the problem we are trying to solve, consider the following set of equations, where f and g are unary function symbols and we write ff instead of f(f(x)):

  E = {ff ≈ f, ggf ≈ g}

Part of a proof tree which shows that fgf ≈ fgg holds with respect to E derives the conjecture fgf ≈ fgg by [t] from fgf ≈ fggff and fggff ≈ fgg, where the left premise follows by [s] and [c] steps from an application [a] of the equation ggf ≈ g, and the right premise follows by a further [t] step from subtrees (elided here) rooted in the equations of E. (The tree is discussed again in Example 2.1.)

The catch is that finding such a tree by hand needs a lot of experience and is undecidable in general, cf. [14] or the example in [1] on page 60. On the other hand the word problem for equational logic is known to be semi-decidable. This means that if a proof tree exists we can always find it by enumerating all possible proof trees up to a fixed size until we find the one we are looking for. But enumeration of proof trees seems not very promising in general. What we want is an algorithm which constructs a proof tree in many cases. A well established method to get a decision procedure for the word problem is completion (if successful). The basic idea of Knuth-Bendix completion [12] is based on Newman's Lemma [17], which states that for terminating systems confluence and local confluence coincide. Hence we maintain a set of terminating rewrite rules and want to make it locally confluent. In this approach a reduction order is given as input to orient equations into rewrite rules and thus maintains termination of the intermediate rewrite systems R_i. We continue the procedure until we arrive at some R_n where all critical pairs are joinable (if completion is successful). But this just means that R_n is locally confluent and because it is also terminating it follows that it is confluent by Newman's Lemma. Even if we have proven that an equation holds with respect to the set of equations, by completion for example, the question of finding a corresponding proof tree in equational logic remains open.

Now, is it possible that we make use of completion to arrive at an equational logic proof tree automatically? If not, may we at least figure out if there is no such proof tree at all or we just were not able to find it? What are the relationships between equational logic and completion? Those are the questions which we will investigate in this thesis.

This thesis is subdivided into seven chapters. Each of these starts with a short overview of the chapter's contents followed by some sections elaborating the presented topics. Also a lot of examples are interspersed among the text to help the reader understand the presented issues. The contents of all chapters are briefly summarized in the sequel. The next chapter sums up the needed preliminaries for the chapters to follow. First the basics of equational logic and term rewriting are described. This is followed by an introduction to completion. Chapter 3 constitutes the main theoretical part of this thesis. Here we introduce recording completion and then describe each of its four phases in detail. We also give a proof of soundness of recording completion. Then we discuss some possible optimizations of the procedure. In Chapter 4 we describe matrix interpretations and how they may be used in collaboration with an SMT solver to find counter models for conjectures in equational logic. We also have an implementation of the methods described in Chapters 3 and 4. The tool KBCV and how recording completion was integrated into it is the topic of Chapter 5. We state where to get the tool and how to use it and also specify its main features and inner workings. Chapter 6 gives some experimental results. We start with a description of the used hardware and the test bed. Then we compare KBCV to some state-of-the-art completion tools. Finally Chapter 7 concludes with a summary of all previous chapters and some ideas for possible future work. At the very end of this thesis you will find an appendix containing the command reference for the command line version of KBCV on the one hand and a short description of its graphical user interface on the other.

1.1. Chapter Notes

At the end of each chapter you find chapter notes which recap the chapter's contents and give some additional information like references to related topics and comparisons to methods similar to those at hand. There are three important publications related to this thesis. The first is my bachelor thesis [19] which describes in detail KBCV 1.0, which was extensively overhauled in order to integrate the results of this thesis. I also co-authored a workshop paper [23] accepted at the IWC 2012 held in Nagoya, which briefly describes recording completion and then sheds more light on the formalizations which were needed in order to arrive at certifiable proofs for completion and equational logic. This work made it possible that proofs generated by KBCV are now certifiable with the proof checker CeTA [20, 23]. Finally I co-authored a system

description [21] of KBCV 1.7, the previous version of the tool, accepted and presented at IJCAR 2012 in Manchester. Readers not familiar with equational logic, term rewriting and completion should now read on in the next chapter, while experts in those topics might as well jump directly to Chapter 3 to learn about recording completion.

2. Preliminaries

This chapter gives a concise introduction to the topics underlying the main results of this thesis. We will begin to sum up the basics of equational logic and take a short survey of its inference system. Then we very briefly recap term rewriting. Finally the concept of completion together with a system of inference rules which model it are described.

2.1. Equational Logic

We will assume some familiarity with terms and start with the most basic definitions. A signature is a set of function symbols associated with fixed arities. Let F be a signature and V a set of variables disjoint from F. By T(F, V) we denote the set of terms over F and V. Positions are used to address symbol occurrences in terms. Given a term t we use Pos(t) to denote the set of positions induced by the term t and we write t|_p with p ∈ Pos(t) for the subterm of t at position p. We write s[t]_p for the result of replacing s|_p with t in s. The subset of positions p ∈ Pos(t) such that the root of t|_p is in F is denoted by Pos_F(t). A substitution σ is a finite mapping from variables to terms. We extend substitutions to terms by defining f(t_1, ..., t_n)σ = f(t_1σ, ..., t_nσ) and write tσ for the result of applying σ to a term t. The set of substitutions over F and V we denote by Σ(F, V). A variable substitution is a substitution from V to V, a renaming is a bijective variable substitution, and a term s is a variant of a term t if s = tσ for some renaming σ. Let □ be a new function symbol which occurs neither in F nor in V and which we will call hole. Now a context is a term t over T(F ∪ {□}, V) with exactly one hole in it. We denote the set of all contexts by C(F, V). An equation is a pair of terms (l, r), written l ≈ r. A finite set of equations is called an equational system (ES for short). An F-algebra A over a signature F consists of a carrier set (or domain) A and a mapping associating each function symbol f ∈ F with an interpretation function f_A : A^n → A. An assignment α is a mapping from V to A. Now we inductively define a mapping [α]_A from the set of terms to A as follows:

  [α]_A(x) = α(x)
  [α]_A(f(t_1, ..., t_n)) = f_A([α]_A(t_1), ..., [α]_A(t_n))

An equation s ≈ t is said to hold in an algebra A (written A ⊨ s ≈ t) if and only if [α]_A(s) = [α]_A(t) for all assignments α. If every equation from E holds in A (A ⊨ E) we say that A is a model of E.
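To make the term notions above concrete, the following minimal Scala sketch (illustrative names only, not the termlib API of Chapter 5) models terms, positions, and substitution application as just defined.

  sealed trait Term
  case class Var(name: String) extends Term
  case class Fun(f: String, args: List[Term]) extends Term

  object Terms {
    // A substitution is a finite mapping from variables to terms;
    // variables not in the map are left unchanged.
    type Subst = Map[String, Term]

    // tσ: apply a substitution homomorphically, f(t1,...,tn)σ = f(t1σ,...,tnσ).
    def substitute(t: Term, sigma: Subst): Term = t match {
      case Var(x)       => sigma.getOrElse(x, Var(x))
      case Fun(f, args) => Fun(f, args.map(substitute(_, sigma)))
    }

    // Pos(t): the positions of t, encoded as lists of argument indices.
    def pos(t: Term): Set[List[Int]] = t match {
      case Var(_)       => Set(Nil)
      case Fun(_, args) =>
        Set(List.empty[Int]) ++ args.zipWithIndex.flatMap {
          case (ti, i) => pos(ti).map(p => (i + 1) :: p)
        }
    }

    // t|_p: the subterm of t at position p.
    def subtermAt(t: Term, p: List[Int]): Term = (t, p) match {
      case (_, Nil)               => t
      case (Fun(_, args), i :: q) => subtermAt(args(i - 1), q)
      case _                      => sys.error("illegal position")
    }

    // s[u]_p: replace the subterm of s at position p by u.
    def replaceAt(s: Term, p: List[Int], u: Term): Term = (s, p) match {
      case (_, Nil)               => u
      case (Fun(f, args), i :: q) =>
        Fun(f, args.updated(i - 1, replaceAt(args(i - 1), q, u)))
      case _                      => sys.error("illegal position")
    }
  }

For example, substitute(Fun("f", List(Var("x"))), Map("x" -> Fun("a", Nil))) yields the term f(a).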

  [r] reflexivity:     derive t ≈ t (no premises)                                  t ∈ T(F, V)
  [s] symmetry:        from s ≈ t derive t ≈ s
  [t] transitivity:    from s ≈ t and t ≈ u derive s ≈ u
  [a] application:     derive lσ ≈ rσ (no premises)                                (l ≈ r) ∈ E, σ ∈ Σ(F, V)
  [c] congruence:      from s_1 ≈ t_1, ..., s_n ≈ t_n derive f(s_1, ..., s_n) ≈ f(t_1, ..., t_n)     f ∈ F

  Figure 2.1.: The inference rules of equational logic.

Now if an equation s ≈ t is derivable from E with the help of the inference rules of Figure 2.1 we also write E ⊢ s ≈ t and say that s ≈ t is a syntactic consequence of E. On the other hand if s ≈ t holds in all models of E we write E ⊨ s ≈ t and say that s ≈ t is a semantic consequence of E. Due to Birkhoff's Theorem [3] the relations ⊢ and ⊨ coincide. We will call the set of all equations s ≈ t which are a consequence of E the equational theory of E.

2.1.1. Inference Rules

In this section we will look at a system of inference rules for equational logic. The rules are shown in Figure 2.1. Here t, s, l, r, t_i and s_i are terms, σ is a substitution, and f a function symbol with arity n. Each rule works on a single equation. We start out with the equation for which we want to know if it is a consequence of a given set of equations and apply rules from the inference system in a bottom-up manner.

  [r] reflexivity:     derive t ≈ t (no premises)         t ∈ T(F, V)
  [s] symmetry:        from s ≈ t derive t ≈ s
  [t] transitivity:    from s ≈ t and t ≈ u derive s ≈ u

The first three rules [r], [s] and [t] just capture the properties of an equivalence relation (reflexivity, symmetry, and transitivity). The remaining two rules are more subtle.

  [a] application:     derive lσ ≈ rσ (no premises)       (l ≈ r) ∈ E, σ ∈ Σ(F, V)

Application [a] states that equational logic is closed under substitutions, i.e., if we take an equation from E and apply the same substitution on both sides the new equation is a consequence of E.

  [c] congruence:      from s_1 ≈ t_1, ..., s_n ≈ t_n derive f(s_1, ..., s_n) ≈ f(t_1, ..., t_n)     f ∈ F

Finally congruence [c] means that equational logic is also closed under all n-ary function symbols f, i.e., if s_1 ≈ t_1, ..., s_n ≈ t_n are provable from E then also f(s_1, ..., s_n) ≈ f(t_1, ..., t_n) is provable from E. Now if an equation really is a consequence of E we may start out with this equation and build an equational logic proof tree bottom-up where the leaves are either applications of [a] or equations which are directly in E. Let us look at an example to make this clear:

Example 2.1. As we will see in Section 2.3 the equation fgf ≈ fgg is a consequence of E = {ff ≈ f, ggf ≈ g}. On this account we also know that there has to exist an equational logic proof tree. Now given the inference rules of equational logic we start out with our equation fgf ≈ fgg at the bottom and try to establish a complete proof tree. A part of such a proof tree (as shown in the introduction) derives fgf ≈ fgg by [t] from fgf ≈ fggff and fggff ≈ fgg, where fgf ≈ fggff follows by [s] and [c] from an application [a] of ggf ≈ g, and fggff ≈ fgg follows by a further [t] step whose subtrees (not shown here) are rooted in the equations ff ≈ f and ggf ≈ g.

But how did we arrive at this? Let's try to find it by hand. So given the equation at the bottom, which inference rule should we use? Obviously the initial equation is not directly in E. Also the left- and right-hand sides are different so we cannot apply [r] at this point. Application of [s] would be possible, but it does not seem that we would gain much from it. So maybe we try [t]? Here the next challenge awaits. If we want to use transitivity, with which term should we instantiate t in the inference rule? In the example above we chose fggff but at this point it is by no means clear that this was a reasonable choice. To learn about another approach to this problem, namely completion, we first have to introduce term rewriting.
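Since each rule of Figure 2.1 manipulates a single equation, proof trees can be represented by a small algebraic data type. The following Scala sketch does so for the string notation of the running example (unary symbols only); the names are illustrative and this is not the representation used in KBCV.

  object EquationalProofTrees {
    // Terms of the running example written as strings of unary symbols:
    // "fgf" stands for f(g(f(x))).
    type Eq = (String, String)

    sealed trait Proof
    case class Refl(t: String)             extends Proof // [r]
    case class Sym(p: Proof)               extends Proof // [s]
    case class Trans(p: Proof, q: Proof)   extends Proof // [t]
    case class App(eq: Eq, suffix: String) extends Proof // [a]: the instance under x -> suffix
    case class Cong(f: String, p: Proof)   extends Proof // [c] for a unary symbol f

    // The equation proved by a tree; middle terms of [t] must agree.
    def conclusion(p: Proof): Eq = p match {
      case Refl(t)        => (t, t)
      case Sym(q)         => val (s, t) = conclusion(q); (t, s)
      case Trans(q1, q2)  =>
        val (s, t1) = conclusion(q1)
        val (t2, u) = conclusion(q2)
        require(t1 == t2, "middle terms of [t] must agree")
        (s, u)
      case App((l, r), w) => (l + w, r + w) // the substitution x -> w(x) appends the string w
      case Cong(f, q)     => val (s, t) = conclusion(q); (f + s, f + t)
    }

    // The left subtree of Example 2.1: [a] gives ggff ≈ gf, [c] adds the context f,
    // and [s] flips the result to fgf ≈ fggff.
    val left: Proof = Sym(Cong("f", App(("ggf", "g"), "f")))
    // conclusion(left) == ("fgf", "fggff")
  }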

2.2. Term Rewriting

A rewrite rule is a pair of terms (l, r), written l → r, such that l is not a variable and r contains only variables also contained in l. A term rewrite system (TRS for short) R is a finite set of rewrite rules. A string rewrite system (SRS for short) is a TRS over a signature that contains only unary function symbols. For convenience we will drop parentheses and variables when working with SRSs. So for example f(g(f(x))) will simply be written as fgf. A rewrite relation is a binary relation on terms that is closed under contexts and substitutions. We write s →_R t if there exists a rewrite rule l → r in R, a substitution σ, and a position p ∈ Pos(s) such that s|_p = lσ and t = s[rσ]_p, and we say that s rewrites to t in one rewrite step. We will drop the subscript R from →_R and its derivatives in the sequel if no confusion can arise. A term s is said to be in normal form (NF for short) with respect to R if there is no term t with s → t, otherwise it is said to be reducible. We write s ← t if t → s. The symmetric closure of → will be denoted by ↔. The concatenation of rewrite steps yields rewrite sequences. As usual →* denotes the reflexive and transitive closure of →. A conversion between terms s and t consists of a finite sequence of terms t_1, ..., t_n with n ≥ 1 such that s = t_1 ↔ t_2 ↔ ... ↔ t_n = t. In this case we write s ↔* t and say that s and t are convertible. If l → r is a rewrite rule and σ a renaming then the rewrite rule lσ → rσ is called a variant of l → r. An overlap (l_1 → r_1, p, l_2 → r_2)_σ of a TRS R consists of variants l_1 → r_1 and l_2 → r_2 of rewrite rules of R without common variables, a position p ∈ Pos_F(l_2), and a most general unifier σ of l_1 and l_2|_p. If p = ε then we require that l_1 → r_1 and l_2 → r_2 are not variants of each other. From such an overlap we obtain the critical pair (l_2σ)[r_1σ]_p ≈ r_2σ. The set of all critical pairs of R is denoted by CP(R). A TRS R is called left-reduced if for every rewrite rule l → r ∈ R the left-hand side l is in normal form with respect to R \ {l → r}. We say that R is right-reduced if the right-hand side r of every rewrite rule l → r ∈ R is a normal form. A TRS that is both left- and right-reduced is simply called reduced. Two terms s, t are joinable if there exists a term u such that s →* u *← t. This is called a joining sequence for s and t. A term s is said to be confluent if for all terms t and u with t *← s →* u we have that t and u are joinable. A TRS R is confluent if all terms are confluent. We also say that it has the Church-Rosser property (CR for short). A term s is locally confluent if for all terms t and u with t ← s → u we have that t and u are joinable. A TRS R is locally confluent if all terms are locally confluent. We also say that it has the weak Church-Rosser property (WCR for short). A TRS R is said to be equivalent to an ES E if and only if ↔*_R coincides with the equational theory of E.
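For the SRS notation used throughout the running example, a single rewrite step can be sketched in a few lines of Scala (illustrative code, not the termlib implementation): a rule l → r applies to a string term s wherever l occurs as a substring, the prefix before the occurrence playing the role of the context and the suffix after it that of the substitution.

  object StringRewriteStep {
    type Rule = (String, String) // (lhs, rhs), e.g. ("ff", "f") for the rule ff -> f

    // All one-step successors of s: apply some rule l -> r at some position.
    def step(rules: List[Rule], s: String): List[String] =
      for {
        (l, r) <- rules
        i      <- (0 to s.length - l.length).toList
        if s.startsWith(l, i)
      } yield s.substring(0, i) + r + s.substring(i + l.length)

    // s is in normal form iff no rule applies.
    def isNormalForm(rules: List[Rule], s: String): Boolean = step(rules, s).isEmpty
  }

For instance, step(List(("ff", "f"), ("ggf", "g")), "ggff") yields List("ggf", "gf"), the two sides of the critical peak used in Section 2.3.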

  deduce:     (E, R) ⊢ (E ∪ {s ≈ t}, R)                 if s ← u → t
  orient:     (E ∪ {s ≈̇ t}, R) ⊢ (E, R ∪ {s → t})       if s > t
  simplify:   (E ∪ {s ≈̇ t}, R) ⊢ (E ∪ {u ≈̇ t}, R)       if s → u
  delete:     (E ∪ {s ≈ s}, R) ⊢ (E, R)
  compose:    (E, R ∪ {s → t}) ⊢ (E, R ∪ {s → u})       if t → u
  collapse:   (E, R ∪ {s → t}) ⊢ (E ∪ {u ≈ t}, R)       if s ⊐→ u

  Figure 2.2.: The inference rules of completion.

2.2.1. Termination

A term t is called terminating if it admits no infinite rewrite sequences starting at t. A TRS R is terminating if all terms are terminating. We also say that it is strongly normalizing (SN for short). A reduction order > is a proper order on terms that is well-founded, monotone, and stable. A TRS R is terminating if and only if there exists a reduction order > such that l > r for each rewrite rule l → r ∈ R. A proper order on a finite signature is called a precedence and denoted by the symbol ≻. For any precedence the induced lexicographic path order (LPO) [10], denoted by >_lpo, is a reduction order. We write s >_lpo t if s = f(s_1, ..., s_n) and one of the following cases holds:

  (1) t = f(t_1, ..., t_n) and ∃i: ∀j < i: s_j = t_j, s_i >_lpo t_i, and ∀j > i: s >_lpo t_j
  (2) t = g(t_1, ..., t_m) and f ≻ g and ∀j: s >_lpo t_j
  (3) ∃i: s_i >_lpo t or s_i = t

A confluent and terminating TRS R is said to be complete.

2.3. Completion

Given a set of equations E we sometimes need to know if another equation is a (semantic) consequence of this set. This is called the word problem for E. To solve the word problem automatically it would be nice to have an algorithm to apply. Unfortunately the word problem (like other non-trivial problems) is undecidable in general [14]. Because much effort was spent in the past to investigate the word problem we can make use of completion [12] today to tackle it. Informally speaking the completion procedure takes as input a reduction

order and a set of equations E and tries to construct an equivalent TRS R out of it which is confluent and terminating. If successful this yields a decision procedure for the word problem of E because to test E ⊢ s ≈ t we reduce s and t to normal form with respect to R (those exist since R is terminating). Since R is also confluent the normal forms have to be unique. Hence E ⊢ s ≈ t if and only if the normal forms of s and t coincide. There are a few different approaches towards completion. The one by Bachmair [2] using a system of inference rules seems to be the most adequate choice in our case. The inference rules are listed in Figure 2.2 and we will explain them rule by rule in the following. Basically the inference rules use pairs (E, R). Here E is a set of equations and R is a terminating TRS. We start out with the pair (E_0, ∅) where E_0 denotes the initial set of equations for which we try to build a decision procedure. Additionally we use a reduction order > as input which we need to orient equations into rewrite rules. Writing (E, R) ⊢ (E′, R′) to indicate that (E′, R′) is obtained from (E, R) by one of the inference rules of Figure 2.2 we define a completion procedure:

Definition 2.2. A completion procedure is a program that accepts as input a finite set of equations E_0 (together with a reduction order >) and uses the inference rules of Figure 2.2 to construct a sequence

  (E_0, ∅) ⊢ (E_1, R_1) ⊢ (E_2, R_2) ⊢ (E_3, R_3) ⊢ ...

Such a sequence is called a run of the completion procedure on input E_0 and >. A finite run (E_0, ∅) ⊢^n (∅, R_n) is successful if R_n is locally confluent. The result below follows from [1, Theorem 7.2.8] specialized to finite runs.

Lemma 2.3. Let (E_0, ∅) ⊢^n (∅, R_n) be a successful run of completion. Then R_n is terminating, confluent, and equivalent to E_0.

The procedure may have three different outcomes. Firstly, after n applications of rules we arrive at (∅, R_n) where all critical pairs of R_n are joinable, i.e., R_n is WCR, and because each intermediate R_i is ensured to be terminating by the given reduction order > it follows that R_n is confluent by Newman's Lemma. Hence R_n is complete and also equivalent to E_0 by construction of the rules of the inference system. In this case R_n yields a decision procedure for the word problem for E_0 (as sketched above). Another possibility is that we get stuck, i.e., the orient inference rule is not applicable anymore while E_n is not empty yet. In this case we have to output failure because the completion procedure did not succeed in finding a decision procedure for the word problem of E_0. In the third case the procedure may run forever, adding new rules to R_n and never arriving at an empty E_n.
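As a concrete illustration of the decision procedure obtained from a successful run, the following Scala sketch (illustrative only, using the string notation for SRSs) decides s ≈ t by reducing both sides to normal form with respect to a complete TRS R and comparing the results.

  object WordProblem {
    type Rule = (String, String)

    // One rewrite step with the first applicable rule, None if s is in normal form.
    def step(rules: List[Rule], s: String): Option[String] =
      (for {
        (l, r) <- rules
        i      <- (0 to s.length - l.length).toList
        if s.startsWith(l, i)
      } yield s.substring(0, i) + r + s.substring(i + l.length)).headOption

    // Unique normal form with respect to a complete (terminating and confluent) TRS.
    @annotation.tailrec
    def nf(rules: List[Rule], s: String): String = step(rules, s) match {
      case Some(t) => nf(rules, t)
      case None    => s
    }

    // E |- s ≈ t iff the R-normal forms of s and t coincide (R complete and equivalent to E).
    def equivalent(rules: List[Rule], s: String, t: String): Boolean =
      nf(rules, s) == nf(rules, t)
  }

With the complete TRS {ff → f, gf → g, gg → g} constructed in Example 2.4 below, equivalent(List(("ff", "f"), ("gf", "g"), ("gg", "g")), "fgf", "fgg") returns true, whereas the same call for the conjecture ("f", "g") returns false.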

After this overview let us take a closer look at the individual rules.

  deduce:     (E, R) ⊢ (E ∪ {s ≈ t}, R)        if s ← u → t

Maybe the most important rule is deduce which is used to derive new consequences from the given equations. For terminating systems we can restrict deduce to overlaps. The rule then reads like this: If we find an overlap between rules in R that yields a critical pair s ≈ t then we add this new equation to our set E and continue with the new E while R remains unchanged.

  orient:     (E ∪ {s ≈̇ t}, R) ⊢ (E, R ∪ {s → t})        if s > t

The rule orient uses the given reduction order > to orient equations from E (if possible) into rewrite rules in R. Here s ≈̇ t means that orient can be instantiated with s ≈ t or t ≈ s.

  simplify:   (E ∪ {s ≈̇ t}, R) ⊢ (E ∪ {u ≈̇ t}, R)        if s → u

Simplify is used to keep the equations in E as simple as possible. To this end we use the rules from R to reduce the terms in equations from E to normal form with respect to R. Here the dot above ≈ should again emphasize that the left-hand side or the right-hand side of an equation in E can be reduced with respect to R.

  delete:     (E ∪ {s ≈ s}, R) ⊢ (E, R)

The delete rule is used to remove trivial equations, i.e., equations where the left-hand side is the same as the right-hand side, from the set E. So far the rules mainly worked on the equations in E. The final two rules are used to keep R reduced.

  compose:    (E, R ∪ {s → t}) ⊢ (E, R ∪ {s → u})        if t → u

Compose works on right-hand sides of rules in R. If we are able to reduce the right-hand side t of a rule s → t to u with respect to R we replace s → t by s → u in R, ensuring right-reducedness of R.

  collapse:   (E, R ∪ {s → t}) ⊢ (E ∪ {u ≈ t}, R)        if s ⊐→ u

Finally collapse works on left-hand sides of rules in R and is used to reduce them to normal form with respect to the other rules in R. Since the new left-hand side u is possibly smaller than t with respect to the reduction order > we have to remove the rule from R and add it as an equation to E again. The ⊐ above the arrow in the side-condition of the rule just means that s is reducible with a

rule l → r ∈ R where l must not be reducible by s → t (this is called the strict encompassment condition; in [20] we show that for finite completion runs this condition can be dropped). The best way to get a grip on this system of inference rules is to look at an example:

Example 2.4. Given the set of equations E = {ff ≈ f, ggf ≈ g} we will use the rules from the inference system of Figure 2.2 to arrive at a confluent and terminating TRS R equivalent to E, constituting a decision procedure for the word problem for E. To orient equations we will just use LPO with empty precedence. We start out with the pair (E, ∅). Now we apply orient twice to orient the two equations in E into rewrite rules in R from left to right. This gives

  (∅, {ff → f, ggf → g}).

At this point we find an overlap between the left-hand sides of the two new rules and deduce the critical pair ggf ≈ gf. Our tuple looks as follows:

  ({ggf ≈ gf}, {ff → f, ggf → g})

As you may have already noticed the left-hand side of this new equation can be reduced with the second rule in R by application of simplify which gives us:

  ({g ≈ gf}, {ff → f, ggf → g})

We immediately orient this simplified equation into a rule from right to left, arriving at

  (∅, {ff → f, ggf → g, gf → g}).

The last two rules overlap and we deduce the new critical pair gg ≈ g which we orient from left to right yielding:

  (∅, {ff → f, ggf → g, gf → g, gg → g}).

Now we use this new rule to collapse the left-hand side of the second rule ggf → g and add the new equation to E:

  ({gf ≈ g}, {ff → f, gf → g, gg → g}).

We simplify the left-hand side of the new equation using rule gf → g. This yields the trivial equation g ≈ g which we delete immediately, arriving at:

  (∅, {ff → f, gf → g, gg → g}).

All of the four critical pairs are joinable, the set of equations is empty, and R = {ff → f, gf → g, gg → g} constitutes a decision procedure for the word problem for our initial E. For example if we want to know if the equation fgf ≈ fgg is a consequence of E we just rewrite both sides to normal form with respect to R. If the two normal forms are the same the two terms are equivalent with respect to E and we know that the equation is a consequence of E. In this example we have fgf → fg ← fgg by the second and third rules of R respectively. Hence both sides rewrite to the normal form fg and hence fgf ≈ fgg is a consequence of E. As we will see in Chapter 3 the key to our approach towards automatic equational logic proofs lies hidden in the side-conditions of the discussed inference rules.
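The reduction order used to orient equations in Example 2.4 is LPO with empty precedence. A minimal Scala sketch of the LPO definition from Section 2.2.1 (illustrative code over a simple term type, not KBCV's implementation) looks as follows.

  object Lpo {
    sealed trait Term
    case class Var(name: String) extends Term
    case class Fun(f: String, args: List[Term]) extends Term

    // s >_lpo t for a given strict precedence on function symbols.
    def gt(prec: (String, String) => Boolean)(s: Term, t: Term): Boolean = s match {
      case Var(_) => false // a variable is never LPO-greater than any term
      case Fun(f, ss) =>
        // case (3): some argument of s is equal to t or already greater than t
        ss.exists(si => si == t || gt(prec)(si, t)) || (t match {
          case Var(_) => false
          case Fun(g, ts) if f == g && ss.length == ts.length =>
            // case (1): compare the argument lists lexicographically
            def lex(us: List[Term], vs: List[Term]): Boolean = (us, vs) match {
              case (u :: us1, v :: vs1) =>
                if (u == v) lex(us1, vs1)
                else gt(prec)(u, v) && vs1.forall(vj => gt(prec)(s, vj))
              case _ => false
            }
            lex(ss, ts)
          case Fun(g, ts) =>
            // case (2): the root symbol of s is greater in the precedence
            prec(f, g) && ts.forall(tj => gt(prec)(s, tj))
        })
    }

    // Example: with the empty precedence, gf >_lpo g (and likewise ff >_lpo f, gg >_lpo g).
    val empty: (String, String) => Boolean = (_, _) => false
    val gfGtG: Boolean =
      gt(empty)(Fun("g", List(Fun("f", List(Var("x"))))), Fun("g", List(Var("x"))))
  }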

2.4. Chapter Notes

In this chapter we briefly introduced the basics of equational logic and term rewriting. We further described the fundamentals of completion. For a thorough introduction to these topics see [1] or [16]. A proof of undecidability of the word problem for equational logic can be found in [14]. Completion was first introduced in [12]. Bachmair uses an inference system to model completion [2]. As shown in [20] the strict encompassment side-condition for the collapse rule may be dropped for finite completion proofs, i.e., where the completion procedure stops successfully after a finite number of steps. The lexicographic path order is described in [10]. In the next chapter we propose an approach to find equational logic proof trees automatically, based on a combination of equational logic and completion.

3. Recording Completion

In this chapter we will see how the inference rules of completion may be augmented in order to store intermediate information which will be needed to construct equational logic proof trees automatically. We first will describe the new inference system and then look at all four phases of recording completion in turn by means of an example. Then we will show recording completion to be sound and finally investigate some possible optimizations of the procedure.

The main idea of this approach is to investigate the relationship between completion and equational logic. The former yields a decision procedure for the word problem of a given set of equations if successful, while the latter gives us the means to build equational logic proof trees if we can figure out how, or have a lot of time and just enumerate proof trees up to a given size. In this section we show that combining these two methods allows us to construct proof trees automatically. In the sequel we will refer to completion with respect to the inference rules from Figure 2.2 (in contrast to recording completion) as normal or standard completion. Take a look at E_0 and R_n from Section 2.3

  E_0 = {ff ≈ f, ggf ≈ g}
  R_n = {ff → f, gf → g, gg → g}

together with the joining sequence fgf → fg ← fgg with respect to R_n. Here the second and the third rule from R_n were used to arrive at the normal form fg. Those are both not rules corresponding to equations from E_0. They were derived along the way during the completion process. But to be able to construct an equational logic proof tree out of the sequence we first of all need to resolve the derived rules in the sequence until they correspond to equations in E_0. Unfortunately normal completion, though it derives new rules, does not save any information about how it derived them. This information lies hidden in the side-conditions of the inference rules for completion. So if we store this intermediate information somehow we may use it later on to build an equational logic proof tree from a joining sequence. This extended kind of completion we will call recording completion, since it is able to record the history of how rules in the complete TRS were found. To ensure this we extend the inference rules of completion with a third component H (short

  deduce:      (E, R, H) ⊢ (E ∪ {m: s ≈ t}, R, H ∪ {m: s ←_j u →_k t})                              if s ←_j u →_k t
  orient_l:    (E ∪ {i: s ≈ t}, R, H) ⊢ (E, R ∪ {i: s → t}, H)                                      if s > t
  orient_r:    (E ∪ {i: s ≈ t}, R, H ∪ {i: s ⋈ u ⋈′ t}) ⊢ (E, R ∪ {i: t → s}, H ∪ {i: t ⋈′⁻¹ u ⋈⁻¹ s})   if t > s
  simplify_l:  (E ∪ {i: s ≈ t}, R, H) ⊢ (E ∪ {m: u ≈ t}, R, H ∪ {m: u ←_l s ↔_i t})                 if s →_l u
  simplify_r:  (E ∪ {i: s ≈ t}, R, H) ⊢ (E ∪ {m: s ≈ u}, R, H ∪ {m: s ↔_i t →_l u})                 if t →_l u
  delete:      (E ∪ {i: s ≈ s}, R, H ∪ {i: s ⋈ v ⋈′ s}) ⊢ (E, R, H)
  compose:     (E, R ∪ {i: s → t}, H) ⊢ (E, R ∪ {m: s → u}, H ∪ {m: s →_i t →_j u})                 if t →_j u
  collapse:    (E, R ∪ {i: s → t}, H) ⊢ (E ∪ {m: u ≈ t}, R, H ∪ {m: u ←_j s →_i t})                 if s →_j u

  Figure 3.1.: The inference rules of recording completion.

for History). Additionally, for convenience we give each entry in E, R and H a unique index. Now a history entry is of the form

  i: s ⋈_j u ⋈′_k t

where i is the index of the entry, j and k are indices of equations or rules, s, u and t are terms, and ⋈, ⋈′ may be instantiated with →, ←, or ↔. The new inference rules are listed in Figure 3.1. We may use them in the same way as with normal completion. We set H_0 = {i: s ↔_i t ↔_i t | i: s ≈ t ∈ E_0}, i.e., we initialize the history with a corresponding entry for each equation in E_0 (writing an equation s ≈ t as the two-step entry s ↔_i t ↔_i t allows for a uniform treatment of history entries in the inference rules). Then we start out with the triple (E_0, ∅, H_0) and apply rules from the new inference system repeatedly until we hopefully arrive at some point at (∅, R_n, H_n) where R_n is a complete TRS (which we will show to be equivalent to E_0 in Section 3.5). If recording completion was successful, H_n contains important information about the intermediate steps which we will need later on to construct an equational logic proof tree from a joining sequence with

respect to R_n. Let us now take a closer look at each of the new inference rules.

  deduce:      (E, R, H) ⊢ (E ∪ {m: s ≈ t}, R, H ∪ {m: s ←_j u →_k t})        if s ←_j u →_k t

The deduce rule works in exactly the same way as before, only it additionally saves the information where the critical pair yielding the new equation came from. If we find an overlap between rules indexed j and k we not only put the new critical pair s ≈ t with fresh index m into E but also save the corresponding information s ←_j u →_k t associated with the same index m in H. So we are later able to infer where the new equation was derived from. Here and in all later rules m is supposed to be a fresh index which is greater than all other indices used so far. This convention will be essential to show the soundness of recording completion. The orient rule is split into two separate rules orient_l and orient_r.

  orient_l:    (E ∪ {i: s ≈ t}, R, H) ⊢ (E, R ∪ {i: s → t}, H)        if s > t

In the simple case, if we are able to orient_l s ≈ t from left to right, we do not need to modify the information stored in H.

  orient_r:    (E ∪ {i: s ≈ t}, R, H ∪ {i: s ⋈ u ⋈′ t}) ⊢ (E, R ∪ {i: t → s}, H ∪ {i: t ⋈′⁻¹ u ⋈⁻¹ s})        if t > s

In contrast, if we orient_r s ≈ t from right to left, the according information also has to be updated in the history H. We just take the corresponding history entry and mirror it. Note that we also have to mirror the relational symbols ⋈, ⋈′ in the entry (which is hinted at by putting ⁻¹ in the superscript), i.e., → becomes ← and vice versa while ↔ is not changed. Likewise simplify is split into two separate rules simplify_l and simplify_r.

  simplify_l:  (E ∪ {i: s ≈ t}, R, H) ⊢ (E ∪ {m: u ≈ t}, R, H ∪ {m: u ←_l s ↔_i t})        if s →_l u
  simplify_r:  (E ∪ {i: s ≈ t}, R, H) ⊢ (E ∪ {m: s ≈ u}, R, H ∪ {m: s ↔_i t →_l u})        if t →_l u

The simplify rules again work just like in normal completion (simplify_l is used to simplify the left-hand side of an equation whereas simplify_r simplifies the right-hand side) but additionally the information hidden in the side-condition is also stored in H. The history entry u ←_l s ↔_i t in simplify_l just means: There was an equation s ≈ t at index i and s was simplified with a rule indexed l to u. Conversely the entry s ↔_i t →_l u in simplify_r states: There was an equation s ≈ t at index i and t was simplified with a rule indexed l to u.
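The history entries manipulated by these rules can be modelled by a small data structure. The following Scala sketch (hypothetical names, not the actual types of the termlib package) records, for each derived index, the two-step conversion it abbreviates, together with the mirroring operation needed by orient_r.

  object History {
    // The relation symbol between two terms of a history entry.
    sealed trait Rel
    case object L2R extends Rel   // ->  (applied left-to-right)
    case object R2L extends Rel   // <-  (applied right-to-left)
    case object Eqn extends Rel   // <-> (unoriented equational step)

    // One step "related to `to` via the equation/rule with index `idx`".
    case class Step(rel: Rel, idx: Int, to: String)

    // Entry i: s ⋈_j u ⋈'_k t, stored as a source term plus two steps.
    case class Entry(index: Int, source: String, first: Step, second: Step)

    // Mirroring, as required by orient_r: reverse the order of the terms
    // and flip -> and <-, while <-> stays unchanged.
    def flip(r: Rel): Rel = r match {
      case L2R => R2L
      case R2L => L2R
      case Eqn => Eqn
    }

    def mirror(e: Entry): Entry = {
      val Entry(i, s, Step(r1, j, u), Step(r2, k, t)) = e
      Entry(i, t, Step(flip(r2), k, u), Step(flip(r1), j, s))
    }
  }

With entry 4 of the running example as reconstructed below, g ←_2 ggf ↔_3 gf corresponds to Entry(4, "g", Step(R2L, 2, "ggf"), Step(Eqn, 3, "gf")), and mirror of it gives Entry(4, "gf", Step(Eqn, 3, "ggf"), Step(L2R, 2, "g")), i.e., gf ↔_3 ggf →_2 g.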

  delete:      (E ∪ {i: s ≈ s}, R, H ∪ {i: s ⋈ v ⋈′ s}) ⊢ (E, R, H)

This rule just states that if we remove a trivial equation we may also discard the corresponding history entry along with it, as it is not used anymore.

  compose:     (E, R ∪ {i: s → t}, H) ⊢ (E, R ∪ {m: s → u}, H ∪ {m: s →_i t →_j u})        if t →_j u

Again application of the compose rule does the same as in normal completion. The history entry s →_i t →_j u which is generated in addition means: We had a rule s → t at index i and we reduced t with rule j to u.

  collapse:    (E, R ∪ {i: s → t}, H) ⊢ (E ∪ {m: u ≈ t}, R, H ∪ {m: u ←_j s →_i t})        if s →_j u

Finally, the new version of collapse also does the same as before and just additionally stores the history entry u ←_j s →_i t which means: We had a rule s → t at index i and we reduced s with rule j to u.

Having established this new inference system we now can make use of it to arrive at equational logic proof trees automatically (if completion is successful). We can do this in four steps:

1. We use recording completion and if we get a successful run we know by Lemma 2.3 that we have found a complete TRS R constituting a decision procedure for the word problem for E, and a set of corresponding history entries H.
2. In the next phase we check if the equation under investigation really is a consequence of E. If this is the case we have a joining sequence with respect to R at our disposal.
3. In the third phase we expand this joining sequence with help of H until only steps due to oriented equations from E are left.
4. Finally in the last phase we build an equational logic proof tree from this fully expanded sequence.

The next four sections explain these phases in more detail by means of an example. We have given expressive names to the different phases which should help to remember them easily.

3.1. Record Phase

The first phase is called record phase because it makes use of recording completion. We want to complete the set of equations

  E = {ff ≈ f, ggf ≈ g}.

So we start out with the triple:

  E_0 = {1: ff ≈ f, 2: ggf ≈ g}
  R_0 = ∅
  H_0 = {1: ff ↔_1 f ↔_1 f, 2: ggf ↔_2 g ↔_2 g}

We use orient_l twice to orient equations 1 and 2 into rules 1 and 2 from left to right and arrive at:

  E_2 = ∅
  R_2 = {1: ff → f, 2: ggf → g}
  H_2 = {1: ff ↔_1 f ↔_1 f, 2: ggf ↔_2 g ↔_2 g}

So far this looks pretty much the same as with normal completion. But now, as we use deduce to get a critical pair from an overlap between the left-hand sides of rules 2 and 1, we also save the corresponding entry in our history H.

  E_3 = {3: ggf ≈ gf}
  R_3 = {1: ff → f, 2: ggf → g}
  H_3 = {1: ff ↔_1 f ↔_1 f, 2: ggf ↔_2 g ↔_2 g, 3: ggf ←_1 ggff →_2 gf}

Next we simplify_l equation 3 by application of rule 2 and again save the corresponding history entry.

  E_4 = {4: g ≈ gf}
  R_4 = {1: ff → f, 2: ggf → g}
  H_4 = {1: ff ↔_1 f ↔_1 f, 2: ggf ↔_2 g ↔_2 g, 3: ggf ←_1 ggff →_2 gf, 4: g ←_2 ggf ↔_3 gf}

We use orient_r to orient equation 4 into a rule and also mirror the corresponding history entry.

  E_5 = ∅
  R_5 = {1: ff → f, 2: ggf → g, 4: gf → g}
  H_5 = {1: ff ↔_1 f ↔_1 f, 2: ggf ↔_2 g ↔_2 g, 3: ggf ←_1 ggff →_2 gf, 4: gf ↔_3 ggf →_2 g}

Now we find a new overlap between rules 2 and 4 and deduce equation 5 (and save the corresponding history entry).

  E_6 = {5: gg ≈ g}
  R_6 = {1: ff → f, 2: ggf → g, 4: gf → g}
  H_6 = {1: ff ↔_1 f ↔_1 f, 2: ggf ↔_2 g ↔_2 g, 3: ggf ←_1 ggff →_2 gf, 4: gf ↔_3 ggf →_2 g, 5: gg ←_4 ggf →_2 g}

Orient_l from left to right on equation 5 gives us rule 5.

  E_7 = ∅
  R_7 = {1: ff → f, 2: ggf → g, 4: gf → g, 5: gg → g}
  H_7 = {1: ff ↔_1 f ↔_1 f, 2: ggf ↔_2 g ↔_2 g, 3: ggf ←_1 ggff →_2 gf, 4: gf ↔_3 ggf →_2 g, 5: gg ←_4 ggf →_2 g}

We use rule 5 to collapse the left-hand side of rule 2 which gives us equation 6 and the corresponding history entry.

  E_8 = {6: gf ≈ g}
  R_8 = {1: ff → f, 4: gf → g, 5: gg → g}
  H_8 = {1: ff ↔_1 f ↔_1 f, 2: ggf ↔_2 g ↔_2 g, 3: ggf ←_1 ggff →_2 gf, 4: gf ↔_3 ggf →_2 g, 5: gg ←_4 ggf →_2 g, 6: gf ←_5 ggf →_2 g}

At this point we may use rule 4 to simplify_l equation 6, yielding equation 7 together with its history entry.

  E_9 = {7: g ≈ g}
  R_9 = {1: ff → f, 4: gf → g, 5: gg → g}
  H_9 = {1: ff ↔_1 f ↔_1 f, 2: ggf ↔_2 g ↔_2 g, 3: ggf ←_1 ggff →_2 gf, 4: gf ↔_3 ggf →_2 g, 5: gg ←_4 ggf →_2 g, 6: gf ←_5 ggf →_2 g, 7: g ←_4 gf ↔_6 g}

But this trivial equation 7 together with the corresponding history entry disappears by an immediate application of delete.

  E_10 = ∅
  R_10 = {1: ff → f, 4: gf → g, 5: gg → g}
  H_10 = {1: ff ↔_1 f ↔_1 f, 2: ggf ↔_2 g ↔_2 g, 3: ggf ←_1 ggff →_2 gf, 4: gf ↔_3 ggf →_2 g, 5: gg ←_4 ggf →_2 g, 6: gf ←_5 ggf →_2 g}

Finally, since there is no rule with index 6 in R_10 we may also discard the corresponding history entry, for it will never be used. So we arrive at the complete TRS R_n equivalent to E_0 (see Section 3.5) and history H_n (here n = 10).

  E_n = ∅
  R_n = {1: ff → f, 4: gf → g, 5: gg → g}
  H_n = {1: ff ↔_1 f ↔_1 f, 2: ggf ↔_2 g ↔_2 g, 3: ggf ←_1 ggff →_2 gf, 4: gf ↔_3 ggf →_2 g, 5: gg ←_4 ggf →_2 g}

We will continue our example in the next section.

3.2. Compare Phase

Suppose we want to know if the equation fgf ≈ fgg holds with respect to our E (which we of course already know since Section 2.3). To get an answer we have to compare the left- and right-hand sides with respect to R_n, i.e., we have to rewrite them both to normal form with respect to R_n. We can rewrite the left-hand side using rule 4 at position 1 to fg and the right-hand side using rule 5 at position 1 also to fg. This gives us the joining sequence

  fgf →_4 fg ←_5 fgg

which we will expand in the next phase.

3.3. Recall Phase

This phase is called recall phase because we will recall the history of our joining sequence from the previous phase step by step. This works as follows: While there are rewrite steps with indices not in E_0 in the joining sequence, we replace them with the corresponding history entry from H_n. This process terminates since a history entry of index m only refers to strictly smaller indices. Because we started to build our history only from equations in E_0, by doing so we will eventually arrive at a conversion which only contains rewrite steps with indices

in E_0. The order in which we replace rewrite steps with history entries does not matter. To simplify matters we just always start out at the far left and search to the right for steps to replace, i.e., if we did just replace one step we again start out at the far left. If we arrive at the right end of the sequence at some point we are done. What is also important to notice is that there may be some context and some substitution involved in a rewrite step. If this is the case we also have to integrate this into the replacing history entry. Let us resume the running example. We start out with the joining sequence from the last section

  fgf →_4 fg ←_5 fgg.

We will also need to look at E_0, R_n and H_n:

  E_0 = {1: ff ≈ f, 2: ggf ≈ g}
  R_n = {1: ff → f, 4: gf → g, 5: gg → g}
  H_n = {1: ff ↔_1 f ↔_1 f, 2: ggf ↔_2 g ↔_2 g, 3: ggf ←_1 ggff →_2 gf, 4: gf ↔_3 ggf →_2 g, 5: gg ←_4 ggf →_2 g}

We start out at the far left in our joining sequence and see that 4 is not an index in E_0 and that to get from fgf to fg with rule 4 we need to take the context f into account. Thus we replace the rewrite step involving rule 4 in the sequence with the corresponding history entry after application of context f and arrive at the new sequence

  fgf ↔_3 fggf →_2 fg ←_5 fgg.

The first step in this sequence is done with rule 3 which again is not an index in E_0. Moreover this time we not only have to consider a context f but also have to mirror the history entry because rule 3 was applied from right to left. We obtain

  fgf ←_2 fggff →_1 fggf →_2 fg ←_5 fgg.

Now we have to search until the last step to find an index not in E_0. Again our context is f and we have to mirror the history entry because the rule was applied from right to left. This yields

  fgf ←_2 fggff →_1 fggf →_2 fg ←_2 fggf →_4 fgg.

All indices are in E_0 except 4 and the context is fg this time. Replacing according to history entry 4 we obtain

  fgf ←_2 fggff →_1 fggf →_2 fg ←_2 fggf ↔_3 fgggf →_2 fgg.

Finally there is a last application of rule 3 (which is not in E_0) left. The context is fg and the rule was applied from right to left so we have to mirror the history entry and arrive at the final sequence

  fgf ←_2 fggff →_1 fggf →_2 fg ←_2 fggf ←_2 fgggff →_1 fgggf →_2 fgg.

This conversion we can use in the next phase to build an equational logic proof tree out of it.
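The recall phase can be sketched over the string representation: a conversion is a list of indexed steps, and any step whose index is not in E_0 is replaced by its (possibly mirrored) history entry, placed back into the surrounding context and substitution. The Scala code below is illustrative only (hypothetical types, strings as terms); it searches for the context instead of recording the rewrite position, as a real implementation would.

  object Recall {
    // A history entry i: t0 ~(j) t1 ~(k) t2, kept as three terms and two indices.
    case class Entry(t0: String, j: Int, t1: String, k: Int, t2: String)

    // A conversion s0 ~(i1) s1 ~(i2) ... ~(in) sn.
    case class Conversion(start: String, steps: List[(Int, String)])

    // Decompose inst = prefix + pattern + suffix, if possible.
    private def decompose(inst: String, pattern: String): Option[(String, String)] = {
      val i = inst.indexOf(pattern)
      if (i < 0) None else Some((inst.take(i), inst.drop(i + pattern.length)))
    }

    // Replace every step whose index is not in e0 by its history entry,
    // mirrored if the step traverses the entry from right to left.
    def recall(e0: Set[Int], h: Map[Int, Entry], c: Conversion): Conversion = {
      def expand(from: String, idx: Int, to: String): List[(Int, String)] = {
        val Entry(t0, j, t1, k, t2) = h(idx)
        decompose(from, t0) match {
          case Some((p, w)) if to == p + t2 + w =>       // entry used from left to right
            List((j, p + t1 + w), (k, to))
          case _ =>                                      // entry used from right to left: mirror
            decompose(from, t2) match {
              case Some((p, w)) => List((k, p + t1 + w), (j, to))
              case None         => sys.error("history entry does not match the step")
            }
        }
      }
      val expanded = c.steps.foldLeft((c.start, List.empty[(Int, String)])) {
        case ((from, acc), (idx, to)) =>
          val part = if (e0(idx)) List((idx, to)) else expand(from, idx, to)
          (to, acc ++ part)
      }._2
      val result = Conversion(c.start, expanded)
      if (expanded.forall { case (i, _) => e0(i) }) result else recall(e0, h, result)
    }
  }

Running this on the joining sequence fgf →_4 fg ←_5 fgg with the entries of H_n reproduces the eight-term conversion shown above.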

  [Figure 3.2.: Result of plant & grow phase — the complete equational logic proof tree for fgf ≈ fgg built from the conversion above, with [t] steps chaining the conversion at the root and [s], [c], and [a] steps above each leaf equation.]

3.4. Plant & Grow Phase

In the final phase we take the conversion from the last section. Because this sequence does only contain rewrite steps using rules with indices in E_0 we may consider the single steps as applications of equations from E_0. But we will keep the arrows because they show in which direction the equation was applied. What we want to do now is to apply the inference system for equational logic. The rules from this system work on single equations (not sequences of equations) so we first have to expand the sequence from the previous section to a sequence of single equations. So we get from

  fgf ←_2 fggff →_1 fggf →_2 fg ←_2 fggf ←_2 fgggff →_1 fgggf →_2 fgg

to

  fgf ←_2 fggff, fggff →_1 fggf, fggf →_2 fg, fg ←_2 fggf, fggf ←_2 fgggff, fgggff →_1 fgggf, fgggf →_2 fgg.

This may be seen as a slice plane through the equational logic proof tree we are looking for. What remains is on one hand to plant the tree and on the other to grow its branches. The former can be achieved by starting out at the far left and using the [t] rule of equational logic with the first two equations. This gives us a new equation, consisting of the left-hand side of the first equation and the right-hand side of the second one, which we directly give into another application of [t] together with the next equation in the sequence and so on, until we arrive at the root of the tree. Now we have the part of the tree from the root to the slice plane. From there we have to grow the tree until all of its branches end in leaves. This can be done using the other rules of equational logic. For each remaining proof obligation the [s] rule is applied if an equation was used right-to-left. Afterwards common contexts are stripped off with the [c] rule until the equation is in E_0 or an application of [a] finishes the proof. The final result can be seen in Figure 3.2.

In Figure 3.3 the interplay of the four phases of recording completion is visualized. The ES E together with a reduction order > for which we want to decide the word problem is the input to the record phase. Record runs

exhaustively, i.e., until no inference rule is applicable any more. If it terminates it produces outputs R and H if successful. In the compare phase we get two inputs. On one hand the equation s ≈ t and on the other the TRS R. If the normal forms of the left- and right-hand sides of s ≈ t are different, compare is not successful and we take the edge labeled no. Now if the R produced in the record phase is complete we know for sure that s is not equal to t with respect to E. If R is not complete, we are not able to decide the word problem. If the compare phase is successful it produces a join s →*_R · *←_R t with respect to R. The recall phase takes the history H produced by the record phase and the join produced by the compare phase and constructs the conversion s ↔*_E t with respect to E. Finally this conversion is given to the plant & grow phase which outputs an equational logic proof tree for E ⊢ s ≈ t.

  [Figure 3.3.: Flow chart for recording completion — the record, compare, recall, and plant & grow phases with their inputs (E, >, s ≈ t) and outputs (R, H, joining sequence, conversion, proof tree).]

3.5. Soundness of Recording Completion

In this section we regard an equational system E as a TRS whenever convenient (consequently we drop the condition that for a rewrite rule l → r all variables of r are contained in l and that l may not be a variable). Also for convenience we assume that the inference rules generate a fresh index such that it is the successor of the maximal index that has occurred so far. For soundness of our approach we have to show the following lemma.

Lemma 3.1. Let (E_0, ∅, H_0) be transformed into (E_n, R_n, H_n) by recording completion. Then the recall phase transforms any joining sequence using rules from R_n into a conversion using rules from E_0.

Proof (Sketch). Let I_0 be the set of all indices of E_0 and I the set of all indices of all E_j and R_j with 0 ≤ j ≤ n. We show the slightly stronger claim that any conversion over I is transformed into a conversion over I_0.

Now, consider a conversion

  t_1 ↔_{i_1} t_2 ↔_{i_2} ... ↔_{i_{m-1}} t_m

where the indices i_1, ..., i_{m-1} are in I, i.e., s ↔_i t holds if and only if the rule s →_i t or t →_i s is in R_j, or the equation s ≈_i t or t ≈_i s is in E_j, for some 0 ≤ j ≤ n. Let S be the multiset of indices in this conversion, S_0 the multiset of indices from E_0, and S′ = S \ S_0. We show the claim by induction on the multiset S′ where multisets are ordered by the multiset extension of >_N. In the base case S′ is empty and hence t_1 ↔*_{I_0} t_m, which shows the result. In the step case there exists an index l ∈ S′ such that

  t_1 ↔*_I t_{j-1} ↔_l t_j ↔*_I t_{j+1} ↔*_I t_m

for some 1 ≤ j < m. Now assume that l does not appear in H_n. Then it must have been deleted by the rule delete and hence t_{j-1} = t_j. This case is finished by applying the induction hypothesis to t_1 ↔*_I t_{j-1} ↔*_I t_{j+1} ↔*_I t_m. In the other case let l: s ⋈_p u ⋈′_k t ∈ H_n. Since l >_N p, k (by construction of the inference rules of recording completion) the induction hypothesis applies to

  t_1 ↔*_I t_{j-1} ⋈_p · ⋈′_k t_j ↔*_I t_{j+1} ↔*_I t_m

which concludes the proof.

Note that Lemma 3.1 does not require a successful run of recording completion, a fact which will be exploited in the next section.

3.6. Optimization

The procedure described in this chapter so far is not optimal. We want to discuss at least two aspects where we see potential for optimization: an increase in power for the whole procedure by overcoming unnecessary limitations, and a reduction in size of the resulting proof trees.

3.6.1. Compare on Demand

On the one hand it seems rather restrictive to only advance to the compare phase if we have a successful run. What if already one of our intermediate R_i is all we need to show two terms equal in the compare phase? To this end we suggest to modify the four phases of recording completion as described in the following (cf. Figure 3.4): We do not exhaustively apply the record phase. Instead each time an inference rule of recording completion yields a new R_i we immediately go to the compare phase and try to find a joining sequence with respect to this R_i. If

this succeeds we can go to the recall phase as above. Otherwise we have to check if R_i is complete. If this is the case then we know that s is not equal to t with respect to E. Alternatively we go back to the record phase and apply some more inference rules until we arrive at the next R_{i+1}; then we again go to the compare phase to find a joining sequence with respect to this new R_{i+1} and so on. Following this approach record and compare are no longer separate phases of recording completion but rather one interweaved record-and-compare phase.

  [Figure 3.4.: Flow chart for completion on demand — record and compare are interleaved: after each new R_i the compare phase is tried, and only if compare fails and R_i is complete do we conclude that s ≈ t does not follow from E.]

3.6.2. Proof Tree Size

Contrariwise we could try to somehow shrink the size of generated proof trees. For one possibility to do this look for example at the conversion generated in the recall phase of the above example:

  fgf ←_2 fggff →_1 fggf →_2 fg ←_2 fggf ←_2 fgggff →_1 fgggf →_2 fgg.

Here two of the terms, namely the two occurrences of fggf, are clearly the same. So the information in between those two does not gain us anything. But because the length of the conversion is longer than necessary, the final proof tree will also be bigger than necessary: the superfluous details bloat our proof. But we want to have short proofs. Thankfully such situations are easily detectable and then we can just cut out the superfluous part. If we do this in the above conversion we arrive at the following one

  fgf ←_2 fggff →_1 fggf ←_2 fgggff →_1 fgggf →_2 fgg

where only one occurrence of the term fggf is present. Now if we expand this for the plant & grow phase we get the following sequence of equations:

  fgf ←_2 fggff, fggff →_1 fggf, fggf ←_2 fgggff, fgggff →_1 fgggf, fgggf →_2 fgg

which ultimately yields the proof tree depicted in Figure 3.5, which is much smaller than the one in Figure 3.2.

  [Figure 3.5.: Optimized proof tree — the smaller equational logic proof tree for fgf ≈ fgg obtained from the shortened conversion.]

3.7. Chapter Notes

This chapter described in detail the main theoretical topic of this thesis: recording completion. To this end the inference rules for standard completion were augmented with a history component and then the four phases of recording completion were explained. First the history of the completion process is recorded. In a second step the two sides of a conjecture equation are compared and a joining sequence is generated if possible. The recall phase uses the history to transform the joining sequence with respect to the rewrite system into a conversion with respect to the initial set of equations. Finally from that conversion an equational logic proof tree is built automatically. A somewhat similar technique, called rewriting traces, for ordered completion works on first-order multi-sorted logic with equality and is described in [5]. Here an annotated version of the inference rules of ordered completion is used. These rules are able to build so called traces for a formal proof management system like Coq [22]. Roughly speaking the inference rules may be divided into rules which simplify equations or rules and those which create new facts. Application of simplification rules is recorded in the rewritten term itself as part of its history and new facts contain the step by which they were created as a trace. So in this approach equations are pairs of terms with a trace. Every term t has a current version t, an original version t_0, and a history, i.e., a sequence of rewrite steps to get from t_0 to t. This means the history is integrated into the term structure itself and not kept separately. For proof generation they use a list of lemmas mirroring the computation of critical pairs, which is comparable to the method described in the section on page 41. Also related is the work described in [8] and [7] which is part of the tool WALDMEISTER, an implementation of unfailing Knuth-Bendix completion with refinements towards ordered completion. Here a so called collected history is used in concert with unfailing completion. In the next chapter we will investigate model finding for counter models using matrix interpretations.

4. Model Finding

We can only decide the word problem for equational logic if recording completion was successful. But in the negative case we still may try to find a model for our given theory E which does not model s ≈ t. There are numerous ways to search for models. In this chapter we will see how matrix interpretations may be used to find counter models for equational logic. We will first introduce matrix interpretations and then give two examples of how to generate counter models from abstract interpretations with the help of an SMT solver.

4.1. Matrix Interpretations

A matrix interpretation M(d) with a fixed dimension d ∈ N \ {0} is a special kind of algebra over vectors in N^d. It maps each n-ary function symbol f ∈ F to a linear polynomial f_M(d) over n variables x_1, ..., x_n with coefficients F_i ∈ Z^(d×d) and some constant part f_0 ∈ Z^d. Thus we have

  f_M(d)(x_1, ..., x_n) = Σ_{i=1}^{n} F_i x_i + f_0 = F_1 x_1 + ... + F_n x_n + f_0

To obtain a counter model for E ⊭ s ≈ t we need to find an interpretation for the function symbols such that the left- and right-hand sides of all equations in E are equal but the interpretation of s is not equal to the interpretation of t. That is

  L_1 x_1 + ... + L_n x_n + l_0 = R_1 x_1 + ... + R_n x_n + r_0    for all l ≈ r ∈ E

and

  S_1 x_1 + ... + S_n x_n + s_0 ≠ T_1 x_1 + ... + T_n x_n + t_0    for s ≈ t.

Here the matrices L_i, R_i, S_i, and T_i are existentially quantified whereas the variables x_i are universally quantified. To compare two linear polynomials we just have to compare their coefficients. So we have:

  M(d) ⊨ s ≈ t  ⟺  (∀i: S_i = T_i) ∧ s_0 = t_0    (∗)

In our case the coefficients are matrices which we compare entry-by-entry. If all of the matrix entries for one coefficient are the same as the corresponding

entries on the other side, the coefficients are equal. If just one entry differs the coefficients are not equal. To find counter models we will start with an abstract matrix interpretation, establish the comparisons between all left- and right-hand sides and generate an SMT problem out of them. This we let check by an SMT solver to hopefully arrive at a result. Consider our running example E = {ff ≈ f, ggf ≈ g} and let's assume for a moment that recording completion was not successful. We now want to check whether E ⊨ f ≈ g. For the sake of simplicity we fix an abstract matrix interpretation of dimension 1 for f and g.

  f_M(1)(x_1) = f_0 x_1 + f_1
  g_M(1)(x_1) = g_0 x_1 + g_1

In this case the coefficients are just integers. If we apply this interpretation to the left- and right-hand sides of the equations in E and to f ≈ g we get:

  f_0 (f_0 x_1 + f_1) + f_1 = f_0 x_1 + f_1
  g_0 (g_0 (f_0 x_1 + f_1) + g_1) + g_1 = g_0 x_1 + g_1
  f_0 x_1 + f_1 ≠ g_0 x_1 + g_1

Which, using (∗), yields the following SMT problem:

  f_0 f_0 = f_0
  f_0 f_1 = 0
  g_0 g_0 f_0 = g_0
  g_0 g_0 f_1 + g_0 g_1 = 0
  ¬(f_0 = g_0 ∧ f_1 = g_1)

This formula is for example satisfied by the following assignment

  f_0 = g_0 = g_1 = 0    f_1 = 1

which yields the following concrete interpretations for f and g.

  f_M(1)(x_1) = 1
  g_M(1)(x_1) = 0

Now our three comparisons look as follows:

  ff ≈ f:    1 = 1
  ggf ≈ g:   0 = 0
  f ≈ g:     1 ≠ 0

which indeed is a counter model for our conjecture f ≈ g.
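Once the SMT solver returns an assignment, the counter-model conditions can be checked mechanically. The following Scala sketch (illustrative only; KBCV delegates the search itself to an SMT solver) evaluates a dimension-1 interpretation of string terms as a linear polynomial and applies the criterion (∗).

  object LinearCounterModel {
    // A dimension-1 interpretation assigns to each unary symbol a linear function a*x + b.
    type Interpretation = Map[Char, (Int, Int)]

    // Interpret a string term (read from the innermost symbol, i.e. from the right)
    // as a linear polynomial (coefficient, constant) in the single variable x.
    def poly(interp: Interpretation, term: String): (Int, Int) =
      term.foldRight((1, 0)) { case (sym, (a, b)) =>
        val (c, d) = interp(sym)        // sym is interpreted as c*x + d
        (c * a, c * b + d)              // c*(a*x + b) + d
      }

    // By (*), an equation holds iff coefficient and constant part agree.
    def holds(interp: Interpretation, l: String, r: String): Boolean =
      poly(interp, l) == poly(interp, r)

    // interp is a counter model for the conjecture s ≈ t with respect to E
    // iff every equation of E holds but s ≈ t does not.
    def isCounterModel(interp: Interpretation, e: List[(String, String)],
                       s: String, t: String): Boolean =
      e.forall { case (l, r) => holds(interp, l, r) } && !holds(interp, s, t)
  }

With interp = Map('f' -> (0, 1), 'g' -> (0, 0)), the model found above, isCounterModel(interp, List(("ff", "f"), ("ggf", "g")), "f", "g") evaluates to true.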

To also see a less trivial example, suppose we are looking for a matrix interpretation of dimension 2. Again we first fix the abstract matrix interpretation for f and g, which in this case is of dimension 2:

  f_M(2)(x_1) = [f_0 f_1; f_2 f_3] x_1 + [f_4; f_5]
  g_M(2)(x_1) = [g_0 g_1; g_2 g_3] x_1 + [g_4; g_5]

Next we apply the above interpretation to the left- and right-hand sides of the equations in E and to f ≈ g, which yields (writing F and G for the coefficient matrices and f = [f_4; f_5], g = [g_4; g_5] for the constant parts):

  F (F x_1 + f) + f = F x_1 + f
  G (G (F x_1 + f) + g) + g = G x_1 + g
  F x_1 + f ≠ G x_1 + g

If we simplify further and compare the coefficients component-wise we get the following SMT problem:

  f_0 f_0 + f_1 f_2 = f_0
  f_0 f_1 + f_1 f_3 = f_1
  f_2 f_0 + f_3 f_2 = f_2
  f_2 f_1 + f_3 f_3 = f_3
  f_0 f_4 + f_1 f_5 + f_4 = f_4
  f_2 f_4 + f_3 f_5 + f_5 = f_5
  g_0 g_0 f_0 + g_0 g_1 f_2 + g_1 g_2 f_0 + g_1 g_3 f_2 = g_0
  g_0 g_0 f_1 + g_0 g_1 f_3 + g_1 g_2 f_1 + g_1 g_3 f_3 = g_1
  g_2 g_0 f_0 + g_2 g_1 f_2 + g_3 g_2 f_0 + g_3 g_3 f_2 = g_2
  g_2 g_0 f_1 + g_2 g_1 f_3 + g_3 g_2 f_1 + g_3 g_3 f_3 = g_3
  g_0 g_0 f_4 + g_0 g_1 f_5 + g_0 g_4 + g_1 g_2 f_4 + g_1 g_3 f_5 + g_1 g_5 + g_4 = g_4
  g_2 g_0 f_4 + g_2 g_1 f_5 + g_2 g_4 + g_3 g_2 f_4 + g_3 g_3 f_5 + g_3 g_5 + g_5 = g_5
  ¬(f_0 = g_0 ∧ f_1 = g_1 ∧ f_2 = g_2 ∧ f_3 = g_3 ∧ f_4 = g_4 ∧ f_5 = g_5)

This formula is satisfiable and one possible solution would be

  f_0 = f_1 = f_2 = f_3 = g_0 = g_1 = g_2 = g_3 = g_5 = 0    f_4 = f_5 = 1    g_4 = 2

4.2 Chapter Notes

In this chapter we have briefly introduced matrix interpretations. Then we showed how we may use them to find counter models for word problem instances. To this end we first fix an abstract matrix interpretation of a certain dimension and then generate an SMT problem from the comparisons between all left- and right-hand sides of the instance with respect to this interpretation. This problem is given to an SMT solver to find the actual matrix interpretations for the counter model.

Matrix interpretations were first introduced in [6] to prove the termination of term rewrite systems. In our case the models we work with are infinite (models over the naturals or the integers). In general a variety of methods could be used to search for finite models of unsorted first-order logic clause sets. These methods can be roughly divided into MACE-style and SEM-style methods. The first of the two got its name from McCune's tool Models And CounterExamples [15]. In this approach the first-order logic clause sets together with a fixed domain size are first transformed into propositional logic. After that various enhanced techniques may be applied to reduce the problem size on the one hand and prune the search space on the other. Finally the problem is given to a SAT solver. The other group of methods is called SEM-style after Zhang and Zhang's tool System for Enumerating Models [26]. When using this method the problem is not transformed into a simpler logic; rather, a backtracking search together with methods to exploit equality is used to search for interpretations directly. Another technique called symmetry reduction is utilized to avoid the search for isomorphic models. A comparison of MACE-style and SEM-style methods and some improvements upon the former can be found in [4].

Finally we expect that SMT with the theory of uninterpreted functions over the integers or reals (UFNIA/UFLIA or UFNRA/UFLRA in SMT-COMP) might be useful for counter model generation in the future. Currently the tools do not (yet) seem very powerful on the problems emerging from equational logic.

The following chapter presents the completion tool KBCV and how recording completion was implemented in it.

5. Automation

On the theoretical side the aim of this thesis was to study the relationships between completion and equational logic. The practical aspect comprises the task of implementing a procedure that finds equational proof trees automatically with the help of recording completion (if successful, and otherwise tries to give a counter example). To this end we extended the Knuth-Bendix completion visualizer (KBCV) [19], in which completion was already implemented. In this chapter we give a short survey of the tool KBCV and then describe in some more detail how recording completion and model finding were implemented in it.

5.1 Implementation

The tool is implemented in Scala 2.9.1, an object-functional programming language which compiles to Java Byte Code. Therefore the tool runs on every system where a Java Virtual Machine is available. Originally KBCV was developed as an interactive completion tool to help students get hold of the basics of completion without worrying about all the book-keeping and the computation of overlaps.

To accommodate recording completion in KBCV several changes and improvements had to be made. To begin with, we split KBCV 1.0 into two packages. The Scala termlib is now available as a separate jar-package. In this package all of the basics like terms, substitutions, positions, rewriting, LPO, completion, etc. are implemented. We changed the data type of TRSs and ESs from List to HashMap to get indices for rules and equations. On top of that we built in the new inference rules of recording completion along with a data structure for history entries and some internal caches to boost performance. The termlib-package comprises about 1700 lines of Scala code and is currently available in version 1.2.

The actual KBCV now resides in its own package which depends on the termlib-package. Here, too, we overhauled the whole code base. The main work was to improve the automatic completion procedure so that it is actually able to complete useful systems, e.g., group theory. If the completion procedure is successful KBCV can output a proof tree for an equation if it exists and otherwise conclude that the given equation is not a consequence of the completed system. If completion is not successful and KBCV is not able (based on the current R_i) to find a proof tree for an equation with respect to the given ES, it uses an SMT solver which tries to find a counter model.
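The switch from List to HashMap mentioned above can be pictured as follows. The snippet is purely illustrative and does not reflect the actual termlib types or API:

  // Illustrative only, not the real termlib data structures: rules indexed by
  // an integer key instead of being kept in a plain List, so that history
  // entries (and the user) can refer to a rule by its index.
  import scala.collection.immutable.HashMap

  object IndexedRules extends App {
    case class Rule(lhs: String, rhs: String)        // terms abbreviated as strings here

    val trs: HashMap[Int, Rule] =
      HashMap(1 -> Rule("f(f(x))", "f(x)"),
              2 -> Rule("g(g(f(x)))", "g(x)"))

    println(trs(1))                                  // constant-time lookup by index
    println(trs - 2)                                 // removing a rule leaves the other indices untouched
  }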

  (VAR x y)
  (RULES
    f(x,c) -> x
    f(x,g(y)) -> g(f(x,y))
  )

  (a) The old TRS-format.

  <?xml version="1.0" encoding="utf-8"?>
  <trs>
    <rules>
      <rule>
        <lhs>
          <funapp>
            <name>f</name>
            <arg><var>x</var></arg>
            <arg><funapp><name>c</name></funapp></arg>
          </funapp>
        </lhs>
        <rhs><var>x</var></rhs>
      </rule>
    </rules>
  </trs>

  (b) The new XML-format.

  cnf(name,role,(f(x,c) = x)).
  cnf(name,role,(f(x,g(y)) = (g(f(x,y))))).
  cnf(name,conjecture,(f(x) = (g(x)))).

  (c) The TPTP-format.

Figure 5.1.: Supported file formats for KBCV.

Besides the stand-alone version of KBCV there is also a Java-Applet version available online. The stand-alone version has three different modes: the text mode, where one interacts with KBCV via the console, the graphic mode, using a graphical user interface (cf. Appendix B) implemented in java.swing, and the hybrid mode, where the text mode and the graphic mode are combined. In text mode typing help yields a list of all available commands (cf. Appendix A), whereas in graphic (and hybrid) mode or the Java-Applet you can select Help > User Manual to get a description of the user interface. The stand-alone version of KBCV is able to call third-party termination checkers whereas the Java-Applet version is limited to the internal LPO (Section 5.1.3) for termination proofs. Also calls to MiniSmt [25] to find counter models are only possible in the stand-alone version.

5.1.1 Input Formats

As input KBCV supports the XML-format for TRSs and also a subset of the older TRS-format. (Only one VAR and one RULES section are allowed, in this order. No theory or strategy annotations are supported.) In both cases rules are interpreted as equations. A third option is a subset of the TPTP-format. (The names and formula roles are ignored and annotations are not allowed. Only one conjecture, at the end of the file, is allowed.)
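For instance, the running example E = {ff ≈ f, ggf ≈ g} from the previous chapters could be written in the old TRS-format as follows (this concrete file is our own illustration; as stated above, the rules are read as equations):

  (VAR x)
  (RULES
    f(f(x)) -> f(x)
    g(g(f(x))) -> g(x)
  )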

Examples for the three supported formats are given in Figure 5.1. In Figure 5.1(c) the third line is a conjecture which is loaded into KBCV to be checked with the new equational logic facilities.

In addition KBCV supports another file format for the export and import of command logs, to save and load user-specific settings of KBCV. This format lists all executed commands within KBCV in a human-readable form, like:

  load ../examples/gene.trs
  orient > 1
  simplify
  ...

Saving the current command log is done via File > Export Command Log and loading works alike (File > Load Command Log). We call this format the KCL-format (for KBCV Command Log). For the corresponding commands in text mode see Appendix A. Command logs saved in the file .kbcvinit are loaded automatically on program start-up. If you want to start KBCV with another KCL-file (for example logfile.kcl) just invoke it like this:

  ./kbcv logfile.kcl

Here the file ending kcl is essential, otherwise KBCV would not know what to do with the file.

5.1.2 Installation

The current version of KBCV is 1.8 and it is available under the GNU Lesser General Public Licence version 3 on the following website:

To use KBCV just download it from the link above. Also make sure you have Scala already installed on your system. Change into your download directory and type:

  tar xfz kbcv-1.8.tar.gz
  cd kbcv-1.8
  ./kbcv

If you want to put the process running KBCV in the background use the -g flag, like:

  ./kbcv -g &

A bundled version which comes with the termination tool TTT2 [13], the SMT solver MiniSmt [25], and the needed Scala libraries included is also available on the website. If you do not want to install anything you can use the applet version of the tool online at:

5.1.3 Termination Checks

By default KBCV uses an internal implementation of LPO to check for termination of rewrite rules. In graphic mode one can select Settings > Termination Method > Termination Prover to tell KBCV to call an external termination checker for the needed termination checks during the completion process. This will open a new dialog window where one may enter the name of an external tool and the command line arguments which should be passed to it. The default here is to call TTT2 as follows:

  ./ttt2 -cpf xml - 1

In text mode you can set e.g. TTT2 as termination tool by the following command:

  kbcv> set term tool ./ttt2 -cpf xml - 1

Note however that even if the termination method is set to use an external tool, KBCV (for efficiency reasons) will always first try to show termination by its internal LPO and only call the external tool if this is not successful.

Internal LPO

The implementation of LPO in KBCV checks the cases (1)-(3) described on page 8 in the following order: (1), (3), and last (2). Note that backtracking is only employed for single rules (for efficiency reasons). Because of the restricted backtracking our implementation of LPO is not complete. This is shown by the following example:

Example 5.1. Because of the above order the rule f(a) → b will always yield the precedence a > b and hence our implementation of LPO cannot handle the TRS

  1: f(a) → b
  2: b → a

since the first rule yields a > b and the second one b > a. Clearly an implementation with full backtracking would yield the precedence f > b > a, where f(a) → b is oriented by (2). However, for the system

  1: b → a
  2: f(a) → b

KBCV finds the desired precedence since the first rule gives b > a and now case (3) is not successful on f(a) → b and hence case (2) is considered.

As you can see from this example the order in which the internal LPO considers rules has an effect on the result. Internally rules are ordered by their index.
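For reference, the following Scala sketch shows a textbook-style LPO check over a fixed precedence. It is purely illustrative: KBCV's internal LPO additionally has to search for a precedence (with the restricted backtracking discussed above), and the case split below need not coincide with the numbering (1)-(3) used in the text.

  // Textbook-style LPO with a *fixed* precedence (illustrative only, not KBCV code).
  object LpoSketch {
    sealed trait Term
    case class Var(name: String) extends Term
    case class Fun(f: String, args: List[Term] = Nil) extends Term

    def occurs(x: String, t: Term): Boolean = t match {
      case Var(y)       => x == y
      case Fun(_, args) => args.exists(occurs(x, _))
    }

    // s >_lpo t for a strict precedence gt on function symbols
    def lpo(gt: (String, String) => Boolean)(s: Term, t: Term): Boolean = (s, t) match {
      case (Var(_), _)              => false                     // a variable is never greater
      case (Fun(_, _), Var(y))      => occurs(y, s)              // s > y iff y occurs in s
      case (Fun(f, ss), Fun(g, ts)) =>
        ss.exists(si => si == t || lpo(gt)(si, t)) ||            // some argument of s is >= t
        (gt(f, g) && ts.forall(tj => lpo(gt)(s, tj))) ||         // f > g and s > all args of t
        (f == g && ts.forall(tj => lpo(gt)(s, tj)) &&
          lex(gt)(ss, ts))                                       // equal roots: compare arguments lexicographically
    }

    def lex(gt: (String, String) => Boolean)(ss: List[Term], ts: List[Term]): Boolean =
      (ss, ts) match {
        case (s :: sr, t :: tr) => if (s == t) lex(gt)(sr, tr) else lpo(gt)(s, t)
        case _                  => false
      }

    // Example 5.1 with the full precedence f > b > a: both rules can be oriented.
    val prec = Set(("f", "b"), ("f", "a"), ("b", "a"))
    val gt   = (x: String, y: String) => prec((x, y))
    val a    = Fun("a"); val b = Fun("b")

    def main(args: Array[String]): Unit = {
      println(lpo(gt)(Fun("f", List(a)), b))                     // f(a) > b : true
      println(lpo(gt)(b, a))                                     // b > a    : true
    }
  }

With the precedence f > b > a both rules of Example 5.1 are oriented, which is exactly what the restricted implementation misses when it processes f(a) → b first.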

Additionally KBCV features a flag incremental for LPO. If set (which is the default) this flag changes the behavior of the internal LPO as follows: if in an earlier stage of the completion process the precedence between some function symbols was fixed, then in later stages this information is taken into account and the new precedence builds upon the old one. This is mainly useful if one wants to fix the precedence for some symbols beforehand and only after that wants KBCV to compute the rest. To do this a user may enter some precedence to be used by LPO manually (by typing set lpoprec x > y, z > w, ... in text mode, or by going to the field labeled LPO Precedence, entering some precedence, and pressing Enter in graphic mode). Depending on the incremental flag KBCV may or may not find a fitting precedence for a certain system. To switch to the non-incremental version of LPO type unset inc in text mode or go to Settings > Termination Method > LPO. When using automatic mode LPO is always set to non-incremental (see also the previous section).

5.2 Features

The reason for KBCV to be implemented in the first place was the need for an interactive completion tool to help students learn about completion in a straightforward way. So this interactive completion procedure is the central feature of the tool. Since then we have also improved the automatic completion procedure a lot, so that it can now compete with state-of-the-art completion tools (cf. Chapter 6). A new feature is to find equational logic proofs automatically with the help of the newly implemented recording completion. If KBCV is not able to establish a proof then it calls MiniSmt to search for counter models (see Chapter 4) to possibly refute the initial conjecture. Finally all completion-based proofs and disproofs are exportable in a certifiable format. These four features are described in the following sections.

5.2.1 Completion

The graphical user interface of KBCV offers two views for completion, namely the normal view and the expert view (both described below). The user can change the view via the menu entry View at any time (see Figure 5.2). Regardless of the chosen view, termination checks are performed automatically, following the recent approach from [24]. This means we actually do not work on triples (E, R, H) in KBCV but rather on quadruples (E, R, C, H). In Chapter 3 we ignored the C component because it is not important for understanding recording completion and would blow up the inference rules needlessly. By default an incremental LPO is constructed and maintained by the tool, but external termination tools are also supported (this option is not available in the applet version). For convenience KBCV stores a command history (not to be confused with the history described on page 14) that allows the user to step backwards (and forwards again) in interactive completion proofs. The tool may also be used in automatic mode, i.e., directly from the command line. A possible call would look as follows:

  ./kbcv -a -x -p -xsl -s 60 -m ttt2 test.xml

Here the -a flag initiates automatic mode. The next flag -x suppresses the answer from KBCV (which would be YES, NO, or MAYBE). The third flag -p tells KBCV to output the certifiable completion proof, if completion is successful. The option -xsl includes the link to the standard xsl-file in the proof output, and -s 60 and -m ttt2 are the timeout in seconds and the termination-check method to use (in this case calls to TTT2), respectively. The filename at the end is the input equational system. The automatic completion works as described in the subsequent paragraph Automatic Completion. For a full list of all command line flags type:

  ./kbcv -h

Figure 5.2.: Button layout for the two different views ((a) expert view, (b) normal view).

Normal View

In normal view the user stepwise executes the inference rules from Figure 2.2 in a fixed order. The user can switch between efficient and simple completion. The efficient procedure executes all inference rules from Figure 2.2, while the simple procedure considers a subset only.

Efficient Completion

The efficient completion procedure (following Huet [9], see Figure 5.3) takes a set of equations E as input and has three possible outcomes: it may terminate successfully, it may loop indefinitely, or it may fail because an equation could not be oriented into a rewrite rule. While E ≠ ∅ the user chooses an equation s ≈ t from E. The terms in this equation are simplified to normal form by using SIMPLIFY exhaustively. In the next step the equation is deleted if it was trivial, and if so the next iteration of the loop starts. Otherwise (following the transition labeled NO) the user suggests the orientation of the equation into a rule and ORIENT performs the necessary termination check. Here the procedure might fail if the equation cannot be oriented (in either direction) with the used termination technique.

Figure 5.3.: Flow chart for the efficient completion procedure.

But if the orientation succeeds the inferred rule is used to reduce the right-hand sides of (other) rules to normal form (COMPOSE), while COLLAPSE rewrites the left-hand sides of rules, which transforms rules into equations that go back to E. In this way the set of rules in R is kept as small as possible at all times. Afterwards DEDUCE is used to compute (all) critical pairs (between the new rule and the old rules and between the new rule and itself). If still E ≠ ∅ the next iteration of the loop begins and otherwise the procedure terminates successfully, yielding the terminating and confluent (complete) TRS R equivalent to the input system E.

Simple Completion

The simple procedure (following the basic completion procedure [1, Figure 7.1]) makes no use of COMPOSE and COLLAPSE, which means that the inference rule DEDUCE immediately follows ORIENT. Hence, although correct, this procedure is not particularly efficient.

Expert View

In the expert view the user can select the equations and rewrite rules to which the desired inference rules from Figure 2.2 should be applied. If no equations/rules are selected explicitly then all equations/rules are considered. For efficiency reasons DEDUCE only adds critical pairs emerging from overlaps that have not yet been considered. KBCV notifies the user if a complete R equivalent to the input E is obtained.

Automatic Completion

At any stage of the process the user can press the button Completion, which triggers the automatic mode of KBCV where it applies the inference rules according to the loop in Figure 5.4. Pressing the button again (during the completion attempt; the label has changed to Stop) stops the automatic mode and shows the current state (of the selected thread, see below). It is also possible to specify an upper limit on the loops performed in Figure 5.4 (Settings > Automatic Completion). This is especially useful to step through a completion proof with limit 1. Automatic completion is also started if KBCV is invoked with the -a flag as described above.

Figure 5.4.: Flow chart for the automatic mode.

In Figure 5.4 the rules SIMPLIFY and DELETE operate on all equations and are applied exhaustively. If E = ∅ then R is locally confluent (since the previous DEDUCE considered all remaining critical pairs) and the procedure successfully terminates. Note that in contrast to the completion procedure from Figure 5.3 the automatic mode postpones the choice of the equation s ≈ t. Hence KBCV can choose an equation of minimal length after simplification (which is typically beneficial for the course of completion) for the rule ORIENT. To maximize power, KBCV executes two threads in parallel which have different behavior for ORIENT. The first thread prefers to orient equations from left-to-right and if this is not possible it tries a right-to-left orientation (the second thread behaves dually). If this also fails another equation is selected in the next turn. (Note that it is possible that some later equation can be oriented which then simplifies the problematic equation such that it can be oriented or deleted.) A thread fails if no equation in E can be oriented in the ORIENT step.
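The control flow of Figure 5.4 can also be summarized as code. The following Scala skeleton is only an illustration (the inference steps are passed in as abstract functions and all names are ours, not KBCV's); in particular it gives up on the first orientation failure instead of trying the remaining equations as the real implementation does.

  // Illustrative skeleton of the automatic completion loop (not KBCV code).
  object CompletionLoop {
    type Eq[T]   = (T, T)
    type Rule[T] = (T, T)

    def complete[T](
        es0: Set[Eq[T]],
        simplify: (Set[Eq[T]], Set[Rule[T]]) => Set[Eq[T]],             // SIMPLIFY + DELETE, exhaustively
        orient:   Eq[T] => Option[Rule[T]],                              // ORIENT (tries both directions)
        compose:  (Set[Rule[T]], Rule[T]) => Set[Rule[T]],               // COMPOSE
        collapse: (Set[Rule[T]], Rule[T]) => (Set[Rule[T]], Set[Eq[T]]), // COLLAPSE: kept rules, equations back to E
        deduce:   (Set[Rule[T]], Rule[T]) => Set[Eq[T]],                 // DEDUCE: new critical pairs
        size:     Eq[T] => Int): Option[Set[Rule[T]]] = {
      var es = es0
      var rs = Set.empty[Rule[T]]
      while (es.nonEmpty) {
        es = simplify(es, rs)
        if (es.nonEmpty) {
          val e = es.minBy(size)               // equation of minimal length after simplification
          orient(e) match {
            case None => return None           // simplification: the real loop tries other equations first
            case Some(r) =>
              val rs1         = compose(rs, r)
              val (rs2, back) = collapse(rs1, r)
              val newRules    = rs2 + r
              es = (es - e) ++ back ++ deduce(newRules, r)
              rs = newRules
          }
        }
      }
      Some(rs)                                 // E is empty: R is terminating and locally confluent
    }
  }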

5.2.2 Equational Logic Proofs

One of the newest features of KBCV is the automatic generation of equational proofs and disproofs. Here we implemented the theory described in Chapter 3 together with the optimization discussed in that chapter. If you want to check whether an equation s ≈ t is a consequence of the initial set of equations E_0 the course of action is as follows: In graphic mode select File > Equational Proof. A new dialog window will open where you can input the conjecture you want to check. In text mode just input

  kbcv> apel t = s

where apel stands for automatic proof in equational logic.

Figure 5.5.: Equational Proof dialog window showing a counter model.

Now KBCV will use the current system of rules R and try to find a joining sequence between s and t with respect to it. If this is successful, a linearized version of the found equational logic proof tree is shown in the lower half of the dialog window. One can also have a look at the underlying conversion by pressing the Conversion tab. In text mode KBCV will just announce "a proof tree was found for equation s = t" and you can issue the command

  kbcv> showpt

to have a look at the current proof tree. On the other hand, if KBCV is not able to find a proof tree and R is not complete, it searches for counter models as described in the next section. If this does not succeed KBCV will announce "It could not be determined if the equation is a consequence of the initial set of equations" and leave it at that. Otherwise it will show the found counter model in the lower half of the dialog window in graphic mode (see Figure 5.5). In text mode it will just print out "a counter model was found!" and you have to type the command

  kbcv> showcm

to have a look at the found counter model.

5.2.3 Counter Models

In KBCV we implemented the search for counter models as described in Chapter 4. The procedure is depicted in Figure 5.6. It takes as inputs E_0 and the conjecture s ≈ t and constructs an abstract matrix interpretation of dimension d = 1 for M(d) ⊨ E_0 and M(d) ⊭ s ≈ t. This interpretation is translated into an SMT problem and given to MiniSmt. KBCV invokes MiniSmt using the following parameters:
