
Inference Systems for Binding Time Analysis

Kirsten Lackner Solberg
Dept. of Math. and Computer Science
Odense University, Denmark

June 21, 1993

Contents

1 Introduction
2 Review of Binding Time Analysis
  2.1 Well-formedness of types
  2.2 Well-formedness of expressions
  2.3 The algorithms for binding time analysis
    2.3.1 An algorithm for binding time analysis of types
    2.3.2 An algorithm for binding time analysis of terms
3 A Constraint based Binding Time Analysis
  3.1 Types and their well-formedness
  3.2 Expressions and their well-formedness
    3.2.1 The assumption list
    3.2.2 The well-formedness of expressions
    3.2.3 Properties of the well-formedness relation
4 Incorporating [up] and [down]
  4.1 [up] and [down] on function types
  4.2 [up] and [down] on non-function types
  4.3 The [up-down]-rule
  4.4 Making the [up-down]-rule implicit
5 Generating the Constraint Set
  5.1 Properties of the algorithms
6 Solving the Constraint Set
  6.1 Properties of the algorithms
7 Recursion, Constants and Lists
  7.1 Recursion and constants
    7.1.1 A constraint based binding time analysis
    7.1.2 Incorporating [up] and [down]
    7.1.3 Generating the constraint set
  7.2 Lists
    7.2.1 A constraint based binding time analysis
    7.2.2 Incorporating [up] and [down]
    7.2.3 Generating the constraint set
8 Conclusion
A Proofs
  A.1 Proofs from Section 3
    A.1.1 Lemma 3
    A.1.2 Lemma 4
    A.1.3 Lemma 6
    A.1.4 Lemma 7
    A.1.5 Proposition 8
    A.1.6 Proposition 9
  A.2 Proofs from Section 4
    A.2.1 Lemma 11
    A.2.2 Lemma 12
    A.2.3 Lemma 13
    A.2.4 Lemma 14
    A.2.5 Lemma 15
    A.2.6 Lemma 16
    A.2.7 Lemma 17
    A.2.8 Lemma 18
  A.3 Proofs from Section 5
    A.3.1 Lemma 23
    A.3.2 Lemma 24
  A.4 Proofs from Section 6
    A.4.1 Fact used in the proofs of Lemmas 29 and 30
    A.4.2 Lemma 29
    A.4.3 Lemma 30
B A Miranda implementation of the algorithms
  B.1 The T2 types
  B.2 The E2 terms
  B.3 Collecting the constraints
    B.3.1 Substitutions
    B.3.2 Unification
    B.3.3 The algorithm L
  B.4 Solving the constraints
  B.5 An example

1 Introduction

We consider the problem of introducing a distinction between binding times (e.g. compile-time and run-time) into functional languages. It is well-known that such a distinction is important for the efficient implementation of imperative languages [1], and more recent results show that the performance of functional languages may be improved by using binding time information (e.g. [11, 6]).

There are several approaches to the specification of binding time analysis. Some approaches are based on variants of abstract interpretation (e.g. [2, 3, 5]), others are based on projection analysis (e.g. [7]), and yet others (e.g. [9, 10]) use non-standard type systems and develop corresponding type inference algorithms. In this paper we shall take a logical approach and aim at constructing an algorithm for generating a set of constraints to be solved. In this way we will be able to make full use of substitutions as in ordinary type inference [8]; this is contrary to other algorithms (e.g. [9]) where extra recursive calls have to be performed.

The starting point for our work is the inference system for binding times of the simply typed λ-calculus as specified in [9]. This is reviewed in Section 2. However, we shall reformulate it in a style motivated by the inference system for linear types given in [13], using the so-called "use" types. This is described in Section 3. We construct an algorithm for binding time analysis from this inference system. This algorithm is O(n^4), where n is the size of the given term, whereas the algorithm of [9] is exponential in the size of the term.

We proceed in a couple of stages. First we get rid of the two rules [up] and [down] to get a simpler inference system. This is done in Section 4. In Section 5 we present an algorithm for finding the constraints that have to be fulfilled in order to turn a 1-level term into a term in the 2-level λ-calculus, and in Section 6 we solve the constraints. In Section 7 we extend the types with lists and the terms with recursion and constants. Section 8 concludes.

[A]   ⊢0 A̲_i : r
[×]   ⊢0 t1 : r    ⊢0 t2 : r   ⟹   ⊢0 t1 ×̲ t2 : r
[→]   ⊢0 t1 : r    ⊢0 t2 : r   ⟹   ⊢0 t1 →̲ t2 : r

[A]   ⊢0 A_i : c
[×]   ⊢0 t1 : c    ⊢0 t2 : c   ⟹   ⊢0 t1 × t2 : c
[→]   ⊢0 t1 : c    ⊢0 t2 : c   ⟹   ⊢0 t1 → t2 : c

[up]  ⊢0 t1 →̲ t2 : r   ⟹   ⊢0 t1 →̲ t2 : c

Figure 1: Well-formedness of the 2-level types

2 Review of Binding Time Analysis

In this section we review the binding time analysis of Nielson and Nielson ([9]). In a 2-level λ-calculus the binding times are explicitly marked on each construction. For us a type of the 2-level λ-calculus is either a base type, a function type, or a product type

t ∈ T2    t ::= A_i | A̲_i | t × t | t ×̲ t | t → t | t →̲ t

where the A_i's are the base types. Here the underlined constructions are those of run-time kind and the non-underlined are those of compile-time kind. The terms are

e ∈ E2    e ::= ⟨e, e⟩ | ⟨e, e⟩̲ | fst e | fst̲ e | snd e | snd̲ e | λx.e | λ̲x.e | e(e) | e(e)̲ | x

Notice that there is only one sort of variable, x. The overall binding time of a variable is determined by the λ-binding of it.

2.1 Well-formedness of types

We first introduce rules for annotating types. First we say that a type t is well-formed of binding time b, where b is either r or c denoting run-time and compile-time respectively, if ⊢0 t : b. This well-formedness relation is given in Figure 1. A run-time function type can be thought of as a piece of code. The compiler, which generates code, can manipulate this piece of code. Therefore a run-time function type can be both of run-time kind and compile-time kind. This fact is expressed by the rule [up], which allows us to turn a run-time function type of kind run-time into a run-time function type of kind compile-time. Only the [up] rule allows us to transform a run-time type into a compile-time type, and furthermore this is only possible for function types.
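Read operationally, Figure 1 is a simple recursive check. The fragment below shows one way to phrase it; it is a Haskell-style sketch rather than the Miranda used in Appendix B, and the datatype and function names (BT, T2, wf) are ours, not the thesis's:

  -- Binding times: run-time and compile-time.
  data BT = R | C deriving (Eq, Show)

  -- 2-level types; the binding time on each constructor says whether the
  -- construction is underlined (R, run-time) or not (C, compile-time).
  data T2 = TBase BT String   -- A_i, possibly underlined
          | TProd BT T2 T2    -- t x t, possibly underlined
          | TFun  BT T2 T2    -- t -> t, possibly underlined
          deriving (Eq, Show)

  -- wf t b  checks whether  |-0 t : b  is derivable from Figure 1.
  wf :: T2 -> BT -> Bool
  wf (TBase b _)     k = b == k
  wf (TProd b t1 t2) k = b == k && wf t1 b && wf t2 b
  wf (TFun  b t1 t2) k =
       (b == k || (b == R && k == C))   -- [up]: a run-time function type also has compile-time kind
    && wf t1 b && wf t2 b

With this reading, Example 1 below amounts to evaluating wf on the displayed type at compile-time kind.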

Example 1 An example of using Figure 1 is to show that the type ((A → A) → (A → A)) → ((A → A) → (A → A)) is well-formed of compile-time kind for some base type A. First we have that ⊢0 A : r from [A]. From [→] we get

⊢0 A : r    ⊢0 A : r   ⟹   ⊢0 A → A : r    (1)

Applying [→] to two copies of (1) we get

⊢0 A → A : r    ⊢0 A → A : r   ⟹   ⊢0 (A → A) → (A → A) : r

Now we apply [up] to get the binding time c

⊢0 (A → A) → (A → A) : r   ⟹   ⊢0 (A → A) → (A → A) : c    (2)

From (1) we get by applying [up]

⊢0 A → A : r   ⟹   ⊢0 A → A : c    (3)

Now apply [→] to two copies of (3) to get

⊢0 A → A : c    ⊢0 A → A : c   ⟹   ⊢0 (A → A) → (A → A) : c    (4)

The result is now obtained by using [→] to combine (2) and (4)

⊢0 (A → A) → (A → A) : c    ⊢0 (A → A) → (A → A) : c   ⟹   ⊢0 ((A → A) → (A → A)) → ((A → A) → (A → A)) : c

2.2 Well-formedness of expressions

Next we say that the term e has type t and binding time b under the assumptions tenv if tenv ⊢0 e : t : b. The type environment tenv is a function from variables to 2-level types and binding times. That is, tenv x = (t, b) where t is the type of the variable x and b is the binding time of x. Given tenv, the function tenv[(t, b)/x] is defined by

(tenv[(t, b)/x]) y = (t, b),   if x = y
(tenv[(t, b)/x]) y = tenv y,   otherwise

The well-formedness relation for 2-level terms is given in Figure 2. Basically we have two copies of the traditional inference system for typing the λ-calculus, one for the run-time

level and one for the compile-time level. Furthermore we have the two rules [up] and [down], allowing the two binding times to mix. The idea behind [down] is that in order to turn a compile-time term into a run-time term (i.e. to allow it to be evaluated at run-time) it has to express some computation, i.e. it must have a run-time function type. The idea behind [up] is that in order to turn a run-time term into a compile-time term (i.e. to talk about its evaluation at compile-time) its type must not only be a run-time function type, but the term is also not allowed to reference "free" run-time objects.

Example 2 As an example of using Figure 2 we show that the term λx.λy.x(y) with type ((A → A) → (A → A)) → ((A → A) → (A → A)) is well-formed of compile-time kind for some base type A. Let tenv be given by

tenv x = ((A → A) → (A → A), c)
tenv y = (A → A, c)
tenv z = undefined if z ≠ x and z ≠ y

From [x] we get tenv ⊢0 x : (A → A) → (A → A) : c and tenv ⊢0 y : A → A : c. Applying [down] on both of them gives

tenv ⊢0 x : (A → A) → (A → A) : c   ⟹   tenv ⊢0 x : (A → A) → (A → A) : r

and

tenv ⊢0 y : A → A : c   ⟹   tenv ⊢0 y : A → A : r

Now we can apply [()] to get

tenv ⊢0 x : (A → A) → (A → A) : r    tenv ⊢0 y : A → A : r   ⟹   tenv ⊢0 x(y) : A → A : r

Since tenv only contains variables of compile-time kind and the type of x(y) is a run-time function type of run-time kind, it is possible to apply [up] to obtain

tenv ⊢0 x(y) : A → A : r   ⟹   tenv ⊢0 x(y) : A → A : c

After applying [λ] two times we obtain the desired result

tenv ⊢0 x(y) : A → A : c
tenv′ ⊢0 λy.x(y) : (A → A) → (A → A) : c
tenv″ ⊢0 λx.λy.x(y) : ((A → A) → (A → A)) → ((A → A) → (A → A)) : c

[x]     tenv ⊢0 x_i : t : b,   if tenv x_i = (t, b) ∧ ⊢0 t : b

[⟨⟩]    tenv ⊢0 e1 : t1 : r    tenv ⊢0 e2 : t2 : r   ⟹   tenv ⊢0 ⟨e1, e2⟩ : t1 × t2 : r
[⟨⟩]    tenv ⊢0 e1 : t1 : c    tenv ⊢0 e2 : t2 : c   ⟹   tenv ⊢0 ⟨e1, e2⟩ : t1 × t2 : c

[fst]   tenv ⊢0 e : t1 × t2 : r   ⟹   tenv ⊢0 fst e : t1 : r
[fst]   tenv ⊢0 e : t1 × t2 : c   ⟹   tenv ⊢0 fst e : t1 : c

[snd]   tenv ⊢0 e : t1 × t2 : r   ⟹   tenv ⊢0 snd e : t2 : r
[snd]   tenv ⊢0 e : t1 × t2 : c   ⟹   tenv ⊢0 snd e : t2 : c

[λ]     tenv[(t2, r)/x] ⊢0 e : t1 : r   ⟹   tenv ⊢0 λx.e : t2 → t1 : r,   if ⊢0 t2 : r
[λ]     tenv[(t2, c)/x] ⊢0 e : t1 : c   ⟹   tenv ⊢0 λx.e : t2 → t1 : c,   if ⊢0 t2 : c

[()]    tenv ⊢0 e1 : t2 → t1 : r    tenv ⊢0 e2 : t2 : r   ⟹   tenv ⊢0 e1(e2) : t1 : r
[()]    tenv ⊢0 e1 : t2 → t1 : c    tenv ⊢0 e2 : t2 : c   ⟹   tenv ⊢0 e1(e2) : t1 : c

[down]  tenv ⊢0 e : t : c   ⟹   tenv ⊢0 e : t : r,   if ⊢0 t : r
[up]    tenv ⊢0 e : t : r   ⟹   tenv ⊢0 e : t : c,   if ⊢0 t : c ∧ ∀i. tenv x_i = (t_i, c)

Figure 2: Well-formedness of the 2-level λ-calculus

where tenv′ and tenv″ are given by

tenv′ x = ((A → A) → (A → A), c)
tenv′ z = undefined if z ≠ x

and

tenv″ z = undefined for all variables

This inference system is part of the one used in [9] to construct a binding time analysis.

2.3 The algorithms for binding time analysis

In [9] the algorithm for binding time analysis is in two parts, one for types and one for terms.

2.3.1 An algorithm for binding time analysis of types

The algorithm T_BTA for binding time analysis of types presented in [9] calculates an annotated type t and its overall binding time b (r or c) given a type t0 and the overall binding time b0 of the type. The calculated type is the type with as few underlined constructions as possible, and it is well-formed of kind b (i.e. ⊢0 t : b can be inferred from Figure 1). This annotation expresses that as many computations as possible are performed at compile-time.

For base types: if the type is underlined and the overall binding time is r, or if the type is not underlined and the overall binding time is c, then the type is already well-formed. For an underlined type and overall binding time c, or a non-underlined type and overall binding time r, the well-formed type is A̲_i with overall binding time r. This can be summarized by:

T_BTA[[ A_i : c ]] = A_i : c
T_BTA[[ A̲_i : r ]] = A̲_i : r
T_BTA[[ A_i : r ]] = A̲_i : r
T_BTA[[ A̲_i : c ]] = A̲_i : r

Writing A_i^b we mean A_i if b is c and A̲_i if b is r. The set {r, c} forms a partially ordered set with the partial order given by r ≤ c. Now T_BTA for base types can be written as

T_BTA[[ A_i^b1 : b2 ]] = let b = b1 ⊓ b2 in A_i^b : b

where ⊓ is the meet operation on ({r, c}, ≤). For product types it is a bit more complicated: if the two subtypes do not have the same overall binding time after the recursive call, then a new recursive call will be performed. Writing ×^b we mean × if b is c and ×̲ if b is r.

T_BTA[[ t1 ×^b1 t2 : b2 ]] =
    let t1′ : b1′ = T_BTA[[ t1 : b1 ⊓ b2 ]]
        t2′ : b2′ = T_BTA[[ t2 : b1 ⊓ b2 ]]
        b = b1′ ⊓ b2′
    in if b1′ = b2′ then t1′ ×^b1′ t2′ : b1′
       else T_BTA[[ t1′ ×^b t2′ : b ]]

For function types the definition is analogous to product types, except that the [up]-rule of Figure 1 has to be taken into account. Writing →^b we mean → if b is c and →̲ if b is r.

T_BTA[[ t1 →^b1 t2 : b2 ]] =
    let t1′ : b1′ = T_BTA[[ t1 : b1 ⊓ b2 ]]
        t2′ : b2′ = T_BTA[[ t2 : b1 ⊓ b2 ]]
        b = b1′ ⊓ b2′
    in if b1′ = b2′ then t1′ →^b1′ t2′ : b2
       else T_BTA[[ t1′ →^b t2′ : b2 ]]

The difference between product types and function types is in the result; for product types the overall binding time of the two subtypes and the product is the same, whereas for function types the subtypes have the same overall binding time, but this need not be the overall binding time of the function type. This corresponds to the application of the rule [up].

It is shown in [9] that for all types t in T2 and binding times b, ⊢0 T_BTA[[ t : b ]] can be inferred in Figure 1, and furthermore T_BTA[[ t : b ]] ≤ t : b holds and T_BTA[[ t : b ]] is the biggest among those t′ : b′ satisfying t′ : b′ ≤ t : b. Here the partial order ≤ is extended to pairs t : b by defining t′ : b′ ≤ t : b to mean t′ ≤ t ∧ b′ ≤ b. On types, ≤ is defined as: t′ ≤ t if and only if t′ and t are the same type if we ignore the underlinings, and every underlining in t is also in t′.

2.3.2 An algorithm for binding time analysis of terms

The algorithm E_BTA for binding time analysis of terms presented in [9] calculates an annotated term e, its type t, and its overall binding time b, given a term e0, a type t0 and an overall binding time b0. The annotated term is the term with as few underlined constructions as possible, and it is well-formed of type t and binding time b (i.e. ⊢0 e : t : b can be inferred from Figure 2).

When doing the analysis the variables will be annotated with their types and binding times to make the formulation easier. Writing x^b we mean x if b is c and x̲ if b is r.

For pairs the analysis is straightforward: perform recursive calls on the subterms, and if they do not have the same overall binding time, then try with a "smaller" binding time. Writing ⟨^b , ⟩ we mean ⟨ , ⟩ if b is c and ⟨ , ⟩̲ if b is r.

E_BTA[[ ⟨^b1 e1, e2⟩ : t1 ×^b2 t2 : b0 ]] =
    let e1′ : t1′ : b1′ = E_BTA[[ e1 : t1 : b1 ⊓ b2 ⊓ b0 ]]
        e2′ : t2′ : b2′ = E_BTA[[ e2 : t2 : b1 ⊓ b2 ⊓ b0 ]]
        b′ = b1′ ⊓ b2′
    in if b1′ = b2′ then ⟨^b1′ e1′, e2′⟩ : t1′ ×^b1′ t2′ : b′
       else E_BTA[[ ⟨^b′ e1′, e2′⟩ : t1′ ×^b′ t2′ : b′ ]]

For the projections we first have to get the missing type-information. We know the type of one component and have to infer the other.

E_BTA[[ fst^b1 e : t0 : b0 ]] =
    let t0′ : b0′ = T_BTA[[ t0 : b0 ]]
        t1 be given by: e has type t0 × t1 when e and t0 have no underlinings
        e′ : t0″ ×^b′ t1″ : b′ = E_BTA[[ e : t0′ ×^b1 t1 : b1 ]]
    in UD(fst^b′ e′ : t0″ : b′, b0′)

E_BTA[[ snd^b1 e : t0 : b0 ]] =
    let t0′ : b0′ = T_BTA[[ t0 : b0 ]]
        t1 be given by: e has type t1 × t0 when e and t0 have no underlinings
        e′ : t1″ ×^b′ t0″ : b′ = E_BTA[[ e : t1 ×^b1 t0′ : b1 ]]
    in UD(snd^b′ e′ : t0″ : b′, b0′)

The function UD is used to check the applicability of the rules [up] and [down] in Figure 2.

UD(e : t : b, b′) =
    if (b = c) ∧ (b′ = r) ∧ (⊢0 t : b′) then e : t : b′                                              (down)
    else if (b = r) ∧ (b′ = c) ∧ (⊢0 t : b′) ∧ all variables x in e have the form x^c then e : t : b′   (up)
    else e : t : b

For variables the type annotation on the variable and the type have to be the same. Again UD is used because [up] or [down] may be applicable.

E_BTA[[ x^b1[t1] : t0 : b0 ]] =
    let t0′ : b0′ = T_BTA[[ t0 : b0 ]]
        t1′ : b1′ = T_BTA[[ (t1 : b1) ⊓ (t0′ : c) ]]
    in UD(x^b1′[t1′] : t1′ : b1′, b0′)

For λ-abstractions all the occurrences of x in e have to have the same type and binding time annotation. Writing λ^b we mean λ if b is c and λ̲ if b is r.

E_BTA[[ λ^b1 x[t1].e : t2 →^b2 t3 : b0 ]] =
    let t2′ →^b2′ t3′ : b0′ = T_BTA[[ t2 →^b2 t3 : b0 ]]
        e′ : t′ : b′ = E_BTA[[ e : t3′ : b2′ ]]
        Y = {(t1, b1), (t2′, b2′)} ∪ {(t, b) | x^b[t] occurs in e′}
        t : b = ⊓ Y
    in if all members of Y are the same then UD(λ^b1 x[t1].e′ : t1 →^b1 t′ : b1, b0′)
       else E_BTA[[ λ^b x[t].([x^b[t]/x]e′) : t →^b t′ : b0′ ]]

For application we have to infer the missing type information for the operand. We start with a type containing no underlinings, then we use a recursive call to get more information about the type. By doing this we cannot throw away the information we have gained; therefore we need the auxiliary function E′_BTA, which does not throw it away. Writing (^b ) we mean ( ) if b is c and ( )̲ if b is r.

E_BTA[[ e1 (^b1 e2) : t0 : b0 ]] =
    let t1 be given by: e2 has type t1 when e2 has no underlinings
    in E′_BTA[[ e1 (^b1 e2) : t1 →^b1 t0 : b0 ]]

E′_BTA[[ e1 (^b1 e2) : t1 →^b1 t2 : b0 ]] =
    let t2′ : b0′ = T_BTA[[ t2 : b0 ]]
        e1′ : t1′ →^b1′ t2″ : b1′ = E_BTA[[ e1 : t1 →^b1 t2′ : b1 ]]
        e2′ : t1″ : b2′ = E_BTA[[ e2 : t1 : b1 ]]
        t′ : b′ = (t1′, b1′) ⊓ (t1″, b2′)
    in if (t1′, b1′) = (t1″, b2′) then UD(e1′ (^b1′ e2′) : t2″ : b1′, b0′)
       else E′_BTA[[ e1′ (^b′ e2′) : t′ →^b′ t2″ : b0′ ]]

Let Fe be a function from all the variables in e to T2 × {r, c} satisfying that for all x, if Fe x = (t, b), then ⊢0 t : b can be inferred. It is shown in [9] that if e has type t, then Fe ⊢0 E_BTA[[ e : t : b ]]

can be inferred in Figure 2, and furthermore E_BTA[[ e : t : b ]] ≤ e : t : b holds and E_BTA[[ e : t : b ]] is the greatest among those e′ : t′ : b′ satisfying e′ : t′ : b′ ≤ e : t : b, provided that Fe′ ⊢0 e′ : t′ : b′ can be inferred. Here the partial order ≤ is extended to triples by defining e′ : t′ : b′ ≤ e : t : b to mean e′ ≤ e ∧ t′ ≤ t ∧ b′ ≤ b. On terms, ≤ is defined as: e′ ≤ e if and only if e′ and e are the same term if we ignore the underlinings, and every underlining in e is also in e′.

This algorithm is exponential in the size of the term.
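To make the ingredients used above concrete, here is a small sketch of the meet on binding times, the base-type case of T_BTA, and the ordering on annotated types used in these completeness statements (Haskell-style, reusing BT and T2 from the earlier sketch; the function names are ours, not those of [9]):

  -- r <= c, so the meet of two binding times is r unless both are c.
  meet :: BT -> BT -> BT
  meet C C = C
  meet _ _ = R

  -- Base-type case of T_BTA: annotate A_i with b1 /\ b2 and return that binding time.
  tBtaBase :: BT -> String -> BT -> (T2, BT)
  tBtaBase b1 a b2 = let b = meet b1 b2 in (TBase b a, b)

  -- The order r <= c on binding times.
  leqBT :: BT -> BT -> Bool
  leqBT R _ = True
  leqBT C k = k == C

  -- t' <= t: the same type once underlinings are ignored, and every
  -- underlining (R annotation) in t also occurs in t'.
  leqT2 :: T2 -> T2 -> Bool
  leqT2 (TBase b' a')    (TBase b a)   = a' == a && leqBT b' b
  leqT2 (TProd b' x' y') (TProd b x y) = leqBT b' b && leqT2 x' x && leqT2 y' y
  leqT2 (TFun  b' x' y') (TFun  b x y) = leqBT b' b && leqT2 x' x && leqT2 y' y
  leqT2 _                _             = False

The product and function cases of T_BTA then recurse with the meet of the binding times until the two subtypes agree, as described above.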

W(!_b A_i, p) = [b = p]
W(!_b (U × V), p) = [W(U, b), W(V, b), b = p]
W(!_b (U ⊸ V), p) = [W(U, b), W(V, b), b ≤ p]

Figure 3: Well-formedness for types.

3 A Constraint based Binding Time Analysis

Before defining the new well-formedness relation, we shall write the types and terms in a different way, so that they correspond more closely to the types and terms of linear logic. In this way we can write the rules for e.g. [⟨⟩] and [⟨⟩] as one rule. The new system we get in this section corresponds in a one-to-one manner to the analysis of Section 2.

3.1 Types and their well-formedness

The new types are

U ∈ T    U ::= !_b A_i | !_b (U × U) | !_b (U ⊸ U)

where b is r (for run-time) or c (for compile-time) or it is a so-called binding time variable. The set {r, c} forms a partially ordered set with the partial order given by r ≤ c. The types of T and T2 correspond to one another:

T               T2
!_r A_i         A̲_i
!_c A_i         A_i
!_r (U × U)     t ×̲ t
!_c (U × U)     t × t
!_r (U ⊸ U)     t →̲ t
!_c (U ⊸ U)     t → t

Just as the T2 types have to be well-formed, the new types have to be well-formed. We do this by means of constraints on the values a binding time variable can take. The constraints are a list of inequalities between binding times of the form p = q, p < q, or p ≤ q; later we shall also allow constraints of other forms. The constraints can be solved if there exists a mapping from all the binding time variables to {r, c} such that all the inequalities are satisfied. From this follows that the constraint set is unsolvable if its transitive closure contains inequalities of the form c ≤ r, c = r, r < r, c < c, or c < r.

The function W(U, p), defined in Figure 3, is used to determine constraints so that the type U with overall binding time p is well-formed. A base type !_b A_i is well-formed of kind p if p = b. A product type !_b (U × V) is well-formed of kind p if U and V are well-formed of kind b and b = p. The same should be true for function types, but a run-time function

16 type can be of both run-time kind and compile-time kind. The relation between the function W in Figure 3 and the well-formedness relation for types `0 in Figure 1 is given by Lemma 3 and 4. The proofs of the lemmas are in Appendix A on page 58. Lemma 3 If `0 t : b (Figure 1) and U is the type corresponding to t, then W(U, b) (Figure 3) is solvable. 2 Lemma 4 If W(U, b) (Figure 3) is solvable by M and t is the type corresponding to U M, then `0 t : M b (Figure 1) can be derived; U M is the type U where all binding time variables b i are replaced by M b i. 2 Example 5 As an example of using W we will nd the constraints for! b 1 (! b 2 (! b 3 (! b 4 A (! b 5 A) (! b 8 (! b 6 A (! b 7 A)) (! b 9 (! b 10 (! b 11 A (! b 12 A) (! b 13 (! b 14 A (! b 15 A))) to be well-formed of binding time p. We calculate W(! b 1 (! b 2 (! b 3 (! b 4 A (! b 5 A) (! b 8 (! b 6 A (! b 7 A)) (! b 9 (! b 10 (! b 11 A (! b 12 A) (! b 13 (! b 14 A (! b 15 A))), p) = [W(! b 1 (! b 2 (! b 3 (! b 4 A (! b 5 A) (! b 8 (! b 6 A (! b 7 A)),b 1 ), W(! b 9 (! b 10 (! b 11 A (! b 12 A) (! b 13 (! b 14 A (! b 15 A)), b 1 ), b 1 p] = [W(! b 3 (! b 4 A (! b 5 A), b 2 ), W(! b 8 (! b 6 A (! b 7 A), b 2 ), b 2 b 1, W(! b 10 (! b 11 A (! b 12 A), b 9 ), W(! b 13 (! b 14 A (! b 15 A), b 9 ), b 9 b 1, b 1 p] = [W(! b 4 A, b 3 ), W(! b 5 A, b 3 ), b 3 b 2, W(! b 6 A, b 8 ), W(! b 7 A, b 8 ), b 8 b 2, b 2 b 1, W(! b 11 A, b 10 ), W(! b 12 A, b 10 ), b 10 b 9, W(! b 14 A, b 13 ), W(! b 15 A, b 13 ), b 13 b 9, b 9 b 1, b 1 p] = [b 4 = b 3, b 5 = b 3, b 3 b 2, b 6 = b 8, b 7 = b 8, b 8 b 2, b 2 b 1, b 11 = b 10, b 12 = b 10, b 10 b 9 ], b 14 = b 13, b 15 = b 13, b 13 b 9, b 9 b 1, b 1 p] = [b 3 = b 4 = b 5, b 3 b 2, b 6 = b 7 = b 8, b 8 b 2, b 2 b 1, b 10 = b 11 = b 12, b 10 b 9, b 13 = b 14 = b 15, b 13 b 9, b 9 b 1, b 1 p] This means that the type has the form! b 1 (! b 2 (! b 3 (! b 3 A (! b 3 A) (! b 6 (! b 6 A (! b 6 A)) (! b 9 (! b 10 (! b 10 A (! b 10 A) (! b 13 (! b 13 A (! b 13 A))) with binding time p and constraints [b 3 b 2, b 6 b 2, b 2 b 1, b 10 b 9, b 13 b 9, b 9 b 1, b 1 p] 15

17 b 1 b 2 b 3 b 6 b 9 b 10 b 13 p r r r r r r r r r r r r r r r c c r r r r r r c c r r r c r r c c r r r c r c c c r r r c c r c c r r r c c c c c c r r r r r c c c r r c r r c c c r r c r c c c c r r c c r c c c r r c c c c c c r c r r r c c c r c c r r c b 1 b 2 b 3 b 6 b 9 b 10 b 13 p c c r c c r c c c c r c c c r c c c r c c c c c c c c r r r r c c c c r c r r c c c c r c r c c c c c r c c r c c c c r c c c c c c c c r r r c c c c c c r r c c c c c c r c c c c c c c c r c c c c c c c c c Figure 4: Solutions to the constraints in Example 5 All the solutions to the constraints are listed in Figure 4. Notice that the constraints have more than one solution. This means that if we want to nd all the well-formed annotations of a given T2 type, then we rst assign binding time variables to the corresponding T type. Now we can nd the constraints using W, and solving them give all the solutions which can be translated back to T2. That we have found all the possible annotations of the T2 type follows from Lemma 3 and Expressions and their well-formedness The terms are now u 2 E u ::= ( b u, u) j fst b u j snd b u j b x.u j ( b u u) j x where b is r or c or a binding time variable. Note that there is no annotations on variables and that the binding time of a variable is determined by its -binding. The terms correspond to the E2 terms in a one-to-one manner: E ( r u, u) ( c u, u) fst r u fst c u snd r u snd c u r x.u c x.u ( r u u) ( c u u) x E2 he, ei he, ei fst e fst e snd e snd e x.e x.e e ( e ) e ( e ) x 16
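Putting the pieces of Section 3.1 and the term syntax above into one place, the following Haskell-style sketch gives a representation of binding times with variables, the T types, the E terms, the constraint forms, and the constraint generator W of Figure 3 (all names are ours; the thesis's own implementation in Appendix B is in Miranda, and the p ⊔ I ≥ q constraint form is only needed from Section 4 onwards):

  -- Binding times: r, c, or a binding time variable.
  data B = Rb | Cb | BVar String deriving (Eq, Show)

  -- T types: !_b A_i, !_b (U x V), !_b (U -o V).
  data T = Base B String | Prod B T T | Lolli B T T deriving (Eq, Show)

  -- E terms: pairs, projections, lambda and application carry a binding time
  -- annotation; variables do not (their binding time comes from the lambda).
  data E = Pair B E E | Fst B E | Snd B E | Lam B String E | App B E E | Var String
    deriving (Eq, Show)

  -- Constraints between binding times: p = q, p <= q, p < q, and p |_| I >= q.
  data Constraint = CEq B B | CLe B B | CLt B B | CLub B [B] B deriving (Eq, Show)

  -- W(U, p): the constraints that make U well-formed of kind p (Figure 3).
  wConstraints :: T -> B -> [Constraint]
  wConstraints (Base b _)    p = [CEq b p]
  wConstraints (Prod b u v)  p = wConstraints u b ++ wConstraints v b ++ [CEq b p]
  wConstraints (Lolli b u v) p = wConstraints u b ++ wConstraints v b ++ [CLe b p]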

[weakening]    A:I ⊢1 u : U : p [C]   ⟹   A:I, x : t : b ⊢1 u : U : p [C, W(t, b)]    and x not in A

[contraction]  A:I, x : V : p, y : V : q ⊢1 u : U : b [C]   ⟹   A:I, z : V : p ⊢1 u[z/x, z/y] : U : b [C, p = q]    and z not in A

[exchange]     A:I, x : V1 : p, y : V2 : q, B:J ⊢1 u : U : b [C]   ⟹   A:I, y : V2 : q, x : V1 : p, B:J ⊢1 u : U : b [C]

Figure 5: The structural rules

3.2.1 The assumption list

An assumption list has the form x1 : t1 : b1, ..., xn : tn : bn where the x_i's are variables, the t_i's are the T type of the i'th variable, and the b_i's are the binding time of the i'th variable. We shall assume throughout that all the x_i's are distinct. This means that when combining two assumption lists they have to be disjoint, as opposed to the type environment in Figure 2, which is only extended for λ-abstraction. When making a proof in the inference system of Figure 2 we start with a "big" type environment, whereas here we start with only one variable in the assumption list.

When the assumption list is written in the form A:I, then I is the list of all the binding times of all the variables, and A is the list of all variables and their types. To combine two assumption lists A:I and B:J we use the notation A:I, B:J. When two lists of binding times I and J are to be combined we write (I, J). A list containing only one binding time b is written as (b). A list of binding times containing b as the first element and I as the rest is written as (b, I).

One may note that [weakening] allows us to extend an assumption list, [contraction] allows us to identify variables in an assumption list, and [exchange] allows us to reorder the assumption list. The three structural rules [weakening], [contraction], and [exchange] in Figure 5 for handling the assumption list have no analogy in Figure 2, because there the type environment is a function.

3.2.2 The well-formedness of expressions

Now the well-formedness relation has the form

A:I ⊢1 u : U : b [C]

and says that the term u has type U and binding time b under the assumptions A:I, and provided that the constraint set C can be solved.
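An assumption list can be represented directly, with the binding-time part I recoverable from it; a small sketch in the same style as before (names are ours), where [weakening] corresponds to the weaken function and its W(t, b) side constraints:

  -- An assumption list  x1 : t1 : b1, ..., xn : tn : bn  (all xi assumed distinct).
  type Assum = [(String, T, B)]

  -- The binding-time list I of an assumption list A:I.
  bindingTimes :: Assum -> [B]
  bindingTimes a = [b | (_, _, b) <- a]

  -- [weakening]: extend the list with a variable not already in A,
  -- paying the well-formedness constraints W(t, b).
  weaken :: Assum -> String -> T -> B -> (Assum, [Constraint])
  weaken a x t b = (a ++ [(x, t, b)], wConstraints t b)

[contraction] and [exchange] are likewise list operations: identifying two entries of the same type (adding the constraint p = q) and reordering entries.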

[id]    x : U : b ⊢1 x : U : b [W(U, b)]

[⊸I]    A:I, x : U : p ⊢1 v : V : q [C]   ⟹   A:I ⊢1 λ^p x.v : !_p (U ⊸ V) : p [C, p = q]

[⊸E]    A:I ⊢1 v : !_b (U ⊸ V) : p [C]    B:J ⊢1 u : U : q [D]   ⟹   A:I, B:J ⊢1 (^b v u) : V : b [C, D, p = q = b]

[×I]    A:I ⊢1 u : U : p [C]    B:J ⊢1 v : V : q [D]   ⟹   A:I, B:J ⊢1 (^p u, v) : !_p (U × V) : p [C, D, p = q]

[×E1]   A:I ⊢1 u : !_p (U × V) : q [C]   ⟹   A:I ⊢1 fst^p u : U : p [C, p = q]

[×E2]   A:I ⊢1 v : !_p (U × V) : q [C]   ⟹   A:I ⊢1 snd^p v : V : p [C, p = q]

[down]  A:I ⊢1 u : U : p [C]   ⟹   A:I ⊢1 u : U : q [C, D(U, p, q)]

[up]    A:I ⊢1 u : U : p [C]   ⟹   A:I ⊢1 u : U : q [C, U(U, p, I, q)]

D(!_b A_i, p, q) = [r = c]
D(!_b (U × V), p, q) = [r = c]
D(!_b (U ⊸ V), p, q) = [q < p, b ≤ q]

U(!_b A_i, p, I, q) = [r = c]
U(!_b (U × V), p, I, q) = [r = c]
U(!_b (U ⊸ V), p, I, q) = [p < q, I ≥ q, b ≤ p]

Figure 6: The well-formedness relation for the 2-level λ-calculus
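The side-condition functions D and U of Figure 6 translate into constraint generators in the same style; a sketch (again with our own names), where the unsolvable [r = c] is produced for non-function types:

  -- D(U, p, q): constraints for the [down] rule of Figure 6.
  dConstraints :: T -> B -> B -> [Constraint]
  dConstraints (Lolli b _ _) p q = [CLt q p, CLe b q]
  dConstraints _             _ _ = [CEq Rb Cb]              -- [r = c], unsolvable

  -- U(U, p, I, q): constraints for the [up] rule of Figure 6;
  -- I is the list of binding times of the assumption list.
  uConstraints :: T -> B -> [B] -> B -> [Constraint]
  uConstraints (Lolli b _ _) p is q = [CLt p q] ++ [CLe q i | i <- is] ++ [CLe b p]
  uConstraints _             _ _  _ = [CEq Rb Cb]           -- [r = c], unsolvable

The [I ≥ q] part is expanded point-wise over I, exactly as described in the text that follows.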

The [id]-rule of Figure 6 says that with the assumption that the variable x has type U and binding time b, then x has type U and kind b if U is well-formed of kind b. This rule is much the same as [x].

The [⊸I]-rule says that if with the assumption list A:I, x : U : p the term v has type V and binding time q, and constraints C, then with the assumption list A:I the term λ^p x.v has type !_p (U ⊸ V) with binding time p and constraints C and [p = q]. By comparing this rule with [λ] and [λ] in Figure 2, the rules say that the variable and the body of the abstraction have to have the same binding time and that the λ-abstraction itself has exactly this binding time.

The rule [⊸E] says that if with the assumption list A:I a term v has type !_b (U ⊸ V) and binding time p and constraints C, and with the assumption list B:J a term u has type U and binding time q and constraints D, then with the assumption list A:I, B:J the term (^b v u) has type V and binding time b and constraints C and D and [p = q = b]. By comparing this rule with [()] and [()] in Figure 2, the rules say that if with the same type environment the two terms have the same binding time, then the new term has this binding time. The difference between the two inference systems is that in Figure 6 the assumption lists are different, whereas in Figure 2 the type environments are equal. It is here the rules [weakening], [contraction], and [exchange] are to be applied. The rules [×I], [×E1], and [×E2] can be explained and compared with Figure 2 in much the same way.

The [down]-rule is used to transform a term of run-time function type of kind compile-time into a term of run-time function type of kind run-time. The function D is used to generate the constraints that ensure the correct use of the [down]-rule. To explain the definition of the function, consider the [down]-rule of Figure 2. We have to change the binding time from c to r; this is achieved by the constraint [q < p], whose only solution is q = r and p = c. It is required that the rule is only applied to run-time function types and not to compile-time function types; that is, we must ensure that b = r is the only possible solution for b. This is achieved by the constraint [b ≤ q] (= [b ≤ r]). The side condition of [down] in Figure 2 says that the type has to be well-formed of the new binding time q (= r); this can be ensured by the constraints generated by W(!_b (U ⊸ V), q). But we know that !_b (U ⊸ V) is well-formed of binding time p (= c) and that the type is a run-time function type. Then we also know that !_b (U ⊸ V) is well-formed of binding time r. So we can omit generating the constraints W(!_b (U ⊸ V), q). It should only be possible to apply [down] in the case where the term has a function type, and therefore D will generate unsolvable [r = c] constraints in all other cases.

The [up]-rule is used to transform a term of run-time function type of kind run-time into a term of run-time function type of kind compile-time. The function U is used to generate constraints to ensure the correct use of the [up]-rule. Again we have to ensure that the binding time is changed, from r to c; this is done by the constraint [p < q], which has one solution, p = r and q = c. Furthermore, in the assumption list all the binding times have to be compile-time, and this is ensured by the constraints [I ≥ q] because q = c. The operation ≥ is point-wise on I and can be written like this:

[(b, I) ≥ q] = [q ≤ b, I ≥ q]
[I ≥ q] = [ ]    if I is empty

But we will write it as [I ≥ q] for simplicity. We have to ensure that the rule is only applied to run-time function types; that is, we must ensure that b is r: here the constraint [b ≤ p] (= [b ≤ r]) will do. We have to ensure that the type is well-formed of the new binding time q (= c); this can be ensured by the constraints generated by W(!_b (U ⊸ V), q). Again we use the fact that !_b (U ⊸ V) is well-formed of binding time p (= r) and that it is a run-time function type, and therefore the type is also well-formed of binding time c. Finally, in the cases where the term does not have a function type we let U generate an unsolvable constraint [r = c].

3.2.3 Properties of the well-formedness relation

All the types constructed in ⊢1 are well-formed. This property is given by Lemma 6. The proof of the lemma is in Appendix A on page 59.

Lemma 6 If A:I ⊢1 u : U : b [C] (Figure 6) and the constraints C are solvable by M, then W(U, b) is solvable by M and W(V, p) is solvable by M for all x : V : p ∈ A:I. □

A variable can be renamed. The proof of the lemma is in Appendix A on page 61.

Lemma 7 If A:I, x : V : b ⊢1 u : U : p [C] and y ∉ A:I, then A:I, y : V : b ⊢1 u[y/x] : U : p [C, W(V, b)], and if C is solvable by M, then [C, W(V, b), b = b] is solvable by M. □

The relation between the well-formedness relation in Figure 2 and the one defined in Figure 6 is given by Propositions 8 and 9. The proofs of the propositions are in Appendix A on pages 61 and 63.

Proposition 8 If A:I ⊢1 u : U : b [C] (Figure 6) and the constraints C are solvable by M and e is the term corresponding to u^M and t is the type corresponding to U^M, then there exists tenv so that tenv ⊢0 e : t : M b (Figure 2), and if x_i : U_i : b_i ∈ A:I, then tenv x_i = (t_i, M b_i) and t_i is the type corresponding to U_i^M; u^M and U^M are the term u and the type U, respectively, where all binding time variables b_i are replaced by M b_i. □

b1 = p1 = p2    b2 = b3 = b4    b5 = b6 = b7
r               r               r
c               c               r
c               c               c

Figure 7: Solutions to the constraints in Example 10

Proposition 9 If tenv ⊢0 e : t : b (Figure 2), then there exist constraints C and A:I such that A:I ⊢1 u : U : b [C] (Figure 6) and C is solvable and t is the type corresponding to U, and e is the term corresponding to u. Whenever tenv x_i = (t_i, b_i) and ⊢0 t_i : b_i, then x_i : U_i : b_i ∈ A:I provided t_i is the type corresponding to U_i. □

Example 10 As an example of using the system in Figure 6 we use the same example as for Figure 2. We want to show that the term λ^c x.λ^c y.(^r x y) with type

!_c (!_r (!_r (!_r A ⊸ !_r A) ⊸ !_r (!_r A ⊸ !_r A)) ⊸ !_c (!_r (!_r A ⊸ !_r A) ⊸ !_r (!_r A ⊸ !_r A)))

and binding time c is well-formed for some base type !_r A. In doing this we first place binding time variables everywhere we can, to make the proof more general. The question is now: how can we annotate the term λ^b1 x.λ^b2 y.(^b3 x y) given that it has binding time p? We start by using [id] twice and we have

x : Ux : p1 ⊢1 x : Ux : p1 [W(Ux, p1)]

and

y : Uy : p2 ⊢1 y : Uy : p2 [W(Uy, p2)]

where Ux is !_b1 (Uy ⊸ !_b5 (!_b6 A ⊸ !_b7 A)) and Uy = !_b2 (!_b3 A ⊸ !_b4 A). Applying [⊸E] we get

x : Ux : p1, y : Uy : p2 ⊢1 (^b1 x y) : !_b5 (!_b6 A ⊸ !_b7 A) : b1 [W(Ux, p1), W(Uy, p2), p1 = p2 = b1]

Applying [⊸I] we get

x : Ux : p1 ⊢1 λ^p2 y.(^b1 x y) : !_p2 (Uy ⊸ !_b5 (!_b6 A ⊸ !_b7 A)) : p2 [W(Ux, p1), W(Uy, p2), p1 = p2 = b1, p2 = b1]

Applying [⊸I] once more we get

∅ ⊢1 λ^p1 x.λ^p2 y.(^b1 x y) : !_p1 (Ux ⊸ !_p2 (Uy ⊸ !_b5 (!_b6 A ⊸ !_b7 A))) : p1 [W(Ux, p1), W(Uy, p2), p1 = p2 = b1, p2 = b1, p1 = p2]

The constraints from W(Ux, p1) are [b3 = b2, b4 = b2, b2 ≤ b1, b6 = b5, b7 = b5, b5 ≤ b1, b1 ≤ p1] and the constraints from W(Uy, p2) are [b3 = b2, b4 = b2, b2 ≤ p2].

All the constraints are [b3 = b2, b4 = b2, b2 ≤ b1, b6 = b5, b7 = b5, b5 ≤ b1, b1 ≤ p1, b3 = b2, b4 = b2, b2 ≤ p2, p1 = p2 = b1, p2 = b1, p1 = p2]. The solutions to the constraints are in Figure 7, and the term is λ^b1 x.λ^b1 y.(^b1 x y) with type

!_b1 (!_b1 (!_b2 (!_b2 A ⊸ !_b2 A) ⊸ !_b5 (!_b5 A ⊸ !_b5 A)) ⊸ !_b1 (!_b2 (!_b2 A ⊸ !_b2 A) ⊸ !_b5 (!_b5 A ⊸ !_b5 A)))

and binding time b1. None of the solutions is the one we are looking for. In the proof we did not apply the rules [up] and [down] as we did in the proof in Example 2.

Now we try to copy what we did in Example 2. Again we start by using [id] twice as above, but before we apply [⊸E] we apply [down] on the results from [id]. Then we get

x : Ux : p1 ⊢1 x : Ux : p1 [W(Ux, p1)]
x : Ux : p1 ⊢1 x : Ux : p2 [W(Ux, p1), D(Ux, p1, p2)]

and

y : Uy : p3 ⊢1 y : Uy : p3 [W(Uy, p3)]
y : Uy : p3 ⊢1 y : Uy : p4 [W(Uy, p3), D(Uy, p3, p4)]

Again Ux is !_b1 (Uy ⊸ !_b5 (!_b6 A ⊸ !_b7 A)) and Uy = !_b2 (!_b3 A ⊸ !_b4 A). Now we apply [⊸E] to get

x : Ux : p1, y : Uy : p3 ⊢1 (^b1 x y) : !_b5 (!_b6 A ⊸ !_b7 A) : b1
  [W(Ux, p1), D(Ux, p1, p2), W(Uy, p3), D(Uy, p3, p4), p2 = p4 = b1]

Now we apply [up] and get

x : Ux : p1, y : Uy : p3 ⊢1 (^b1 x y) : !_b5 (!_b6 A ⊸ !_b7 A) : p5
  [W(Ux, p1), D(Ux, p1, p2), W(Uy, p3), D(Uy, p3, p4), p2 = p4 = b1, U(!_b5 (!_b6 A ⊸ !_b7 A), b1, I, p5)]

where I = (p1, p3). Finally we apply [⊸I] twice to get

x : Ux : p1 ⊢1 λ^p3 y.(^b1 x y) : !_p3 (Uy ⊸ !_b5 (!_b6 A ⊸ !_b7 A)) : p3
  [W(Ux, p1), D(Ux, p1, p2), W(Uy, p3), D(Uy, p3, p4), p2 = p4 = b1, U(!_b5 (!_b6 A ⊸ !_b7 A), b1, I, p5), p3 = p5]

∅ ⊢1 λ^p1 x.λ^p3 y.(^b1 x y) : !_p1 (Ux ⊸ !_p3 (Uy ⊸ !_b5 (!_b6 A ⊸ !_b7 A))) : p1
  [W(Ux, p1), D(Ux, p1, p2), W(Uy, p3), D(Uy, p3, p4), p2 = p4 = b1, U(!_b5 (!_b6 A ⊸ !_b7 A), b1, I, p5), p3 = p5, p1 = p3]

The constraints are now [b3 = b2, b4 = b2, b2 ≤ b1, b6 = b5, b7 = b5, b5 ≤ b1, b1 ≤ p1, p2 < p1, b1 ≤ p2, b3 = b2, b4 = b2, b2 ≤ p3, p4 < p3, b2 ≤ p4, p2 = p4 = b1, b1 < p5, p5 ≤ p1, p5 ≤ p3, b5 ≤ b1, p3 = p5, p1 = p3]. There is one solution, which is the one we are looking for (b1 = p2 = p4 = r, b2 = b3 = b4 = r, b5 = b6 = b7 = r, p1 = p3 = p5 = c). □
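Because a binding time is just r or c, a constraint set over finitely many binding time variables, such as the one in Example 10, can be checked naively by enumerating all assignments; the sketch below does exactly that (for illustration only — Section 6 gives the actual solving algorithm; the names are ours and the representations are those of the earlier sketches):

  import Data.List (nub)

  -- Replace binding time variables by their value under the assignment m.
  evalB :: [(String, B)] -> B -> B
  evalB m (BVar x) = maybe Rb id (lookup x m)   -- every variable is bound by construction
  evalB _ b        = b

  leB :: B -> B -> Bool          -- the order r <= c on ground binding times
  leB Rb _ = True
  leB b  q = b == Cb && q == Cb

  lubB :: B -> B -> B            -- least upper bound on ground binding times
  lubB Cb _ = Cb
  lubB _  q = q

  sat :: [(String, B)] -> Constraint -> Bool
  sat m (CEq p q)     = evalB m p == evalB m q
  sat m (CLe p q)     = leB (evalB m p) (evalB m q)
  sat m (CLt p q)     = evalB m p == Rb && evalB m q == Cb
  sat m (CLub p is q) = and [leB (evalB m q) (lubB (evalB m p) (evalB m i)) | i <- is]

  -- All assignments of the variables occurring in the constraints that satisfy them.
  solutions :: [Constraint] -> [[(String, B)]]
  solutions cs = [m | m <- assigns vars, all (sat m) cs]
    where
      vars = nub [x | c <- cs, BVar x <- operands c]
      operands (CEq p q)     = [p, q]
      operands (CLe p q)     = [p, q]
      operands (CLt p q)     = [p, q]
      operands (CLub p is q) = p : q : is
      assigns []     = [[]]
      assigns (v:vs) = [(v, b) : m | b <- [Rb, Cb], m <- assigns vs]

Running solutions on the constraint set of the second derivation above yields exactly one assignment, corresponding to the intended annotation.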

D′(!_b A_i, p, q) = [r = c]
D′(!_b (U × V), p, q) = [r = c]
D′(!_b (U ⊸ V), p, q) = [q ≤ p, b ≤ q]

U′(!_b A_i, p, I, q) = [r = c]
U′(!_b (U × V), p, I, q) = [r = c]
U′(!_b (U ⊸ V), p, I, q) = [p ≤ q, p ⊔ I ≥ q, b ≤ p]

Figure 8: [up] and [down] on function types

4 Incorporating [up] and [down]

Now we want to build the two rules [up] and [down] into all the other logical rules and the axiom [id]. Then all that is left will be the structural and logical rules. This makes it easier to make a proof in the inference system, since we do not have to think explicitly about using [up] and [down] as we had to do in Example 10. This implies that when making a proof, we just apply the logical rules and get all the solutions with one proof instead of two or even more proofs as in Example 10. To do this we proceed in four stages:

Stage 1: Modify the definition of U and D such that [up] and [down] may leave the binding time unchanged (in the case of function types).

Stage 2: Modify the definition of U and D such that [up] and [down] may succeed on base types and product types as well as function types.

Stage 3: Combine the [up] and [down] rules into one rule called [up-down].

Stage 4: Integrate the [up-down]-rule with all the other rules.

Again, the new system we get in this section corresponds in a one-to-one manner to the system in Section 3 and therefore also to the analysis in Section 2.

4.1 [up] and [down] on function types

In Stage 1 we shall modify the definitions of U and D such that [up] and [down] may leave the binding time unchanged (in the case of function types). This is done by defining two new functions D′ and U′ with this property and using them together with the rules in Figure 6 instead of D and U. This new well-formedness relation is called ⊢2. The two functions D′ and U′ are defined in Figure 8. The idea is to allow q ≤ p instead of q < p in D′ and p ≤ q instead of p < q in U′. This works fine for D′ but not for U′. To see this, consider the case where the type is a function type !_b (U ⊸ V) and both p and q are c, that is, no change in binding time. Then the constraints generated by U′ would be

U′(!_b (U ⊸ V), p, I, q) = U′(!_b (U ⊸ V), c, I, c) = [c ≤ c, I ≥ c, b ≤ c]

Thus the assumption list is restricted to contain only compile-time variables. To avoid this we allow the constraints to also contain inequalities of the form p ⊔ I ≥ q, which means that the least upper bound of p and every binding time of I has to be greater than or equal to q. It is an abbreviation for

[p ⊔ (b, I) ≥ q] = [p ⊔ b ≥ q, p ⊔ I ≥ q]
[p ⊔ I ≥ q] = [ ]    if I is empty

Instead of using [I ≥ q] in the definition of U′ we use [p ⊔ I ≥ q]. This solves the problem, since c ⊔ b is c for all b. If the [up]-rule is going to change the binding time, then p is r (and q is c), and r ⊔ b is b for all b, so [r ⊔ I ≥ q] is equivalent to [I ≥ q].

In the case of no change in binding time for a compile-time function type, the constraints that ensure that the type is a run-time function type, that is [b ≤ p] in U′ and [b ≤ q] in D′, are both solvable, because both p and q have to be c since otherwise the type is not well-formed. If both p and q are r, then the type is not well-formed and the constraints are unsolvable because [b ≤ p] = [c ≤ r] and [b ≤ q] = [c ≤ r].

The relation between the inference systems of Figure 6 and Figure 8 is given by the next two lemmas. The proofs of the lemmas are in Appendix A on pages 65 and 66.

Lemma 11 If A:I ⊢1 u : U : b [C] in Figure 6 and the constraints C are solvable by M, then C′ exists so that A:I ⊢2 u : U : b [C′] in Figure 8 and the constraints C′ are solvable by M. □

Lemma 12 If A:I ⊢2 u : U : b [C] in Figure 8 and the constraints C are solvable by M, then C′ exists so that A:I ⊢1 u : U : b [C′] in Figure 6 and the constraints C′ are solvable by M. □

4.2 [up] and [down] on non-function types

In Stage 2 we modify the [up] and [down] rules to succeed on base types and product types as well as function types. The two new functions D″ and U″ are defined in Figure 9 and are used in Figure 6 instead of D and U. The new well-formedness relation is now called ⊢3. The constraints generated for a base type or a product type U have to ensure that the binding time is not changed ([p = q]).

The relation between the inference systems of Figure 8 and Figure 9 is given by the next two lemmas. The proofs of the lemmas are in Appendix A on page 68.

Lemma 13 If A:I ⊢2 u : U : b [C] in Figure 8 and the constraints C are solvable by M, then C′ exists so that A:I ⊢3 u : U : b [C′] in Figure 9 and the constraints C′ are solvable by M. □

D″(!_b A_i, p, q) = [p = q]
D″(!_b (U × V), p, q) = [p = q]
D″(!_b (U ⊸ V), p, q) = [q ≤ p, b ≤ q]

U″(!_b A_i, p, I, q) = [p = q]
U″(!_b (U × V), p, I, q) = [p = q]
U″(!_b (U ⊸ V), p, I, q) = [p ≤ q, p ⊔ I ≥ q, b ≤ p]

Figure 9: [up] and [down] on non-function types

[up-down]   A:I ⊢4 u : U : p [C]   ⟹   A:I ⊢4 u : U : q [C, UD(U, p, I, q)]

UD(!_b A_i, p, I, q) = [p = q]
UD(!_b (U × V), p, I, q) = [p = q]
UD(!_b (U ⊸ V), p, I, q) = [b ≤ q, b ≤ p, p ⊔ I ≥ q]

Figure 10: The [up-down]-rule

Lemma 14 If A:I ⊢3 u : U : b [C] in Figure 9 and the constraints C are solvable by M, then C′ exists so that A:I ⊢2 u : U : b [C′] in Figure 8 and the constraints C′ are solvable by M. □

4.3 The [up-down]-rule

In Stage 3 we combine [up] and [down] into one rule [up-down]. This can be achieved by having one rule combining [up] and [down] and then generating constraints with a new function UD. The new well-formedness relation ⊢4 is defined as in Figure 6, but instead of using the two rules [up] and [down] we use the rule [up-down] defined in Figure 10. For base types and product types there is no change from D″ and U″: we still generate the constraint [p = q]. We want the constraints to be as follows on function types:

b  p  q    UD(!_b (U ⊸ V), p, I, q)
r  r  r    id
r  r  c    up
r  c  r    down
r  c  c    id
c  r  r    unsolvable
c  r  c    unsolvable
c  c  r    unsolvable
c  c  c    id

Here id means that the generated constraints have to be solvable and do not change the binding time. up and down mean that the constraints have to be solvable and that they change the binding time according to the application of, respectively, the rule [up] or [down]. The three rows marked unsolvable correspond to the fact that a compile-time function type cannot be of run-time kind.

Both D″ and U″ behave just like this except for one case each: D″ cannot cope with the case up and U″ not with down. The problem comes from the bond between p and q. Clearly we cannot include both q ≤ p and p ≤ q, as then p = q would follow and the rule [up-down] would always act as the identity, contrary to what we are aiming for. In the following table the constraints from D″ and U″ are summarized. In the last column there is a √ if all the constraints are solvable and nothing if they are unsolvable. If the solvability of the constraints depends on I, then this is shown by the constraints involving I. This only happens in the case of up.

b  p  q    [b ≤ q]    [b ≤ p]    [p ⊔ I ≥ q]    solvable
r  r  r    [r ≤ r]    [r ≤ r]    [r ⊔ I ≥ r]    √
r  r  c    [r ≤ c]    [r ≤ r]    [r ⊔ I ≥ c]    [r ⊔ I ≥ c]
r  c  r    [r ≤ r]    [r ≤ c]    [c ⊔ I ≥ r]    √
r  c  c    [r ≤ c]    [r ≤ c]    [c ⊔ I ≥ c]    √
c  r  r    [c ≤ r]    [c ≤ r]    [r ⊔ I ≥ r]
c  r  c    [c ≤ c]    [c ≤ r]    [r ⊔ I ≥ c]
c  c  r    [c ≤ r]    [c ≤ c]    [c ⊔ I ≥ r]
c  c  c    [c ≤ c]    [c ≤ c]    [c ⊔ I ≥ c]    √

Notice that the constraints of UD(!_b (U ⊸ V), p, I, q) are those of D″(!_b (U ⊸ V), p, q) and U″(!_b (U ⊸ V), p, I, q), except that we have not included q ≤ p from D″ and p ≤ q from U″. The only case where it is necessary to look at I is when p is r and q is c. This knowledge can be used when solving the constraints, so we collect the constraints in the form p ⊔ I ≥ q rather than writing them out.

The relation between the inference systems of Figure 9 and Figure 10 is given by the next two lemmas. The proofs of the lemmas are in Appendix A on pages 69 and 71.

Lemma 15 If A:I ⊢3 u : U : b [C] in Figure 9 and the constraints C are solvable by M, then C′ exists so that A:I ⊢4 u : U : b [C′] in Figure 10 and the constraints C′ are solvable by M. □

Lemma 16 If A:I ⊢4 u : U : b [C] in Figure 10 and the constraints C are solvable by M, then C′ exists so that A:I ⊢3 u : U : b [C′] in Figure 9 and the constraints C′ are solvable by M. □

4.4 Making the [up-down]-rule implicit

The new rules can now be formed like

[old rule]
[up-down]

28 [(I] [id] x : U : b `5 x : U : q [W(U, b), UD(U, b, (b), q)] A:I, x : U : p `5 v : V : q [C] A:I `5 p x.v :! p (U ( V) : b [C, UD(! p (U ( V), p, I, b), p = q] [(E] A:I `5 v :! b (U ( V) : p [C] B:J `5 u : U : q [D] A:I, B:J `5 ( b v u) : V : s [C, D, UD(V, b, (I, J), s), p = q = b] [I] A:I `5 u : U : p [C] B:J `5 v : V : q [D] A:I, B:J `5 ( p u, v) :! p (U V) : p [C, D, p = q] [E 1 ] [E 2 ] [weakening] [contraction] A:I `5 u :! p (U V) : q [C] A:I `5 fst p u : U : b [C, UD(U, p, I, b), p = q] A:I `5 v :! p (U V) : q [C] A:I `5 snd p v : V : b [C, UD(V, p, I, b), p = q] A:I `5 u : U : p [C] A:I, x : V : b `5 u : U : p [C, W(V, b)] A:I, x : V : p, y : V : q `5 u : U : b [C] A:I, z : V : p `5 u[z/x, z/y] : U : b [C, p = q] and x not in A [exchange] A:I, x : V 1 : p, y : V 2 : q, B:J `5 u : U : b [C] A:I, y : V 2 : q, x : V 1 : p, B:J `5 u : U : b [C] W(! b A i, p) = [b = p] W(! b (U V), p) = [W(U, b), W(V, b), b = p] W(! b (U ( V), p) = [W(U, b), W(V, b), b p] UD(! b A i, p, I, q) = [p = q] UD(! b (U V), p, I, q) = [p = q] UD(! b (U ( V), p, I, q) = [b q, b p, p t I q] and z not in A Figure 11: The well-formedness relation for the 2-level -calculus without [up] and [down] 27

for every logical rule of Figure 6. This is illustrated by the rule [⊸I]. The new rule [⊸I] is obtained by

[⊸I]       A:I, x : U : p ⊢5 v : V : q [C]   ⟹   A:I ⊢5 λ^p x.v : !_p (U ⊸ V) : p [C, p = q]
[up-down]  A:I ⊢5 λ^p x.v : !_p (U ⊸ V) : p [C, p = q]   ⟹   A:I ⊢5 λ^p x.v : !_p (U ⊸ V) : b [C, UD(!_p (U ⊸ V), p, I, b), p = q]

Figure 11 defines the well-formedness relation with the [up-down]-rule integrated in all the logical rules, and therefore no explicit [up-down]-rule. There is one exception in the rule [×I], because the type in the conclusion is a product type and it makes no sense to apply the [up-down]-rule.

The relation between the inference systems of Figure 10 and Figure 11 is given by the next two lemmas. The proofs of the lemmas are in Appendix A on pages 72 and 73.

Lemma 17 If A:I ⊢4 u : U : b [C] in Figure 10 and the constraints C are solvable by M, then C′ exists so that A:I ⊢5 u : U : b [C′] in Figure 11 and the constraints C′ are solvable by M. □

Lemma 18 If A:I ⊢5 u : U : b [C] in Figure 11 and the constraints C are solvable by M, then C′ exists so that A:I ⊢4 u : U : b [C′] in Figure 10 and the constraints C′ are solvable by M. □

From Propositions 8 and 9 and Lemmas 11-18 follows:

Theorem 19
If A:I ⊢5 u : U : b [C] (Figure 11) and the constraints C are solvable by M and e is the term corresponding to u^M and t is the type corresponding to U^M, then there exists tenv so that tenv ⊢0 e : t : M b (Figure 2), and if x_i : U_i : b_i ∈ A:I, then tenv x_i = (t_i, M b_i) and t_i is the type corresponding to U_i^M; u^M and U^M are the term u and the type U, respectively, where all binding time variables b_i are replaced by M b_i.
And if tenv ⊢0 e : t : b (Figure 2), then there exist constraints C and A:I such that A:I ⊢5 u : U : b [C] (Figure 11) and C is solvable and t is the type corresponding to U, and e is the term corresponding to u. Whenever tenv x_i = (t_i, b_i) and ⊢0 t_i : b_i, then x_i : U_i : b_i ∈ A:I provided t_i is the type corresponding to U_i.

Example 20 As an example of using the inference system in Figure 11 we use the same term as in Example 10. We want to show that the term λ^c x.λ^c y.(^r x y) with type

!_c (!_r (!_r (!_r A ⊸ !_r A) ⊸ !_r (!_r A ⊸ !_r A)) ⊸ !_c (!_r (!_r A ⊸ !_r A) ⊸ !_r (!_r A ⊸ !_r A)))

and binding time c is well-formed for some base type !_r A. In doing this we first place binding time variables everywhere we can, to make the proof more general. The question is now: how can we annotate the term λ^b1 x.λ^b2 y.(^b3 x y) given that it has binding time p? We start by using [id] twice and we have

x : Ux : p1 ⊢5 x : Ux : p2 [W(Ux, p1), UD(Ux, p1, (p1), p2)]
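The combined side condition UD used in this step — and throughout Figures 10 and 11 — can be sketched in the same style as D and U above (the function name is ours):

  -- UD(U, p, I, q): the [up-down] side condition of Figures 10 and 11.
  udConstraints :: T -> B -> [B] -> B -> [Constraint]
  udConstraints (Lolli b _ _) p is q = [CLe b q, CLe b p, CLub p is q]
  udConstraints _             p _  q = [CEq p q]   -- base and product types: no change of binding time

At the [id] step above, udConstraints contributes exactly the constraints written UD(Ux, p1, (p1), p2).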


More information

Essential facts about NP-completeness:

Essential facts about NP-completeness: CMPSCI611: NP Completeness Lecture 17 Essential facts about NP-completeness: Any NP-complete problem can be solved by a simple, but exponentially slow algorithm. We don t have polynomial-time solutions

More information

Lecture Notes on Sequent Calculus

Lecture Notes on Sequent Calculus Lecture Notes on Sequent Calculus 15-816: Modal Logic Frank Pfenning Lecture 8 February 9, 2010 1 Introduction In this lecture we present the sequent calculus and its theory. The sequent calculus was originally

More information

CHAPTER 1: Functions

CHAPTER 1: Functions CHAPTER 1: Functions 1.1: Functions 1.2: Graphs of Functions 1.3: Basic Graphs and Symmetry 1.4: Transformations 1.5: Piecewise-Defined Functions; Limits and Continuity in Calculus 1.6: Combining Functions

More information

Exhaustive Classication of Finite Classical Probability Spaces with Regard to the Notion of Causal Up-to-n-closedness

Exhaustive Classication of Finite Classical Probability Spaces with Regard to the Notion of Causal Up-to-n-closedness Exhaustive Classication of Finite Classical Probability Spaces with Regard to the Notion of Causal Up-to-n-closedness Michaª Marczyk, Leszek Wro«ski Jagiellonian University, Kraków 16 June 2009 Abstract

More information

Linear Programming Redux

Linear Programming Redux Linear Programming Redux Jim Bremer May 12, 2008 The purpose of these notes is to review the basics of linear programming and the simplex method in a clear, concise, and comprehensive way. The book contains

More information

Linear Regression and Its Applications

Linear Regression and Its Applications Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start

More information

Lecture 2: Review of Prerequisites. Table of contents

Lecture 2: Review of Prerequisites. Table of contents Math 348 Fall 217 Lecture 2: Review of Prerequisites Disclaimer. As we have a textbook, this lecture note is for guidance and supplement only. It should not be relied on when preparing for exams. In this

More information

Lecture Notes on Certifying Theorem Provers

Lecture Notes on Certifying Theorem Provers Lecture Notes on Certifying Theorem Provers 15-317: Constructive Logic Frank Pfenning Lecture 13 October 17, 2017 1 Introduction How do we trust a theorem prover or decision procedure for a logic? Ideally,

More information

A Quick Introduction to Row Reduction

A Quick Introduction to Row Reduction A Quick Introduction to Row Reduction Gaussian Elimination Suppose we are asked to solve the system of equations 4x + 5x 2 + 6x 3 = 7 6x + 7x 2 + 8x 3 = 9. That is, we want to find all values of x, x 2

More information

3.1 Universal quantification and implication again. Claim 1: If an employee is male, then he makes less than 55,000.

3.1 Universal quantification and implication again. Claim 1: If an employee is male, then he makes less than 55,000. Chapter 3 Logical Connectives 3.1 Universal quantification and implication again So far we have considered an implication to be universal quantication in disguise: Claim 1: If an employee is male, then

More information

Lectures 15: Parallel Transport. Table of contents

Lectures 15: Parallel Transport. Table of contents Lectures 15: Parallel Transport Disclaimer. As we have a textbook, this lecture note is for guidance and supplement only. It should not be relied on when preparing for exams. In this lecture we study the

More information

The Integers. Peter J. Kahn

The Integers. Peter J. Kahn Math 3040: Spring 2009 The Integers Peter J. Kahn Contents 1. The Basic Construction 1 2. Adding integers 6 3. Ordering integers 16 4. Multiplying integers 18 Before we begin the mathematics of this section,

More information

Safety Analysis versus Type Inference

Safety Analysis versus Type Inference Information and Computation, 118(1):128 141, 1995. Safety Analysis versus Type Inference Jens Palsberg palsberg@daimi.aau.dk Michael I. Schwartzbach mis@daimi.aau.dk Computer Science Department, Aarhus

More information

On 3-valued paraconsistent Logic Programming

On 3-valued paraconsistent Logic Programming Marcelo E. Coniglio Kleidson E. Oliveira Institute of Philosophy and Human Sciences and Centre For Logic, Epistemology and the History of Science, UNICAMP, Brazil Support: FAPESP Syntax Meets Semantics

More information

Concurrent Non-malleable Commitments from any One-way Function

Concurrent Non-malleable Commitments from any One-way Function Concurrent Non-malleable Commitments from any One-way Function Margarita Vald Tel-Aviv University 1 / 67 Outline Non-Malleable Commitments Problem Presentation Overview DDN - First NMC Protocol Concurrent

More information

Richard DiSalvo. Dr. Elmer. Mathematical Foundations of Economics. Fall/Spring,

Richard DiSalvo. Dr. Elmer. Mathematical Foundations of Economics. Fall/Spring, The Finite Dimensional Normed Linear Space Theorem Richard DiSalvo Dr. Elmer Mathematical Foundations of Economics Fall/Spring, 20-202 The claim that follows, which I have called the nite-dimensional normed

More information

Equational Reasoning in Algebraic Structures: a Complete Tactic

Equational Reasoning in Algebraic Structures: a Complete Tactic Equational Reasoning in Algebraic Structures: a Complete Tactic Luís Cruz-Filipe 1,2 and Freek Wiedijk 1 1 NIII, University of Nijmegen, Netherlands and 2 CLC, Lisbon, Portugal Abstract We present rational,

More information

Welcome to Math Video Lessons. Stanley Ocken. Department of Mathematics The City College of New York Fall 2013

Welcome to Math Video Lessons. Stanley Ocken. Department of Mathematics The City College of New York Fall 2013 Welcome to Math 19500 Video Lessons Prof. Department of Mathematics The City College of New York Fall 2013 An important feature of the following Beamer slide presentations is that you, the reader, move

More information

ONLINE LINEAR DISCREPANCY OF PARTIALLY ORDERED SETS

ONLINE LINEAR DISCREPANCY OF PARTIALLY ORDERED SETS ONLINE LINEAR DISCREPANCY OF PARTIALLY ORDERED SETS MITCHEL T. KELLER, NOAH STREIB, AND WILLIAM T. TROTTER This article is dedicated to Professor Endre Szemerédi on the occasion of his 70 th birthday.

More information

Restricted b-matchings in degree-bounded graphs

Restricted b-matchings in degree-bounded graphs Egerváry Research Group on Combinatorial Optimization Technical reports TR-009-1. Published by the Egerváry Research Group, Pázmány P. sétány 1/C, H1117, Budapest, Hungary. Web site: www.cs.elte.hu/egres.

More information

2 THE COMPUTABLY ENUMERABLE SUPERSETS OF AN R-MAXIMAL SET The structure of E has been the subject of much investigation over the past fty- ve years, s

2 THE COMPUTABLY ENUMERABLE SUPERSETS OF AN R-MAXIMAL SET The structure of E has been the subject of much investigation over the past fty- ve years, s ON THE FILTER OF COMPUTABLY ENUMERABLE SUPERSETS OF AN R-MAXIMAL SET Steffen Lempp Andre Nies D. Reed Solomon Department of Mathematics University of Wisconsin Madison, WI 53706-1388 USA Department of

More information

usual one uses sequents and rules. The second one used special graphs known as proofnets.

usual one uses sequents and rules. The second one used special graphs known as proofnets. Math. Struct. in omp. Science (1993), vol. 11, pp. 1000 opyright c ambridge University Press Minimality of the orrectness riterion for Multiplicative Proof Nets D E N I S B E H E T RIN-NRS & INRILorraine

More information

Lecture 4: Proof of Shannon s theorem and an explicit code

Lecture 4: Proof of Shannon s theorem and an explicit code CSE 533: Error-Correcting Codes (Autumn 006 Lecture 4: Proof of Shannon s theorem and an explicit code October 11, 006 Lecturer: Venkatesan Guruswami Scribe: Atri Rudra 1 Overview Last lecture we stated

More information

Chapter 0 Introduction Suppose this was the abstract of a journal paper rather than the introduction to a dissertation. Then it would probably end wit

Chapter 0 Introduction Suppose this was the abstract of a journal paper rather than the introduction to a dissertation. Then it would probably end wit Chapter 0 Introduction Suppose this was the abstract of a journal paper rather than the introduction to a dissertation. Then it would probably end with some cryptic AMS subject classications and a few

More information

Resolution for Predicate Logic

Resolution for Predicate Logic Logic and Proof Hilary 2016 James Worrell Resolution for Predicate Logic A serious drawback of the ground resolution procedure is that it requires looking ahead to predict which ground instances of clauses

More information

(Type) Constraints. Solving constraints Type inference

(Type) Constraints. Solving constraints Type inference A (quick) tour of ML F (Graphic) Types (Type) Constraints Solving constraints Type inference Type Soundness A Fully Graphical Presentation of ML F Didier Rémy & Boris Yakobowski Didier Le Botlan INRIA

More information

Notes from Yesterday s Discussion. Big Picture. CIS 500 Software Foundations Fall November 1. Some lessons.

Notes from Yesterday s  Discussion. Big Picture. CIS 500 Software Foundations Fall November 1. Some lessons. CIS 500 Software Foundations Fall 2006 Notes from Yesterday s Email Discussion November 1 Some lessons This is generally a crunch-time in the semester Slow down a little and give people a chance to catch

More information

EXTENDED ABSTRACT. 2;4 fax: , 3;5 fax: Abstract

EXTENDED ABSTRACT. 2;4 fax: , 3;5 fax: Abstract 1 Syntactic denitions of undened: EXTENDED ABSTRACT on dening the undened Zena Ariola 1, Richard Kennaway 2, Jan Willem Klop 3, Ronan Sleep 4 and Fer-Jan de Vries 5 1 Computer and Information Science Department,

More information

Element x is R-minimal in X if y X. R(y, x).

Element x is R-minimal in X if y X. R(y, x). CMSC 22100/32100: Programming Languages Final Exam M. Blume December 11, 2008 1. (Well-founded sets and induction principles) (a) State the mathematical induction principle and justify it informally. 1

More information

3.2 Reduction 29. Truth. The constructor just forms the unit element,. Since there is no destructor, there is no reduction rule.

3.2 Reduction 29. Truth. The constructor just forms the unit element,. Since there is no destructor, there is no reduction rule. 32 Reduction 29 32 Reduction In the preceding section, we have introduced the assignment of proof terms to natural deductions If proofs are programs then we need to explain how proofs are to be executed,

More information

Entailment with Conditional Equality Constraints (Extended Version)

Entailment with Conditional Equality Constraints (Extended Version) Entailment with Conditional Equality Constraints (Extended Version) Zhendong Su Alexander Aiken Report No. UCB/CSD-00-1113 October 2000 Computer Science Division (EECS) University of California Berkeley,

More information

Stagnation proofness and individually monotonic bargaining solutions. Jaume García-Segarra Miguel Ginés-Vilar 2013 / 04

Stagnation proofness and individually monotonic bargaining solutions. Jaume García-Segarra Miguel Ginés-Vilar 2013 / 04 Stagnation proofness and individually monotonic bargaining solutions Jaume García-Segarra Miguel Ginés-Vilar 2013 / 04 Stagnation proofness and individually monotonic bargaining solutions Jaume García-Segarra

More information

Lecture Notes on Linear Logic

Lecture Notes on Linear Logic Lecture Notes on Linear Logic 15-816: Modal Logic Frank Pfenning Lecture 23 April 20, 2010 1 Introduction In this lecture we will introduce linear logic [?] in its judgmental formulation [?,?]. Linear

More information

Lecture Notes: Axiomatic Semantics and Hoare-style Verification

Lecture Notes: Axiomatic Semantics and Hoare-style Verification Lecture Notes: Axiomatic Semantics and Hoare-style Verification 17-355/17-665/17-819O: Program Analysis (Spring 2018) Claire Le Goues and Jonathan Aldrich clegoues@cs.cmu.edu, aldrich@cs.cmu.edu It has

More information

The first order quasi-linear PDEs

The first order quasi-linear PDEs Chapter 2 The first order quasi-linear PDEs The first order quasi-linear PDEs have the following general form: F (x, u, Du) = 0, (2.1) where x = (x 1, x 2,, x 3 ) R n, u = u(x), Du is the gradient of u.

More information

Lebesgue measure and integration

Lebesgue measure and integration Chapter 4 Lebesgue measure and integration If you look back at what you have learned in your earlier mathematics courses, you will definitely recall a lot about area and volume from the simple formulas

More information

Explicit Logics of Knowledge and Conservativity

Explicit Logics of Knowledge and Conservativity Explicit Logics of Knowledge and Conservativity Melvin Fitting Lehman College, CUNY, 250 Bedford Park Boulevard West, Bronx, NY 10468-1589 CUNY Graduate Center, 365 Fifth Avenue, New York, NY 10016 Dedicated

More information

General Recipe for Constant-Coefficient Equations

General Recipe for Constant-Coefficient Equations General Recipe for Constant-Coefficient Equations We want to look at problems like y (6) + 10y (5) + 39y (4) + 76y + 78y + 36y = (x + 2)e 3x + xe x cos x + 2x + 5e x. This example is actually more complicated

More information

Introduction to Metalogic

Introduction to Metalogic Philosophy 135 Spring 2008 Tony Martin Introduction to Metalogic 1 The semantics of sentential logic. The language L of sentential logic. Symbols of L: Remarks: (i) sentence letters p 0, p 1, p 2,... (ii)

More information

Metainduction in Operational Set Theory

Metainduction in Operational Set Theory Metainduction in Operational Set Theory Luis E. Sanchis Department of Electrical Engineering and Computer Science Syracuse University Syracuse, NY 13244-4100 Sanchis@top.cis.syr.edu http://www.cis.syr.edu/

More information

Data Compression Techniques

Data Compression Techniques Data Compression Techniques Part 1: Entropy Coding Lecture 4: Asymmetric Numeral Systems Juha Kärkkäinen 08.11.2017 1 / 19 Asymmetric Numeral Systems Asymmetric numeral systems (ANS) is a recent entropy

More information

Chapter 1. Logic and Proof

Chapter 1. Logic and Proof Chapter 1. Logic and Proof 1.1 Remark: A little over 100 years ago, it was found that some mathematical proofs contained paradoxes, and these paradoxes could be used to prove statements that were known

More information

Axiomatic Semantics. Stansifer Ch 2.4, Ch. 9 Winskel Ch.6 Slonneger and Kurtz Ch. 11 CSE

Axiomatic Semantics. Stansifer Ch 2.4, Ch. 9 Winskel Ch.6 Slonneger and Kurtz Ch. 11 CSE Axiomatic Semantics Stansifer Ch 2.4, Ch. 9 Winskel Ch.6 Slonneger and Kurtz Ch. 11 CSE 6341 1 Outline Introduction What are axiomatic semantics? First-order logic & assertions about states Results (triples)

More information

Introduction to Basic Proof Techniques Mathew A. Johnson

Introduction to Basic Proof Techniques Mathew A. Johnson Introduction to Basic Proof Techniques Mathew A. Johnson Throughout this class, you will be asked to rigorously prove various mathematical statements. Since there is no prerequisite of a formal proof class,

More information

for average case complexity 1 randomized reductions, an attempt to derive these notions from (more or less) rst

for average case complexity 1 randomized reductions, an attempt to derive these notions from (more or less) rst On the reduction theory for average case complexity 1 Andreas Blass 2 and Yuri Gurevich 3 Abstract. This is an attempt to simplify and justify the notions of deterministic and randomized reductions, an

More information

Class Meeting # 1: Introduction to PDEs

Class Meeting # 1: Introduction to PDEs MATH 18.152 COURSE NOTES - CLASS MEETING # 1 18.152 Introduction to PDEs, Spring 2017 Professor: Jared Speck Class Meeting # 1: Introduction to PDEs 1. What is a PDE? We will be studying functions u =

More information

Designing Information Devices and Systems I Spring 2018 Lecture Notes Note Introduction to Linear Algebra the EECS Way

Designing Information Devices and Systems I Spring 2018 Lecture Notes Note Introduction to Linear Algebra the EECS Way EECS 16A Designing Information Devices and Systems I Spring 018 Lecture Notes Note 1 1.1 Introduction to Linear Algebra the EECS Way In this note, we will teach the basics of linear algebra and relate

More information

Math Precalculus I University of Hawai i at Mānoa Spring

Math Precalculus I University of Hawai i at Mānoa Spring Math 135 - Precalculus I University of Hawai i at Mānoa Spring - 2013 Created for Math 135, Spring 2008 by Lukasz Grabarek and Michael Joyce Send comments and corrections to lukasz@math.hawaii.edu Contents

More information

Lecture Notes on Compositional Reasoning

Lecture Notes on Compositional Reasoning 15-414: Bug Catching: Automated Program Verification Lecture Notes on Compositional Reasoning Matt Fredrikson Ruben Martins Carnegie Mellon University Lecture 4 1 Introduction This lecture will focus on

More information

Calculus and linear algebra for biomedical engineering Week 3: Matrices, linear systems of equations, and the Gauss algorithm

Calculus and linear algebra for biomedical engineering Week 3: Matrices, linear systems of equations, and the Gauss algorithm Calculus and linear algebra for biomedical engineering Week 3: Matrices, linear systems of equations, and the Gauss algorithm Hartmut Führ fuehr@matha.rwth-aachen.de Lehrstuhl A für Mathematik, RWTH Aachen

More information

On Computational Interpretations of the Modal Logic S4. Jean Goubault-Larrecq.

On Computational Interpretations of the Modal Logic S4. Jean Goubault-Larrecq. On Computational Interpretations of the Modal Logic S4 II. The evq-calculus Jean Goubault-Larrecq Institut fur Logik, Komplexitat und Deduktionssysteme Universitat Karlsruhe, Am Fasanengarten 5, D-7618

More information

MATH The Chain Rule Fall 2016 A vector function of a vector variable is a function F: R n R m. In practice, if x 1, x n is the input,

MATH The Chain Rule Fall 2016 A vector function of a vector variable is a function F: R n R m. In practice, if x 1, x n is the input, MATH 20550 The Chain Rule Fall 2016 A vector function of a vector variable is a function F: R n R m. In practice, if x 1, x n is the input, F(x 1,, x n ) F 1 (x 1,, x n ),, F m (x 1,, x n ) where each

More information

Prefixed Tableaus and Nested Sequents

Prefixed Tableaus and Nested Sequents Prefixed Tableaus and Nested Sequents Melvin Fitting Dept. Mathematics and Computer Science Lehman College (CUNY), 250 Bedford Park Boulevard West Bronx, NY 10468-1589 e-mail: melvin.fitting@lehman.cuny.edu

More information

Proof Techniques (Review of Math 271)

Proof Techniques (Review of Math 271) Chapter 2 Proof Techniques (Review of Math 271) 2.1 Overview This chapter reviews proof techniques that were probably introduced in Math 271 and that may also have been used in a different way in Phil

More information

Unifying Theories of Programming

Unifying Theories of Programming 1&2 Unifying Theories of Programming Unifying Theories of Programming 3&4 Theories Unifying Theories of Programming designs predicates relations reactive CSP processes Jim Woodcock University of York May

More information

CHAPTER 7: TECHNIQUES OF INTEGRATION

CHAPTER 7: TECHNIQUES OF INTEGRATION CHAPTER 7: TECHNIQUES OF INTEGRATION DAVID GLICKENSTEIN. Introduction This semester we will be looking deep into the recesses of calculus. Some of the main topics will be: Integration: we will learn how

More information

CPSC 320 Sample Solution, Reductions and Resident Matching: A Residentectomy

CPSC 320 Sample Solution, Reductions and Resident Matching: A Residentectomy CPSC 320 Sample Solution, Reductions and Resident Matching: A Residentectomy August 25, 2017 A group of residents each needs a residency in some hospital. A group of hospitals each need some number (one

More information

with the ability to perform a restricted set of operations on quantum registers. These operations consist of state preparation, some unitary operation

with the ability to perform a restricted set of operations on quantum registers. These operations consist of state preparation, some unitary operation Conventions for Quantum Pseudocode LANL report LAUR-96-2724 E. Knill knill@lanl.gov, Mail Stop B265 Los Alamos National Laboratory Los Alamos, NM 87545 June 1996 Abstract A few conventions for thinking

More information

Ordinary Differential Equations Prof. A. K. Nandakumaran Department of Mathematics Indian Institute of Science Bangalore

Ordinary Differential Equations Prof. A. K. Nandakumaran Department of Mathematics Indian Institute of Science Bangalore Ordinary Differential Equations Prof. A. K. Nandakumaran Department of Mathematics Indian Institute of Science Bangalore Module - 3 Lecture - 10 First Order Linear Equations (Refer Slide Time: 00:33) Welcome

More information

Critical Reading of Optimization Methods for Logical Inference [1]

Critical Reading of Optimization Methods for Logical Inference [1] Critical Reading of Optimization Methods for Logical Inference [1] Undergraduate Research Internship Department of Management Sciences Fall 2007 Supervisor: Dr. Miguel Anjos UNIVERSITY OF WATERLOO Rajesh

More information

Math 4606, Summer 2004: Inductive sets, N, the Peano Axioms, Recursive Sequences Page 1 of 10

Math 4606, Summer 2004: Inductive sets, N, the Peano Axioms, Recursive Sequences Page 1 of 10 Math 4606, Summer 2004: Inductive sets, N, the Peano Axioms, Recursive Sequences Page 1 of 10 Inductive sets (used to define the natural numbers as a subset of R) (1) Definition: A set S R is an inductive

More information

Mathematics Research Report No. MRR 003{96, HIGH RESOLUTION POTENTIAL FLOW METHODS IN OIL EXPLORATION Stephen Roberts 1 and Stephan Matthai 2 3rd Febr

Mathematics Research Report No. MRR 003{96, HIGH RESOLUTION POTENTIAL FLOW METHODS IN OIL EXPLORATION Stephen Roberts 1 and Stephan Matthai 2 3rd Febr HIGH RESOLUTION POTENTIAL FLOW METHODS IN OIL EXPLORATION Stephen Roberts and Stephan Matthai Mathematics Research Report No. MRR 003{96, Mathematics Research Report No. MRR 003{96, HIGH RESOLUTION POTENTIAL

More information

Linear Algebra II. 2 Matrices. Notes 2 21st October Matrix algebra

Linear Algebra II. 2 Matrices. Notes 2 21st October Matrix algebra MTH6140 Linear Algebra II Notes 2 21st October 2010 2 Matrices You have certainly seen matrices before; indeed, we met some in the first chapter of the notes Here we revise matrix algebra, consider row

More information

Chapter 1. Comparison-Sorting and Selecting in. Totally Monotone Matrices. totally monotone matrices can be found in [4], [5], [9],

Chapter 1. Comparison-Sorting and Selecting in. Totally Monotone Matrices. totally monotone matrices can be found in [4], [5], [9], Chapter 1 Comparison-Sorting and Selecting in Totally Monotone Matrices Noga Alon Yossi Azar y Abstract An mn matrix A is called totally monotone if for all i 1 < i 2 and j 1 < j 2, A[i 1; j 1] > A[i 1;

More information

Lecture Notes on The Curry-Howard Isomorphism

Lecture Notes on The Curry-Howard Isomorphism Lecture Notes on The Curry-Howard Isomorphism 15-312: Foundations of Programming Languages Frank Pfenning Lecture 27 ecember 4, 2003 In this lecture we explore an interesting connection between logic and

More information

Online Appendixes for \A Theory of Military Dictatorships"

Online Appendixes for \A Theory of Military Dictatorships May 2009 Online Appendixes for \A Theory of Military Dictatorships" By Daron Acemoglu, Davide Ticchi and Andrea Vindigni Appendix B: Key Notation for Section I 2 (0; 1): discount factor. j;t 2 f0; 1g:

More information

Mathematics 102 Fall 1999 The formal rules of calculus The three basic rules The sum rule. The product rule. The composition rule.

Mathematics 102 Fall 1999 The formal rules of calculus The three basic rules The sum rule. The product rule. The composition rule. Mathematics 02 Fall 999 The formal rules of calculus So far we have calculated the derivative of each function we have looked at all over again from scratch, applying what is essentially the definition

More information

1.4 Techniques of Integration

1.4 Techniques of Integration .4 Techniques of Integration Recall the following strategy for evaluating definite integrals, which arose from the Fundamental Theorem of Calculus (see Section.3). To calculate b a f(x) dx. Find a function

More information

COSTLY STATE VERIFICATION - DRAFT

COSTLY STATE VERIFICATION - DRAFT COSTLY STTE VERIFICTION - DRFT SIMCH BRKI I am writing this note in order to explain under what circumstances debt contracts are optimal; more importantly, to explain why debt contracts are optimal. For

More information

Consequence Relations and Natural Deduction

Consequence Relations and Natural Deduction Consequence Relations and Natural Deduction Joshua D. Guttman Worcester Polytechnic Institute September 9, 2010 Contents 1 Consequence Relations 1 2 A Derivation System for Natural Deduction 3 3 Derivations

More information

5. The Logical Framework

5. The Logical Framework 5. The Logical Framework (a) Judgements. (b) Basic form of rules. (c) The non-dependent function type and product. (d) Structural rules. (Omitted 2008). (e) The dependent function set and -quantification.

More information

Tree sets. Reinhard Diestel

Tree sets. Reinhard Diestel 1 Tree sets Reinhard Diestel Abstract We study an abstract notion of tree structure which generalizes treedecompositions of graphs and matroids. Unlike tree-decompositions, which are too closely linked

More information

Chapter 2. Mathematical Reasoning. 2.1 Mathematical Models

Chapter 2. Mathematical Reasoning. 2.1 Mathematical Models Contents Mathematical Reasoning 3.1 Mathematical Models........................... 3. Mathematical Proof............................ 4..1 Structure of Proofs........................ 4.. Direct Method..........................

More information

Expressive Power, Mood, and Actuality

Expressive Power, Mood, and Actuality Expressive Power, Mood, and Actuality Rohan French Abstract In Wehmeier (2004) we are presented with the subjunctive modal language, a way of dealing with the expressive inadequacy of modal logic by marking

More information