Step-indexed models of call-by-name: a tutorial example

Aleš Bizjak and Lars Birkedal
Aarhus University
{abizjak,birkedal}@cs.au.dk

June 19, 2014

Abstract

In this tutorial paper we show how to construct a step-indexed logical relation for a call-by-name programming language with recursive types and show that it is complete with respect to contextual equivalence. We then show how the same constructions can be used to define a step-indexed model, in the standard categorical sense, of the language. We hope that this will make step-indexed techniques more readily available for researchers interested in call-by-name or call-by-need based languages (such as Haskell) and make it clear that step-indexed models can indeed be seen as (operationally-based) models in the technical sense (and not only as a reasoning technique for reasoning about contextual equivalence).

1 Introduction

In recent years, step-indexed logical relations have proved to be useful for reasoning about contextual equivalence for programming languages with advanced features, such as recursive types, impredicative polymorphism, and general references. The techniques have been almost exclusively developed for call-by-value programming languages, e.g. [3, 1, 2, 4, 8]; we are only aware of one simple application to a call-by-name programming language [10], where the model is unary and used to prove type soundness.

In this tutorial paper we show how to construct a step-indexed model of a call-by-name programming language with recursive types. Moreover, we also show how to define a step-indexed model, in the standard categorical sense, of the language. We hope that this (1) will make step-indexed techniques more readily available for researchers interested in call-by-name or call-by-need based languages (such as Haskell); and (2) will make it clear that step-indexed models can indeed be seen as (operationally-based) models in the technical sense (and not only as a reasoning technique for reasoning about contextual equivalence). We hope that this tutorial may complement the recent tutorial paper by

Pitts [12], who explains how to define a step-indexed model for a call-by-value language (but does not consider categorical models).

We now explain the contents of the paper in a bit more detail. We call the programming language PCF µ since it is a variant of PCF extended with recursive types. We use step-indexing both to model fixed points of terms and to interpret recursive types. Building on recent ideas of Hoshino [9] for call-by-value, we not only develop an interpretation of the types of PCF µ using a logical relation, but actually show how to use step-indexing to define a category with the requisite properties to model PCF µ. Hoshino shows that his model for a call-by-value language is adequate; here we prove not only adequacy, but also that the model is fully abstract. (We discuss why that is in Section 5.)

Earlier work [7] has combined step-indexed logical relations with biorthogonality in order to (1) obtain completeness with respect to contextual equivalence, and (2) model control features, such as call-cc. (See also the tutorial by Pitts [12].) Here we also use biorthogonality, both for completeness (full abstraction) and because it makes it simpler to define the model, in particular to ensure that we only observe computation to a value of ground type.

In Section 2 we define the PCF µ language and state some simple properties of the operational semantics. In Section 3 we define a step-indexed logical relations interpretation of the types of PCF µ and show that it provides a sound and complete method for reasoning about contextual equivalence of programs in PCF µ. In Section 4 we define the category C and prove that it yields a fully abstract model. The objects of C are indexed collections of relations on closed terms of PCF µ. Morphisms from X to Y are certain equivalence classes of open terms that, roughly speaking, for any step-index, take related computations in X to related computations in Y. The notion of computation is defined using biorthogonality. The equivalence relation is inspired by the one in [9] and is defined by taking the transitive closure of the quasi-reflexive symmetric interior of the step-indexed relation defining when a term contextually approximates another one. We prove that the category C is cartesian closed and has fixed points of all endomorphisms. Hence it soundly interprets PCF [6]. We prove that it also interprets recursive types. The adequacy of the model is proved straightforwardly; in particular there is no need to use another logical relation to relate the model to the operational semantics (as for standard denotational models [13]), since the model is defined using the operational semantics. Finally, we prove full abstraction. The proof follows essentially the same lines as the proof of full abstraction in [8], and relies on the use of biorthogonality. It is also crucial that the objects of C consist of relations on well-typed terms and that the evaluation contexts used in the definition of biorthogonality are also well-typed. (See the discussion in Section 5.) We conclude the paper with a brief discussion in Section 5.

The prerequisites for reading this paper are mild. For the first part on the logical relation, readers are expected to be familiar with call-by-name operational semantics, simple type systems and proving properties of such using

t def = x tt ff if (t, t, t) fst t snd t t, t t t λ x. t fold (t) unfold (t) e def = [ ] fst e snd e e t if (e, t, t) unfold (e) C def = [ ] fst C snd C C, t t, C C t t C λ x. C if (C, t, t) if (t, C, t) if (t, t, C) unfold (C) fold (C) Figure 1: Terms and evaluation contexts of PCF µ. standard logical relations arguments. For the second part on the categorical model, readers are expected to understand what a cartesian closed category is and that PCF can be interpreted in such a category with fixed points of all endomorphisms. (The second part can be skipped by readers only interested in understanding how the step-indexed logical relation can be used as a proof method for reasoning about contextual equivalence.) 2 The language We consider a call-by-name variant of PCF with recursive types and with booleans and the unit type as the only ground types. Assuming a countably infinite supply of term variables, represented by the meta-variable x, we define terms t, evaluation contexts e and contexts C using the rules in Figure 1. Observe that there is no type information in the terms and that we don t include an explicit fixed point combinator as we will be able to define it as a typeable term, since we have recursive types. We denote by T the set of closed terms of PCF µ, by E the set of evaluation contexts and by C the set of contexts. Let s [ t / x ] denote the capture-avoiding substitution of term t for variable x in term s (if x is the only free variable in s, we will also write this as s(t)). We define the evaluation relation, T T, and the reduction relation, T T, in Figure 2 ( is the reflexive and transitive closure of ). Let U = { (e[unfold (fold (t))], e[t]) t T, e E }. We call the reductions in U unfold-fold reductions and define 1 def = ( \U) U ( \U) Thus t 1 t if t t and exactly one of the reductions is an unfold-fold reduction. Let N be the set of natural numbers {1, 2, 3,...}. For n N we define the 3

if (tt, t, t ) t if (ff, t, t ) t fst t, t t snd t, t t (λ x. s) t s [ t / x ] unfold (fold (t)) t t t e[t] e[t ] def = n=0 n Figure 2: Evaluation relations. We assume that t, t T, x is a variable, s is a term with at most x free and e E. The relation is deterministic. n-step reduction relation n T T as n 1 n = ( \U) i.e., t n t when t t and at most n 1 reductions are unfold-fold reductions. 1 The reason for counting only unfold-fold reductions in n will become apparent later, but the essence is that this provides some additional freedom which is convenient when working with the logical relation in Section 3 and crucial for establishing some of the properties of the model in Section 4. Assuming a countably infinite supply of type variables, distinct from term variables and denoted by the meta-variable α, we define the types τ in Figure 3, the typing judgment for terms in Figure 4, for evaluation contexts in Figure 5 and for contexts in Figure 6. The typing judgment for evaluation contexts, e τ σ, means that for any closed term t of type τ, e[t] is a closed term of type σ and the judgment C : ( Γ ) ( ) σ τ means that if Γ t : σ then C[t] : τ. We denote by T be the set of closed types. Using these we define in Figure 7 for each closed type τ and context Γ the set of terms of type τ in contexts Γ, T (Γ τ), and the set of closed terms of type τ, T τ. For a pair of closed types τ, σ we define the set of typeable evaluation contexts, E (τ σ), and when σ = 2 we write G (τ) for E (τ σ) (ground evaluation contexts). For a context Γ and a pair of types τ, σ we define a set of closing contexts C (Γ, τ σ) and for a context Γ we define the set of closing substitutions of S (Γ). Let TIR 2 be the set of quadruples { (Γ, t, t, σ) Γ t : σ Γ t : σ }. i=0 1 i 1 The reason for allowing only strictly less than n unfold-fold reductions in t n t is that it slightly simplifies later definitions. We could also allow n unfold-fold reductions, but then the definition of the interpretation of recursive types in Figure 9 would have to be altered so that the Lemma 14 would still hold. 2 Type Iindexed Rrelations 4
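To make the operational setup concrete, here is a small, self-contained Haskell sketch (ours, not part of the paper) of the term syntax of Figure 1 together with one-step call-by-name reduction and an evaluator that counts only unfold-fold reductions, mirroring the n-step relation defined above. All names (Term, step, evalCount, omega) are illustrative, and the fuel parameter is an implementation convenience rather than anything in the paper.

```haskell
-- A minimal sketch (ours) of PCF_mu terms and call-by-name reduction,
-- counting only unfold-fold steps as in the n-step relation above.
module PCFMu where

data Term
  = Var String
  | Unit                      -- the unit value of type 1
  | TT | FF                   -- boolean constants
  | If Term Term Term
  | Pair Term Term
  | Fst Term
  | Snd Term
  | Lam String Term
  | App Term Term
  | Fold Term
  | Unfold Term
  deriving (Eq, Show)

-- Substitution s[t/x]; we only ever substitute closed terms t here, so no
-- capture-avoiding renaming is needed in this sketch.
subst :: String -> Term -> Term -> Term
subst x t = go
  where
    go (Var y) | y == x    = t
               | otherwise = Var y
    go (Lam y b) | y == x    = Lam y b            -- x is shadowed
                 | otherwise = Lam y (go b)
    go (If c a b)  = If (go c) (go a) (go b)
    go (Pair a b)  = Pair (go a) (go b)
    go (Fst a)     = Fst (go a)
    go (Snd a)     = Snd (go a)
    go (App a b)   = App (go a) (go b)
    go (Fold a)    = Fold (go a)
    go (Unfold a)  = Unfold (go a)
    go v           = v                            -- Unit, TT, FF

-- One call-by-name step; the Bool flags an unfold-fold reduction.
step :: Term -> Maybe (Term, Bool)
step (If TT a _)       = Just (a, False)
step (If FF _ b)       = Just (b, False)
step (If c a b)        = fmap (\(c', u) -> (If c' a b, u)) (step c)
step (Fst (Pair a _))  = Just (a, False)
step (Fst a)           = fmap (\(a', u) -> (Fst a', u)) (step a)
step (Snd (Pair _ b))  = Just (b, False)
step (Snd a)           = fmap (\(a', u) -> (Snd a', u)) (step a)
step (App (Lam x s) t) = Just (subst x t s, False)
step (App a t)         = fmap (\(a', u) -> (App a' t, u)) (step a)
step (Unfold (Fold t)) = Just (t, True)           -- the only counted step
step (Unfold a)        = fmap (\(a', u) -> (Unfold a', u)) (step a)
step _                 = Nothing                  -- a value, or stuck

-- Reduce to a value within the given fuel bound, returning the value and the
-- number of unfold-fold reductions performed (cf. t ->^n t': at most n-1).
evalCount :: Int -> Term -> Maybe (Term, Int)
evalCount fuel t = case step t of
  Nothing -> Just (t, 0)
  Just (t', u)
    | fuel <= 0 -> Nothing
    | otherwise -> do
        (v, k) <- evalCount (fuel - 1) t'
        return (v, if u then k + 1 else k)

-- The divergent term Omega from Section 4: it loops, performing one
-- unfold-fold reduction per cycle, so evalCount never succeeds on it.
omega :: Term
omega = App w (Fold w)
  where w = Lam "x" (App (Unfold (Var "x")) (Var "x"))
```

For instance, evalCount 1000 (Fst (Pair TT omega)) returns Just (TT, 0): under call-by-name the diverging second component of the pair is never forced.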

τ def = α 1 2 τ τ τ τ µ α.τ Figure 3: Types of the language. x : τ dom Γ Γ x : τ Γ tt : 2 Γ ff : 2 Γ t : 2 Γ s : τ Γ r : τ Γ if (t, s, r) : τ Γ : 1 Γ t : τ σ Γ s : τγ t : σ Γ s : τ Γ t s : σ Γ t, s : σ τ Γ, x : τ t : σ Γ λ x. t : τ σ Γ t : σ τ Γ fst t : σ Γ t : σ τ Γ snd t : τ Γ t : τ [ µ α.τ / α ] Γ fold (t) : µ α.τ Γ t : µ α.τ Γ unfold (t) : τ [ µ α.τ / α ] Figure 4: Standard typing judgment. We assume that all types declared in Γ are closed. τ τ e τ σ ρ fst e τ σ e τ σ ρ snd e τ ρ e τ σ ρ e t τ ρ t : σ e τ 2 t : σ t : σ if (e, t, t ) τ σ e τ µ α.σ unfold (e) τ σ [ µ α.σ / α ] Figure 5: Typing judgment for evaluation contexts. 5

: ( Γ ) ( ) C : ( ) ( ) τ Γ σ δ τ Γ τ fst C : ( ) ( C ) : ( ) ( ) τ Γ σ δ τ Γ σ snd C : ( ) ( ) τ Γ δ C : ( ) ( ) τ Γ σ Γ t : δ C, t : ( ) ( ) τ Γ σ δ C : ( ) ( ) τ Γ δ Γ t : σ t, C : ( ) ( ) τ Γ σ δ C : ( ) ( ) τ Γ δ σ Γ t : δ C t : ( ) ( ) τ Γ σ C : ( τ ) ( Γ δ ) Γ t : δ σ t C : ( ) ( ) τ Γ σ C : (, x : σ τ ) ( Γ, x : σ δ ) λ x. C : (, x : σ τ ) ( Γ σ δ ) C : ( τ ) ( Γ µ α.σ ) unfold (C) : ( τ ) ( Γ σ [ µ α.σ / α ]) C : ( ) ( [ / ]) τ Γ σ µ α.σ α fold (C) : ( ) ( C ) : ( ) ( ) τ Γ 2 Γ t : δ τ Γ µ α.σ if (C, t, t ) : ( ) ( ) τ Γ δ Γ t : δ Γ t : 2 C : ( ) ( ) τ Γ δ Γ t : δ if (t, C, t ) : ( τ ) ( Γ δ ) Γ t : 2 Γ t : δ C : ( τ ) ( Γ δ ) if (t, t, C) : ( τ ) ( Γ δ ) Figure 6: Typing judgment for contexts. The interesting rule is the rule for function abstraction where a variable is bound by the λ. All other rules are straightforward and follow the structure of contexts. 6
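As a worked example (ours, in LaTeX notation) of how the fold and unfold rules interact, the following derivation shows why no primitive fixed-point combinator is needed: writing σ for µα.(α → τ) with τ closed, the combinator Y(f) = (λz. f (unfold(z) z)) (fold(λz. f (unfold(z) z))) used in Section 4 is typeable at τ whenever Γ ⊢ f : τ → τ. Omitting f in the same derivation types the divergent term Ω of Section 4 at any closed type.

```latex
% Worked derivation (ours), using only the rules of Figure 4.
% Fix a closed type \tau, abbreviate \sigma := \mu\alpha.\,(\alpha \to \tau),
% and assume \Gamma \vdash f : \tau \to \tau.
\begin{align*}
  \Gamma, z{:}\sigma &\vdash \mathrm{unfold}(z) :
      (\alpha\to\tau)[\sigma/\alpha] = \sigma \to \tau
      && \text{(unfold)}\\
  \Gamma, z{:}\sigma &\vdash \mathrm{unfold}(z)\,z : \tau
      && \text{(application)}\\
  \Gamma, z{:}\sigma &\vdash f\,(\mathrm{unfold}(z)\,z) : \tau
      && \text{(application)}\\
  \Gamma &\vdash \lambda z.\,f\,(\mathrm{unfold}(z)\,z) : \sigma \to \tau
      && \text{(abstraction)}\\
  \Gamma &\vdash \mathrm{fold}\big(\lambda z.\,f\,(\mathrm{unfold}(z)\,z)\big) : \sigma
      && \text{(fold, since } (\alpha\to\tau)[\sigma/\alpha] = \sigma\to\tau\text{)}\\
  \Gamma &\vdash \big(\lambda z.\,f\,(\mathrm{unfold}(z)\,z)\big)\;
                 \mathrm{fold}\big(\lambda z.\,f\,(\mathrm{unfold}(z)\,z)\big) : \tau
      && \text{(application)}
\end{align*}
```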

T (Γ τ) def = { t Γ t : τ } T τ def = T ( τ) E (τ σ) def = { e E e τ σ } G (τ) def = { e E e τ 2 } S ( ) = { } S (Γ, x : τ) = { γ[x t] γ S (Γ), t Tτ } C (Γ, τ σ) def = { C C C : ( Γ τ ) ( σ )}. Figure 7: The set of terms of type τ in context Γ, T (Γ τ), the set of closed terms of type τ, T τ, the set of evaluation contexts with hole of type τ and result of type σ, E (τ σ), the set of ground evaluation contexts of type τ, G (τ) and the set of closing contexts with a hole of type τ binding variables declared in Γ resulting in term of type σ, C (Γ, τ σ). S (Γ) is the pointwise extension of T to contexts and is defined inductively on Γ. We say that R is a type-indexed relation if R TIR and in this case write Γ t R t : σ for (Γ, t, t, σ) R. A type-indexed relation R is adequate if for all t, t T 2 t R t : 2 implies t tt t tt, reflexive if Γ t : σ implies Γ t R t : σ, transitive if Γ t R t : σ and Γ t R t : σ imply Γ t R t : σ precongruence if it is reflexive, transitive and closed under the rules in Figure 8. We call the rules in Figure 8 compatibility rules and a type-indexed relation satisfying them compatible. The compatibility rules follow the typing rules and are precisely the rules we need and expect for modular reasoning. Following [11] we define two notions of approximation and equivalence, contextual approximation and equivalence and CIU approximation and equivalence. Definition 1 (contextual approximation and equivalence). Γ s ctx t : τ def = C C (Γ, τ 2), C [s] tt C [t] tt Γ s ctx t : τ def = Γ s ctx t : τ Γ t ctx s : τ Definition 2 (CIU approximation and equivalence). Γ s ciu t : τ def = e G (τ), γ S (Γ), e [γ(s)] tt e [γ(t)] tt Γ s ciu t : τ def = Γ s ciu t : τ Γ t ciu s : τ 7

Γ tt R tt : 2 Γ ff R ff : 2 Γ R : 1 x : τ Γ Γ x R x : τ Γ t R t : σ δ Γ fst t R fst t : σ Γ t R t : σ δ Γ snd t R snd t : δ Γ t R t : σ Γ s R s : δ Γ t R t : σ δ Γ s R s : σ Γ t, s R t, s : σ δ Γ t s R t s : δ Γ, x : σ t R t : δ Γ λ x. t R λ x. t : σ δ Γ t R t : µ α.σ Γ unfold (t) R unfold (t ) : σ [ µ α.σ / α ] Γ t R t : σ [ µ α.σ / α ] Γ fold (t) R fold (t ) : µ α.σ Γ p R p : 2 Γ t R t : τ Γ s R s : τ Γ if (p, t, s) R if (p, t, s ) : τ Figure 8: Compatibility rules for a type-indexed relation R. Contextual equivalence says that we consider two terms equivalent if we may safely interchange them when part of any complete program. This is the notion of equivalence we need for proving, for instance, correctness of compiler optimizations. Closed Instantiation of Use equivalence says that we consider two terms equivalent if they behave in the same way when used (put into an evaluation position in a term). Informally, if a term in a context is never used, that is, evaluated, then it can be safely replaced by any other term, whereas if it is used, then it will be used under some evaluation context. Nevertheless, it is not obvious that these two notions of equivalence (or approximation) coincide. We prove this as Theorem 18 by constructing another notion of approximation. The two notions of equivalence are useful for proving different properties. For example, it is easy to see, Lemma 19, that CIU-equivalence is closed under reductions, but seeing that CIU-equivalence is compatible is non-trivial, whereas, for example, it is easy to see that contextual equivalence is a precongruence, but it is very difficult to show directly from the definition that contextual equivalence is closed under reductions. It is not difficult to prove that contextual approximation is an adequate precongruence and in fact it is the largest such. Proposition 3. If R is an adequate precongruence then for all terms t, t T (Γ σ), Γ t R t : σ implies Γ t ctx t : σ. Proof sketch. Let C C (Γ, σ 2). As R is a precongruence, C[t] R C[t ] : 2. The fact that R is also adequate concludes the proof. 8
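Since CIU approximation quantifies over all ground evaluation contexts and all closing substitutions, it cannot be checked mechanically, but a candidate can be refuted on a finite sample. The Haskell sketch below (ours, reusing the hypothetical Term, subst and evalCount definitions from the evaluator sketch above) tests the defining implication against user-supplied contexts and substitutions; because evaluation is fuel-bounded, a negative answer points at a likely counterexample, while a positive answer only means that no counterexample was found in the sample.

```haskell
-- A testing aid (ours).  A ground evaluation context e in G(tau) is a
-- plugging function, and a closing substitution gamma in S(Gamma) is an
-- association list from variables to closed terms.
type GroundCtx = Term -> Term
type Subst     = [(String, Term)]

applySubst :: Subst -> Term -> Term
applySubst gamma t = foldr (\(x, u) acc -> subst x u acc) t gamma

convergesToTT :: Int -> Term -> Bool
convergesToTT fuel t = case evalCount fuel t of
  Just (TT, _) -> True
  _            -> False

-- Check "e[gamma(s)] converges to tt implies e[gamma(t)] converges to tt"
-- for every sampled context e and substitution gamma, within the fuel bound.
ciuCandidate :: Int -> [GroundCtx] -> [Subst] -> Term -> Term -> Bool
ciuCandidate fuel ctxs gammas s t =
  and [ not (convergesToTT fuel (e (applySubst g s)))
          || convergesToTT fuel (e (applySubst g t))
      | e <- ctxs, g <- gammas ]

-- Example (closed terms, so the empty substitution suffices): under the
-- context if([.], tt, ff), the term fst <tt, omega> behaves like tt, since
-- call-by-name never forces the diverging second component.
exampleOK :: Bool
exampleOK = ciuCandidate 1000 [\h -> If h TT FF] [[]] (Fst (Pair TT omega)) TT
```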

2.1 Some properties of the syntax and the reduction relations We list here some syntactic properties used later. simple induction on the typing derivation. Lemma 4. If e E (τ σ ρ) then They are all proved by a (e = ρ = τ σ) ( f E (τ ρ), e = f[fst ]) ( f E (σ ρ), e = f[snd ]) Lemma 5. If e E (τ σ ρ) then (e = ρ = τ σ) ( t T τ, f E (σ ρ), e = f[ t]) Lemma 6. If e E (µ α.τ ρ) then (e = ρ = µ α.τ) ( f E ( τ [ µ α.τ / α ] ρ ), e = f[unfold ( )] ) 3 Logical relation Let C 0 be the set of decreasing sequences of binary relations on closed typeable terms, 3 C 0 def = { X τ : N P (T τ T τ ) τ T n N, Xτ (n) X τ (n + 1) }, and let G be the set of decreasing sequences of binary relations on typeable ground evaluation contexts, explicitly G def = { X τ : N P (G (τ) G (τ)) τ T n N, Xτ (n) X τ (n + 1) }. Each element of C 0 and G thus has a type associated with it and it relates only typeable terms or evaluation contexts of that type. We will often omit type indices of elements of C 0 and G when we have no explicit need for them. If X C 0 and (c, c ) X τ (n) we think of c and c as indistinguishable if we only have n computation steps available. The monotonicity condition, X τ (n) X τ (n+1), then states that if we have more computation steps available, we can distinguish more. We define two functions, ( ) : C 0 G and ( ) : G C 0, as follows X τ (n) def = X τ (n) def = { (e, e ) G (τ) G (τ) } i n, (v, v ) X τ (i), e[v] i tt e [v ] tt { } (t, t ) T τ T τ i n, (e, e ) X(i), e[t] i tt e [t ] tt. 3 The reason for subscript 0 in C 0 is that we intend to use this set as the set of object of the category C in Section 4. 9
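For reference, the two maps between C 0 and G can be spelled out as follows; this is a restatement in LaTeX notation reconstructed from the surrounding text, with the superscripts ⊤ and ⊥ being our own choice of symbols for the two maps.

```latex
% Restatement (our notation) of the two maps, for X in C_0 and Y in G at type tau.
\begin{align*}
  X_\tau^{\top}(n) &= \{\, (e,e') \in \mathcal{G}(\tau) \times \mathcal{G}(\tau)
      \mid \forall i \le n,\ \forall (v,v') \in X_\tau(i),\
          e[v] \Downarrow_i \mathrm{tt} \implies e'[v'] \Downarrow \mathrm{tt} \,\}\\
  Y_\tau^{\bot}(n) &= \{\, (t,t') \in \mathcal{T}_\tau \times \mathcal{T}_\tau
      \mid \forall i \le n,\ \forall (e,e') \in Y_\tau(i),\
          e[t] \Downarrow_i \mathrm{tt} \implies e'[t'] \Downarrow \mathrm{tt} \,\}
\end{align*}
```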

It is easy to see that the maps are well defined and we denote their composite from C 0 to C 0 by ( ). We refer to related elements in X (n), i.e. (c, c ) X (n), as computations. The quantification over lower indices in the definitions of maps might appear mysterious, but it is there simply to ensure that the constructions stay within C 0 and G, i.e. that the constructed sequence of relations is monotonically decreasing. Intuitively, this is again because we think of terms related at n to be that are indistinguishable if we only use n unfold-fold reductions available to test with (the unfold-fold reductions are the only reduction steps we count). We will often implicitly use the following lemma, expressing the fact that ( ) is a closure operator. Lemma 7. Let X, Y C 0. n N, X(n) X (n). If n N, X(n) Y (n) then n N, X (n) Y (n). ( X ) = X. We define constructions on C 0 which will be used for modeling the types of the language. B 2 (n) def = {(tt, tt), (ff, ff)} (X σ Y τ ) (n) def = { ( c, d, c, d ) (c, c ) Xσ (n) (d, d ) Yτ (n) } ( (Y σ ) )) n λ x. c : τ σ λ x. c (Xτ (n) def = (λ x. c, λ x. : τ σ c ) (d, d ) Xτ (i), (e, e ) Y σ (i), i=1 e[(λ x. c) d] i tt e [(λ x. c ) d ] tt The difference from the constructions used for modeling call-by-value languages is that the canonical elements of the product type in a call-by-name language are pairs of computations, rather than pairs of values. There is also a natural change in that the canonical elements of the function type have to map related computations to related computations, rather than related values to related computations. Having defined these, we define the interpretation of types in Figure 9. The interpretation is defined by induction on the index and type formally τ (n) is defined inductively on the lexicographic ordering of N T. We extend the interpretation of types to contexts as follows = n {(, )} Γ, x : τ = n {(γ[x c], γ [x c ]) (γ, γ ) Γ (n) (c, c ) τ (n)}. Note that related substitutions map variables to computations. This choice is forced by the definition of the interpretation of the arrow type (specifically 10

2 = n {(tt, tt), (ff, ff)} 1 = n {(, )} τ σ = τ σ τ σ = σ τ µ α.τ (1) = T µ α.τ T µ α.τ n µ α.τ (n + 1) = {(fold (c), fold (c )) (c, c ) τ [ µ α.τ / α ] } (i) i=1 Figure 9: Interpretation of types. the proof of compatibility for the case of lambda terms in Proposition 15). This is different from the situation in logical relations for call-by-value languages, where closing substitutions map variables to related values. Finally, we define the logical approximation relation as a subset of TIR as follows Γ t t : σ def = n N, (γ, γ ) Γ (n), (γ(t), γ (t )) σ (n). 3.1 Soundness with respect to contextual approximation We wish to show that Γ t t : σ implies Γ t ctx t : σ but we will show more. We will show that the logical, CIU and contextual approximations all coincide. We first state two lemmas which, although proved rather trivially, are nevertheless important and often used in later proofs. They show that we can do some reductions on related terms so that the terms are still related. Lemma 8. Let n N and (c, c ) X (n). If d c and d 1 c d 2, then (d, d 1 ) X (n) and (d, d 2 ) X (n). Lemma 9. Let n N and (c, c ) X (n). If d 1 c and d 1 c d 2, then (d, d 1 ) X (n + 1) and (d, d 2 ) X (n + 1). Another simple property of computations is that a diverging computation approximates any other computation of the same type. Lemma 10. Let X σ C 0 and t T σ. If t diverges then t T σ, n N, (t, t ) Xσ (n) The next four lemmas show how we can extend related evaluation contexts with elimination forms to form a new pair of related evaluation contexts. They are crucial for showing compatibility of elimination forms in Proposition 15. 11

Lemma 11. For all X C 0, n N, (e, e ) X (n), (c, c ) X(n), (d, d ) X(n), (e[if (, c, d)], e [if (, c, d )]) B (n). Proof. Let i n and (t, t ) B(i) and assume e[if (t, c, d)] i tt. We naturally consider two cases t = t = tt Then e[c] i tt and so, e [c ] tt and so e [if (t, c, d ) tt t = t = ff Then e[d] i tt and so, e [d ] tt and so e [if (t, c, d )] tt. Lemma 12. Let X, Y C 0 and n N. If (e, e ) X (n) then (e[fst ], e [fst ]) (X Y ) (n) and (e[snd ], e [snd ]) (Y X) (n). Proof. Notice that because we are in a call-by-name language, the two conclusions we have to prove are in fact completely symmetric, so we content ourselves in only proving the case for fst. Take i n and ( c, d, c, d ) (X Y ) (i) and assume e[fst c, d ] i tt. This means e[c] i tt. Using the assumptions (e, e ) X (n) X (i) and (c, c ) X (n) X (i), we have e [c ] tt, but this also means e [fst c, d ] tt, and we are done. Lemma 13. Let X, Y C 0 and n N. If (e, e ) Y (n) and (c, c ) X (n) then (e[ c], e [ c ]) ( Y X) (n). Proof. Take i n and (f, f ) ( Y X) (i) and assume e[f c] i tt. By construction i n, (b, b ) X (i) and (g, g ) Y (i), g[f b] i tt g [f b ] tt. If we instantiate this with (e, e ) and (c, c ) we get e [f c ] tt, which is exactly what we need. Lemma 14. If (e, e ) τ [ µ α.τ / α ] (n) then (e[unfold ( )], e [unfold ( )]) µ α.τ (n). Proof. Let 1 < i n and (fold (c), fold (c )) µ α.σ (i). By definition of the interpretation of recursive types this means (c, c ) τ [ µ α.τ / α ] (i 1). Assume e [unfold (fold (c))] i tt. This implies e[c] i-1 tt and so e [c ] tt and so e [unfold (fold (c ))] tt and we are done. For i = 1 the conclusion holds vacuously as for (c, c ) X(1), e[unfold (c)] 1 tt does not hold. 4 Proposition 15. The logical approximation relation is closed under all the rules in Figure 8. In particular, it is reflexive. 4 Here it is crucial that t n t means that t evaluates to t in strictly less than n unfold-fold reductions. 12

Proof. Note first that any relation that is closed under the rules in Figure 8 is necessarily reflexive. Most of the cases in the proof of compatibility for the logical approximation are straightforward. We only show a few representative ones here, to explain how the above lemmas are used for proving compatibility of elimination forms and how the compatibility of introduction forms is shown directly. The case for lambda abstraction. Suppose Γ, x : σ t t : δ. We must show Γ λ x. t λ x. t : σ δ. Let n N, (γ, γ ) Γ (n). We must show (γ(λ x. t), γ (λ x. t )) σ δ (n) but we will do even better and show (γ(λ x. t), γ (λ x. t )) σ δ (n). First, γ(λ x. t) = λ x. γ(t) and γ (λ x. t ) = λ x. γ (t ) so the terms are of the right form. Next, let i n, (c, c ) σ (i) and (e, e ) δ (i) and assume e[(λ x. γ(t)) c] i tt. This implies that e [(γ[x c]) (t)] i tt and as it is easily seen that (γ[x c], γ [x c ]) Γ, x : σ we use the assumption Γ, x : σ t t : δ and instantiate it with i and (γ[x c], γ [x c ]) to get ((γ[x c]) (t), (γ [x c ]) (t )) δ (i). This then implies e [(γ [x c ]) (t )] tt, which implies e [(λ x. γ (t )) c ] tt The case for application. Suppose Γ t t : σ δ and Γ s s : σ. We must show Γ t s t s : δ so take n N and (γ, γ ) Γ (n). We must then show (γ (t s), γ (t s )) δ (n). Unfolding the definitions, let i n, (e, e ) δ (i). Using Lemma 13 and the induction hypothesis we get (e[ γ(s)], e [ γ (s )]) σ δ (i) and also by induction hypothesis we have (γ(t), γ (t )) σ δ (n) σ δ (i). In the end, we have to show e[γ (t s)] i tt e [γ (t s )] tt and this now follows directly from what we showed, as substitution distributes over application. The case for fold. Suppose Γ t t : τ [ µ α.τ / α ]. We must show Γ fold (t) fold (t ) : µ α.τ so let n N, (γ, γ ) Γ (n). By assumption (γ(t), γ (t )) τ [ µ α.τ / α ] (n). Then by construction of µ α.τ we have (fold (γ(t)), fold (γ (t ))) µ α.τ (n+ 1) µ α.τ (n) which is exactly what was needed. The case for unfold. Suppose Γ t t : µ α.τ. We must show Γ unfold (t) unfold (t ) : τ [ µ α.τ / α ] so let n N, (γ, γ ) Γ (n), i n, (e, e ) τ [ µ α.τ / α ] (i). Lemma 14 shows (e[unfold ( )], e [unfold ( )]) µ α.τ (i). Combining this with the induction hypothesis we proceed exactly as in the case for application above. 13

Corollary 16. Let Γ = (x i : τ i ) n i=1 and assume Γ t : σ. Let k N and for i = 1, 2,..., n, (c i, c i ) τ i (k). Then ( t [ / ] n c i xi i=1, t [ ] c i/ n xi i=1) σ (k). Proof. Let γ = [x i c i ] n i=1 and γ = [x i c i ]n i=1. Then (γ, γ ) Γ (k). Proposition 15 shows Γ t t : σ and if we instantiate this with k and (γ, γ ) we get the desired conclusion. Proposition 17. For all terms t, t of type σ in context Γ, we have Γ t t : σ if and only if Γ t ciu t : σ. Proof. Assume Γ t t : σ. Let γ S (Γ) be a closing substitution and e G (σ). Assuming e[γ(t)] tt we have to show e[γ(t )] tt. Proposition 15 shows that n N, (γ, γ) Γ (n) and thus also that n N, (γ(t), γ(t )) σ (n). The same proposition then also shows that n N, (e[γ(t)], e[γ(t )]) 2 (n). The assumption that e[γ(t)] tt implies there exists a natural number k, such that e[γ(t)] k tt. It is easy to see directly from the definition that n N, (, ) 2 (n). Picking an n k we have e[γ(t )] tt, as required. Let n N, (γ, γ ) Γ (n). We have to show (γ(t), γ (t )) σ (n) so let i n, (e, e ) σ (i) and assume e[γ(t)] i tt. Corollary 16 implies (γ(t), γ (t)) σ (n) (reflexivity). Monotonicity implies (γ(t), γ (t)) σ (i) and thus e [γ (t)] tt. Instantiating the assumption that t CIUapproximates t with γ and e we get e [γ (t )] tt. The importance of this proposition is that it establishes that both the CIUapproximation and logical approximation are precongruences. Indeed, CIUapproximation is obviously adequate, reflexive and transitive but it is not obviously compatible. On the other hand, Proposition 15 shows that logical approximation is compatible, which implies that is reflexive, but transitivity is not immediate from the definition and in fact the direct proof quickly fails. Establishing that the two approximation relations coincide shows that both are adequate precongruences. Theorem 18. CIU, contextual and logical approximation relations coincide. Proof. Proposition 3 and Proposition 17 imply that CIU-approximation and logical approximation imply contextual approximation, so we only need to establish that contextual approximation implies CIU-approximation. To that end let Γ = (x i : τ i ) n i=1 and assume Γ t ctx t : σ, let γ = [x i s i ] n i=1 be a closing substitution for t and t and suppose e[γ(t)] tt. This implies that e [((((λ x 1. λ x 2.... λ x n. t) s 1 ) s 2 ) ) s n ] tt (1) and it is easy to see that e [((((λ x 1. λ x 2.... λ x n. ) s 1 ) s 2 ) ) s n ] C (Γ, σ 2). 14

We instantiate Γ t ctx t : σ with the above context and using (1) to get which implies e [γ(t )] tt. e [((((λ x 1. λ x 2.... λ x n. t ) s 1 ) s 2 ) ) s n ] tt The coincidence of these relations enables us to easily prove some simple properties which we will use for establishing properties of the categorical model. Lemma 19. Let b, c T σ. If b c then b ciu c : σ. Lemma 20. Let e G (σ), a T τ σ and b, c T τ. If b c then e[a b] tt if and only if e[a c] tt. Proof. Both directions follow from the same argument. From Lemma 19 we have b ciu c : τ and a ciu a : τ σ. Proposition 17 implies that CIUapproximation satisfies compatibility properties so a b ciu a c : σ. Instantiating this assumption with e and empty substitutions concludes the proof. Proposition 21. If Γ, x : σ t ctx t : τ and s T (Γ σ) then Γ t [ s / x ] ctx t [ s / x ] : τ Proof. CIU-approximation is trivially seen to have this property. We therefore use Theorem 18 to conclude the proof. Proposition 22. If Γ s ciu s : σ and Γ, x : σ t : τ then Γ t [ s / x ] ciu t [ s / x ] : τ. Proof. Take a closing substitution γ for Γ, e G (τ) and assume e [ γ ( t [ s / x ])] tt. As γ is a closing substitution we have e [ γ ( t [ s / x ])] = e [ γ(t) [ γ(s) / x ]] so we have e [(λ x. γ(t))(γ(s))] tt. By Proposition 21 γ(s) ctx γ(s ) : σ and since e [(λ x. γ(t)) ] C (, σ 2) the proof is done. To finish this section we give a simple concrete application of the development to establish the extensionality for terms of the function type. Proposition 23. If f, f T σ τ and c T σ, f c ctx f c : τ then f ctx f : σ τ. Proof. We will show that f CIU-approximates f. Let e G (σ τ) and assume e[f] tt. Lemma 5 shows that e = e [ d] for some d T σ and e G (τ). Instantiating the assumption c T σ, f c ctx f c : τ with d shows e [f d] tt and thus e[f ] tt, as required. 4 The model We define a categorical model of the language and show that it is sound, adequate and complete with respect to the operational notion of observation at base type. Soundness in this case means that the model validates the basic equations arising from the reduction relation, adequacy means, roughly, that if two terms have equal denotations then they are contextually equivalent and completeness (or full abstraction) means that two contextually equivalent terms have equal denotations. 15

Showing that the category is cartesian closed essentially amounts to showing β and η laws for functions and products, which are relatively easy to establish using the logical relation. Technically, some care is required due to the handling of contexts. It is usual in simple categorical models to interpret terms as morphisms from the interpretation of the context to the interpretation of the type, where the interpretation of the context is the product, in the category, of the interpretations of the types. What this amounts to is substituting projections from a single variable for individual variables and to account for this tupleing properly requires some care. Let C be a category with the set of objects C 0. Morphisms from X τ to Y σ are equivalence classes of a certain partial equivalence relation on the set T (x : τ σ) which we now define. To do this we first need a few auxiliary definitions. Given X τ, Y σ C 0 we define the binary relation Xτ,Y σ on T (x : τ σ) exactly as we did the logical approximation relation, the difference being that we only relate terms that have at most one free variable and that the objects are now more general, not just the objects arising as the interpretations of types. Explicitly, the relation X,Y is defined as follows t X,Y s def = n N, (c, c ) X (n), (t(c), s(c )) Y (n). For each pair of objects X, Y C 0 we then define the relation X,Y as the quasi-reflexive symmetric interior of X,Y and then X,Y as the transitive closure of X,Y. Explicitly def X,Y = { (t, t ) t X,Y t t X,Y t t X,Y t t X,Y t } def X,Y = n X,Y. n=1 Denoting the equivalence class of t under X,Y as [t] X,Y we define the morphisms Hom C (X, Y ) = { [t] X,Y t X,Y t }. If t [s] Hom C (X, Y ) we say that t realizes a morphism from X to Y or that t is a realizer for [s]. Composition is by substitution, [t]; [s] def = [s(t)], and the identity morphism is the equivalence class of the term consisting just of the variable x. Rationale for the definition of morphisms We now explain the rationale behind the definition of morphisms, specifically the use of quasi-reflexive symmetric interior and the transitive closure. We explain this by showing how the proof that composition is well-defined proceeds. It is not immediately clear that composition is well-defined. For it to be well-defined means that if t X,Y t and s Y,Z s then s(t) X,Z s (t ). Using the definition of transitive closure we have to prove that if 16

t X,Y t 1 X,Y t 2 X,Y X,Y t n X,Y t (2) and s Y,Z s 1 Y,Z s 2 Y,Z Y,Z s m Y,Z s (3) then there exists a k and r 1, r 2,..., r k such that s(t) X,Z r 1 X,Z r 2 X,Z X,Z s (t ) (4) and the important thing to notice here is that n and m are in general not equal. It is easy to see that if u X,Y u and v Y,Z v then v(u) X,Z v (u ) and it is then immediate that the same holds for the relation. If n and m in (2) and (3) were equal, we could thus easily take k = n and r i = s i (t i ), but in general we cannot expect this to be the case, if the underlying relation is not well-behaved. Taking the transitive closure of only the quasi-reflexive symmetric interior of the original approximation relation is what ensures that we can always extend the chains to be of the same length. In other words, it ensures that n n+1 (see Lemma 24 for why this is the case). In the same way we can transfer other compatibility results that hold for the relation to the relation and thus to constructions on morphisms. For example, if we knew that was compatible for pairing, meaning that if t X,Y t and s X,Z s then t, s X,Y Z t, s we can, in the same way as we did above for composition, prove that t X,Y t and s X,Z s then t, s X,Y Z t, s. We will use this property of the relations to only show properties of the relation, relying on the fact that we can transport results to the, as we outlined in the discussion above. See [5] for more details on this construction. The fact that composition is associative is a consequence of associativity of substitution. In the rest of this tutorial we will, when it will not cause confusion, omit explicit typing requirements and just implicitly assume that when we talk about realizers for morphisms then they have the appropriate type. The following two lemmas expose simple, yet constantly used properties. The first lemma crucially depends on the fact that we have taken the transitive closure of only the quasi-reflexive symmetric interior of the relation. Lemma 24. Let X, Y C 0 and t realize a morphism from X to Y. We have [t] Hom C (X, Y ) t X,Y t. Proof. It is obvious that if t X,Y t then [t] Hom C (X, Y ). For the other direction the premise implies there exists a n N and t 1, t 2,..., t n T, such that t X,Y t 1 X,Y t 2 X,Y X,Y t n X,Y t. It follows from the definition of X,Y that if t X,Y t 1 then t X,Y t which concludes the proof. 17
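To see the two closure operations at work on something small, here is a Haskell sketch (ours) of the quasi-reflexive symmetric interior and the transitive closure on finite relations. The point of taking the interior first is exactly the property used in Lemma 24: any pair surviving the interior also relates each of its components to itself, and these reflexive pairs are what allow chains of different lengths to be padded.

```haskell
-- A sketch (ours) of the closure operations used to define hom-sets,
-- instantiated to finite relations represented as lists of pairs.
import Data.List (nub)

type Rel a = [(a, a)]

-- Quasi-reflexive symmetric interior: keep (x,y) only if the relation also
-- contains (y,x), (x,x) and (y,y).
interior :: Eq a => Rel a -> Rel a
interior r = [ (x, y) | (x, y) <- r
                      , (y, x) `elem` r, (x, x) `elem` r, (y, y) `elem` r ]

-- Transitive closure, by iterating relational composition until nothing new
-- appears; termination is guaranteed because the relation is finite.
transClose :: Eq a => Rel a -> Rel a
transClose r0 = go (nub r0)
  where
    go r
      | null new  = r
      | otherwise = go (r ++ new)
      where
        composed = [ (x, z) | (x, y) <- r, (y', z) <- r, y == y' ]
        new      = nub [ p | p <- composed, p `notElem` r ]

-- The partial equivalence relation used for morphisms: transitive closure of
-- the quasi-reflexive symmetric interior.
perOf :: Eq a => Rel a -> Rel a
perOf = transClose . interior
```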

We now show that the category C is cartesian closed with fixed points of all endomorphisms. Let µ α.α C 0 be the everywhere empty relation; µ α.α (n) def = Proposition 25. is the terminal object in C. The unique morphism from any object X to is given by the equivalence class of Ω = (λ x. unfold (x) x) ( fold (λ x. unfold (x) x) ). Proof. It is clear that C 0 and we immediately observe that for all n N, (n) = G (µ α.α) G (µ α.α). Pick an object X. We first have to show Ω X, Ω and this is simple; take m N and (c, c ) X (n). Then Ω(c) = Ω(c ) = Ω and as Ω 1 Ω, the term Ω diverges. Using Lemma 10 we have our result. Now suppose ϕ X, ϕ is another such morphism. We claim that n N, (c, c ) X (n), e G (µ α.α), e[ϕ(c)] must not converge to tt. Suppose it does for some e. Then, as (n) = G (µ α.α) G (µ α.α), for any evaluation context e G (µ α.α), e [ϕ(c )] must converge to tt, but this is clearly nonsense. We thus use Lemma 10 to get ϕ X, Ω and the other direction also follows using the same reasoning. We thus have [ϕ] = [Ω] and therefore uniqueness. Interpretation of 2 The following two properties will be used when showing that the interpretation of the ground type 2 is sound. Lemma 26. If ϕ X,Y ϕ, ψ X,Y ψ and δ X,B δ then if (δ, ϕ, ψ) X,Y if (δ, ϕ, ψ ). Proof. A simple corollary of Lemma 11. Proposition 27. If [ϕ], [ψ] Hom C (X, Y ) then [if (tt, ϕ, ψ)] Hom C (X, Y ) and [if (tt, ϕ, ψ)] = [ϕ] and [if (ff, ϕ, ψ)] Hom C (X, Y ) and [if (ff, ϕ, ψ)] = [ψ]. Proof. We will only prove the cases for tt, the case for ff being completely symmetric. Take n N, (c, c ) X (n), i n, (e, e ) Y (i) and assume e[if (tt, ϕ(c), ψ(c))] i tt. Then e[ϕ(c)] tt and so e [ϕ(c )] tt. For the other direction take n N, (c, c ) X (n), i n, (e, e ) Y (i) and assume e[ϕ(c)] i tt. Then e [ϕ(c )] tt and so e [if (tt, ϕ(c ), ψ(c ))] tt. The last proposition expresses that 2 is a weak sum of and. Products We now prove that the category has binary products. Proposition 28. For X, Y C 0, X Y is the product object. Projections are morphisms realized by fst and snd and tupleing of morphisms realized by ϕ and ψ is a morphism realized by ϕ, ψ. We will need the following lemma in the proof of the proposition. 18

Lemma 29. Let (e, e ) (X Y ) (n) for some n. Then either there exists an f E, such that e = f[fst ] or there exists an f, such that e = f[snd ] and similarly for e. Proof. Immediate consequence of Lemma 4. Note that this lemma (among others) would not be true, were we to make observations at the product type, for example if we just observed termination at any type. This is an essential difference with the call-by-value language, where observing termination or observing termination at a particular type or observing termination to a particular value of the base type makes no difference, because in a call-by-value language there are many more evaluation contexts, i.e. more test functions. Proof (Proposition 28). Lemma 12 tells us that fst x X Y,X fst x and snd x X Y,Y snd x hold and so fst x and snd x are well defined morphisms. We now show that for any object Z and ϕ Z,X Y ϕ and fst ϕ, snd ϕ Z,X Y ϕ ϕ Z,X Y fst ϕ, snd ϕ. So assume such ϕ and take n N, (c, c ) Z (n), i n, (e, e ) X Y (i) and assume e[ fst ϕ(c), snd ϕ(c) ] i tt. Lemma 29 implies e must be of the form e = f[fst ] or e = f[snd ]. In either case, we have e[ϕ(c)] i tt and so e [ϕ(c )] tt. For the second one, take the same n, c, c and i, e, e and assume e[ϕ(c)] i tt. As ϕ Z,X Y ϕ, we immediately have e [ϕ(c )] tt. Lemma 29 tells us that e = f [fst ] or e = f [snd ] for some f and as f [fst fst ϕ(c ), snd ϕ(c ) ] f [fst ϕ(c )] = e [ϕ(c )] tt and similarly for snd, we are done. We only have to show that tupleing respects the equivalence relation. So take ϕ Z,X ϕ and ψ Z,Y ψ so we need to show ϕ, ψ Z,X Y ϕ, ψ. Take n N, (c, c ) Z (n), i n, (e, e ) (X Y ) (n) and assume e [ ϕ(c), ψ(c) ] i tt. Using the assumptions ϕ Z,X ϕ and ψ Z,Y ψ we get (ϕ(c), ϕ (c )) X (n) and (ψ(c), ψ (c )) Y (n) and so ( ϕ(c), ψ(c), ϕ (c ), ψ (c ) ) (X Y )(n) (X Y ) (n), so we have obtained the necessary property almost trivially. We deal with transitivity as we have outlined in Section 4 to show if [ϕ] = [ϕ ] and [ψ] = [ψ ] then [ ϕ, ψ ] = [ ϕ, ψ ] if [ξ]; [fst x] = [ϕ] and [ξ]; [snd x] = [ψ] then [ξ] = [ ϕ, ψ ]. which establishes the universal property of products. 19

Exponentials We are now ready to prove that the category C has exponentials and is cartesian closed. Most of the proofs are straightforward, only the uniqueness of transposes requires some work. Proposition 30. For X, Y C 0, Y X is the exponential object. Evaluation is given by the equivalence class of ε = (fst x) (snd x) and the transpose of a morphism from Z X to Y realized by ϕ is given by the morphism realized by Λ (ϕ) = λ y. ϕ( x, y ). We first prove some auxiliary lemmas. Lemma 31. Let n N and (c, c ) (X Y ) (n). For all d T, (c, fst c, snd d, snd c ) (X Y ) (n). Proof. Take i n, (e, e ) (X Y ) (i) and assume e[c] i tt. Lemma 29 tells us that e must be of the form e = f [fst ] or e = f [snd ]. In both cases we get what is required by a simple inspection. Lemma 32. Let n N, (e, e ) ( Y X) (n). Then there exist evaluation contexts f, f and closed terms d, d, such that e = f[ d] and e = f [ d ]. Proof. This is an immediate consequence of Lemma 5. Proof (Proposition 30). First we show that ε is well defined. Let n N, (c, c ) ( Y X X ) (n). Lemma 12 gives us that (fst c, fst c ) ( Y X) (n) and (snd c, snd c ) X (n) and then we use Lemma 13 to get what we need. Next we show that transposes are well defined, so assume ϕ Z X,Y ψ and we need to show Λ (ϕ) Z,Y X Λ (ψ). So take n N, (c, c ) Z (n). We will show directly, that (Λ (ϕ) (c), Λ (ψ) c ) ( Y X) (n), which is more than we need to prove. By construction and the fact that c and c are closed terms, we have Λ (ϕ) (c) = λ y. ϕ ( c, y ) and Λ (ψ) (c ) = λ y. ψ ( c, y ), so the terms are of the right form. To show that they are related at step n take i n, (d, d ) X (i) and (e, e ) Y (i) and assume e [(λ y. ϕ ( c, y )) d] i tt. This also means that e [ϕ ( c, d )] i tt and using the fact that ( c, d, c, d ) (Z X)(i) (Z X) (i) and ϕ Z X,Y ψ we also have (ϕ ( c, d ), ψ ( c, d )) Y (i) and so e [ψ ( c, d )] tt and as e [(λ y. ψ ( c, y )) d ] e [ψ ( c, d )] tt we are done. We are thus only left with showing the universal property and we do this by showing that eval induces an inverse to Λ ( ). Let ϕ realize a morphism from Z X to Y. We first show that (fst ( Λ (ϕ) (fst x), snd x )) (snd( Λ (ϕ (fst x)), snd x )) Z X,Y ϕ So take n N, (c, c ) (Z X) (n), i n, (e, e ) Y (i) and assume e [(fst ( Λ (ϕ) (fst c), snd c )) (snd( Λ (ϕ (fst c)), snd c ))] i tt 20

Simplifying a bit, this implies that e[ϕ ( fst c, snd( Λ (ϕ (fst c)), snd c ) )] i tt Using the fact that ( fst c, snd( Λ (ϕ (fst c)), snd c ), fst c, snd c ) (Z X)(i), which follows from Lemma 8 and Lemma 12, and the fact that ϕ is related to itself, we get e [ϕ ( fst c, snd c )] tt. Using Lemma 4 it follows that fst c, snd c ciu c : τ Z τ X, where τ Z and τ X are types associated with the objects Z and X, respectively. Proposition 22 thus shows that ϕ ( fst c, snd c ) ciu ϕ(c ) : τ Y which implies f [ϕ(c )] tt. For the other direction we use Lemma 31. Take the same n, c, c and i, e, e and assume e[ϕ(c)] i tt. Then the lemma tells us that e [ϕ ( fst c, snd Λ (ϕ (fst c )), c )] tt which is, as we have seen before enough. This implies that transposing has a post-inverse and that it maps morphisms appropriately. Suppose now that ϕ realizes a morphism from Z to Y X. We will show ϕ Z,Y X Λ (ε ( ϕ(fst x), snd x )) and Λ (ε ( ϕ(fst x), snd x )) Z,Y X ϕ one at a time. So take n N, (c, c ) Z (n), i n and (e, e ) ( Y X) (i) and assume e[ϕ(c)] i tt. Using Lemma 32 we have e = f[ d] and e = f [ d ] for some f, f E and d, d T. We then have e [Λ (ε ( ϕ(fst x), snd x )) (c )] f [ε ( ϕ(fst x), snd x ) ( c, d )] = f [ε ( ϕ(fst c, d ), snd c, d )] f [(ϕ(fst c, d )) (snd ϕ(fst c, d ), snd c, d )]. Lemma 8 tells us that (ϕ(c), ϕ(fst c, d )) ( Y X) (n) and obviously snd ϕ(fst c, d ), snd c, d d due to the call by name nature of the evaluation relation. Using Lemma 20 we have that if f [(ϕ(fst c, d )) d ] tt then also f [(ϕ(fst c, d )) (snd ϕ(fst c, d ), snd c, d )] tt and the first one reduces to tt because, as mentioned above, (ϕ(c), ϕ(fst c, d )) ( Y X ) (n). For the other approximation again take n N, (c, c ) Z (n), i n and (e, e ) ( Y X) (i) and now assume that e[λ (ε ( ϕ(fst x), snd x ))] i tt. Again, there exist f, f, d and d such that e = f[ d] and e = f [ d ]. The last assumption thus implies f [(ϕ(fst c, d )) (snd ϕ(fst c, d ), snd c, d )] i tt 21

which further implies f [(λ x. (ϕ(fst c, x )) (snd ϕ(fst c, x ), snd c, x )) d] i tt If we can show that ( λ x. (ϕ(fst c, x )) (snd ϕ(fst c, x ), snd c, x ), (5) we can conclude λ x. (ϕ(fst c, x )) (snd ϕ(fst c, x ), snd c, x ) ) ( Y X) (n) (6) f [(λ x. (ϕ(fst c, x )) (snd ϕ(fst c, x ), snd c, x )) d ] tt which by Lemma 20 implies f [(ϕ(fst c, d )) (snd ϕ(fst c, d ), snd c, d )] tt but what we need to conclude is e [ϕ(fst c, d )] tt e [ϕ(c )] tt. It is easy to see that fst c, d ciu c : τ Z and using Proposition 22 we get exactly what is required. We thus only have to show that (5) holds. To this end let i n, (d, d ) X (i), (e, e ) Y (i) and assume e [(λ x. (ϕ(fst c, x )) (snd ϕ(fst c, x ), snd c, x )) d] i tt. This again implies Lemma 8 implies e [(ϕ(fst c, d )) (snd ϕ(fst c, d ), snd c, d )] i tt. (snd ϕ(fst c, d ), snd c, d, snd ϕ(fst c, d ), snd c, d ) X (i) and also (fst cd, fst c d ) Z (n). As ϕ Z,Y X ( ) Y X (n). Lemma 13 then implies ϕ we get (ϕ (fst c, d ), ϕ (fst c, d )) e [(ϕ(fst c, d )) (snd ϕ(fst c, d ), snd c, d )] tt. which implies the required goal. This shows that transposes are unique and that for each X the object mapping Y X Y can be extended to a functor with a left adjoint, so the category is cartesian closed. 22

Fixed points as To model general recursion we define a fixed point combinator Y (f) = (λ z. f (unfold (z) z)) ( fold ((λ z. (f (unfold (z) z)))) ) for any term f. In the special case where x is the only possible free variable in f, we write F(f) = Y (λ x. f) Lemma 33. If X C 0 and ϕ X,X ψ then n N, (F(ϕ), F(ψ)) X (n). Proof. We use well-founded induction to prove this use. So take n N and assume k < n, (F(ϕ), F(ψ)) X (k). Take i n, (e, e ) X (i) and assume e[f(ϕ)] i tt. By construction we have F(ϕ) ϕ ( unfold (fold ((λ z. (λ x. ϕ) (unfold (z) z)))) ( fold ((λ z. ((λ x. ϕ) (unfold (z) z)))) )) and unfold (fold ((λ z. (λ x. ϕ) (unfold (z) z)))) ( fold ((λ z. ((λ x. ϕ) (unfold (z) z)))) ) 1 F(ϕ). By the induction hypothesis (F(ϕ), F(ψ)) X (i 1) and so using Lemma 9 and the fact that unfold (fold ((λ z. (λ x. ψ) (unfold (z) z)))) ( fold ((λ z. ((λ x. ψ) (unfold (z) z)))) ) F(ψ) we have that e [ ψ ( unfold (fold ((λ z. (λ x. ψ) (unfold (z) z)))) ( fold ((λ z. ((λ x. ψ) (unfold (z) z)))) ))] tt and so e [F(ψ)] tt. Using this lemma we can prove the following. Corollary 34. If X C 0 and ϕ X,X ψ then F (ϕ) 1,X F (ψ). And finally we can prove that we do indeed have fixed points. Proposition 35. If X C 0 and ϕ X,X ϕ then F(ϕ) 1,X ϕ (F(ϕ)). To sum up, we have established the validity of the following theorem. Theorem 36. The category C is a cartesian closed category with fixed points of all endomorphisms. 4.1 Interpretation and soundness The interpretation of types is defined in Figure 9. It is exactly the same as the relational interpretation of types used in defining the logical approximation relation in Section 3. In order to define the interpretation of terms we need some additional properties of the interpretation of recursive types. 23
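Before turning to the interpretation of recursive types, it may help to see the fixed-point combinator Y above transported to a lazy language, where a recursive type again supplies the self-application. The following Haskell sketch is ours, not part of the paper's model: laziness stands in for call-by-name, and Roll/unroll play the roles of fold/unfold.

```haskell
-- A lazy-language rendering (ours) of the combinator Y above: the recursive
-- type mu a.(a -> b) becomes a Haskell newtype.
newtype Rec b = Roll { unroll :: Rec b -> b }

fixRec :: (b -> b) -> b
fixRec f = g (Roll g)
  where g z = f (unroll z z)

-- Example: general recursion with no built-in fixpoint.
factorial :: Integer -> Integer
factorial = fixRec (\rec n -> if n == 0 then 1 else n * rec (n - 1))
```

Evaluating fixRec f unfolds to f (fixRec f) on demand, which is the same unrolling behaviour established for F in Lemma 33 and Proposition 35.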

Recursive types The following proposition establishes that the interpretation of a recursive type and an unfolded recursive type are the same and the isomorphism is given by morphisms realized by fold and unfold. Proposition 37. Let µ α.τ be a well-formed closed type. Denote µ α.τ by X and τ [ µ α.τ / α ] by Y. The following hold and so X Y. fold (x) Y,X fold (x) (7) unfold (x) X,Y unfold (x) (8) fold (unfold (x)) X,X x (9) unfold (fold (x)) Y,Y x (10) Proof. The proof follows by simply inspecting the definition of the interpretation of recursive types. Ad (7) Take n N and (c, c ) Y (n). This means (fold (c), fold (c )) X(n + 1) X(n) X (n) so we are done. Ad (8) Take n N, (c, c ) X (n), i n, (e, e ) Y (i) and assume e[unfold (c)] i tt. Lemma 14 then shows that also e [unfold (c )] tt, by the now familiar argument. Ad (9) We only show the approximation fold (unfold (x)) X,X x. The other direction is similar. Let n N, (c, c ) X, i n, (e, e ) X (i) and assume e[fold (unfold (c))] i tt. Lemma 6 implies that there exists an f, such that e = f[unfold ( )]. The assumption on termination then implies f[unfold (c)] i tt which is the same as saying e[c] i tt. The conclusion now follows trivially. Ad (10) This relation follows immediately from Lemma 8. Interpretation of terms In order to define the interpretation of terms we first explicitly define projections from the n-fold product ( (X 1 X 2 ) X n 1 ) X n by induction on n. If i n we define a realizer πi n for a morphism ( (X 1 X 2 ) X n 1 ) X n X i as follows π 1 1(x) = x π n+1 i (x) = { snd x i = n + 1 π n i (fst x) i n The interpretation in the model and its soundness are straightforward and mostly (except for recursive types) follows from standard facts about interpretation of PCF in a CCC with fixed points. The interpretation of Γ t : σ is a 24

Γ x i : τ i = [π n i (x)] for Γ = (x i : τ i ) n i=1 Γ tt : 2 = [tt] Γ ff : 2 = [ff] Γ if (t, s, r) : τ = [if (ϑ, ϕ, ψ)] Γ : 1 = for ϑ Γ t : 2, ϕ Γ s : τ, ψ Γ r : τ Γ λ z. t : τ σ = Λ ( Γ, z : τ t : σ ) Γ t s : σ = Γ t : τ σ, Γ s : τ ; [ε] Γ t, s : σ τ = Γ t : σ, Γ s : τ Γ fst t : σ = Γ t : σ τ ; [fst x] Γ snd t : τ = Γ t : σ τ ; [snd x] Γ fold (t) : µ α.τ = Γ t : τ [ µ α.τ / α ] ; [fold (x)] Γ unfold (t) : τ [ µ α.τ / α ] = Γ t : µ α.τ ; [unfold (x)] Figure 10: Interpretation of terms morphism from Γ to σ, where Γ = ( ( τ 1 τ 2 ) τ n 1 ) τ n, if Γ = (x i : τ i ) n i=1. It is spelled out in Figure 10. Soundness of the interpretation of recursive types follows from Proposition 37. 4.2 Adequacy Lemma 38. If [ϕ] Hom C (, 2) and [ϕ] = [tt] then ϕ(ω) tt and if [ϕ] = [ff] then ϕ(ω) ff. Proof. We first observe that n N, (Ω, Ω) (n) and that n N, (, ) 2 (n). The case of tt is now very simple; we will prove the claim by induction on the length of the transitivity chain. The base case is when tt ϕ. For any n 1 we have tt n tt and so ϕ(ω) tt. Otherwise tt ϕ 1 ϕ 2 ϕ k ϕ k+1. The induction hypothesis shows that ϕ k (Ω) tt in some number of steps. We then instantiate ϕ k ϕ k+1 with (Ω, Ω) and (, ) and crucially use the fact that these are related indefinitely, to get ϕ k+1 (Ω) tt. To show the case for ff, we observe that if (, ff, tt) is related to itself as an evaluation context for any n so we proceed in the same way as in the case for tt. Lemma 39. If Γ t : τ and Γ = (x i : τ i ) n i=1 then t [ π n i x/ x i ] Γ t : τ 25

Proof. We use induction on the typing derivation. The only interesting case is for function abstraction, all the other cases follow straightforwardly from the induction hypothesis Case Γ λ x n+1. t : τ σ We inductively assume that t [ π n+1 i x / ] x i Γ, xn+1 : τ t : σ. By definition of the interpretation function, Γ λ x n+1. t : τ σ is then the equivalence class of λ z. t [ π n+1 i x / ] [ / ] [ x i x, z x = λ z. t π n+1 i x, z / ] x i = λ z. t [ πi n (fst x, z ) / ] [ / ] x i snd x, z xn+1 We claim that the last term is equivalent to λ z. t [ π n i x / x i ] [ z / xn+1 ] To show this let k N, (c, c ) Γ (k). We will show directly that ( λ z. t [ π n i (fst c, z ) / x i ] [ snd x, z / xn+1 ], λ z. t [ π n i c / x i ] [ z / xn+1 ]) τ σ (n). So take i k, (d, d ) τ (i), (e, e ) σ (i). It is easy to see, using Lemma 12, Lemma 8 and induction on n, that for i = 1, 2,..., n, and and that also and (π n i (fst c, d ), π n i c ) τ i (k) (π n i c, π n i (fst c, d )) τ i (k) (snd c, d, d ) τ (i) (d, snd c, d ) τ (i). Using Corollary 16 we get what is required. Theorem 40 (Adequacy). If t : 2 = [tt] then t tt and if t : 2 = [ff] then t ff. Proof. Lemma 39 shows that t t : 2, Lemma 38 then concludes the proof. Corollary 41. If Γ t : σ = Γ s : σ then Γ t ctx s : σ. Proof. Let C C (Γ, σ 2). As the interpretation function is compositional we have C[t] : 2 = C[s] : 2 (formally, this can be proved by a straightforward induction on C). Now soundness ensures that if C[t] tt then C[t] : 2 = tt : 2 = [tt]. Theorem 40 now concludes the proof. 26