
CMSC 22100/32100: Programming Languages
Final Exam, M. Blume, December 11, 2008

1. (Well-founded sets and induction principles)

(a) State the mathematical induction principle and justify it informally.

If P(0) and ∀n ∈ N. P(n) ⇒ P(n + 1), then ∀n ∈ N. P(n).

Proof (indirect): If the above is false, then there exists at least one k such that ¬P(k). Pick the smallest such k and call it k₀. Case k₀ = 0: contradiction with P(0). Case k₀ = k₁ + 1: since k₀ is minimal, it must be that P(k₁), but then we also have P(k₁ + 1), i.e., P(k₀), which is another contradiction.

(b) Let the pair (S, R) consist of a set S and a binary relation R over S (i.e., R ⊆ S × S). Given a subset X ⊆ S, what is a minimal element (with respect to R) of X?

Element x is R-minimal in X if ∀y ∈ X. ¬R(y, x).

(c) When is such a pair (S, R) considered well-founded?

(S, R) is well-founded if every non-empty subset of S has a minimal element:

    ∀X ⊆ S. X ≠ ∅ ⇒ ∃x ∈ X. x is minimal in X
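The definitions in (b) and (c) can be checked mechanically for small finite relations. The following sketch (function names are mine, not the exam's; relations are represented as sets of pairs) computes the R-minimal elements of a subset and tests well-foundedness by brute force over all non-empty subsets:

```python
from itertools import combinations

def minimal_elements(X, R):
    """Elements x of X with no y in X such that R(y, x)."""
    return {x for x in X if not any((y, x) in R for y in X)}

def is_well_founded(S, R):
    """Check the definition directly: every non-empty subset of S
    has an R-minimal element.  Exponential, but fine for tiny S."""
    for k in range(1, len(S) + 1):
        for subset in combinations(S, k):
            if not minimal_elements(set(subset), R):
                return False
    return True

# ({0..4}, <) is well-founded ...
S = set(range(5))
lt = {(a, b) for a in S for b in S if a < b}
print(is_well_founded(S, lt))                        # True

# ... but a two-element cycle is not: neither element is minimal.
print(is_well_founded({0, 1}, {(0, 1), (1, 0)}))     # False
```

The brute-force check only works on finite sets, of course; for infinite sets like Z the questions below have to be answered by argument.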

(d) Consider Z, the set of integers. Give a relation R₁ ⊆ Z × Z such that (Z, R₁) is not well-founded.

R₁(x, y) iff x < y under the usual less-than ordering < on integers.

(e) Give another relation R₂ such that (Z, R₂) is well-founded.

R₂(x, y) iff x < y, using < on the natural numbers (i.e., R₂ relates x and y only when both are in N).

(f) Consider (N × N, R) where R is the lexicographic ordering, i.e., R((x₁, y₁), (x₂, y₂)) if and only if either x₁ < x₂, or else x₁ = x₂ and also y₁ < y₂. Show that this set-relation pair is well-founded.

Given a non-empty set X of pairs (x, y), first pick a minimal element x₀ among the x-components. Then consider the restriction of X to x₀, i.e., {(x₀, y) | (x₀, y) ∈ X}, and pick an element (x₀, y₀) such that y₀ is minimal among the y-components within the restriction. (x₀, y₀) is minimal in X, as otherwise there would have to be either some (x₁, y₁) with x₁ < x₀, contradicting the choice of x₀, or some (x₀, y₂) with y₂ < y₀, contradicting the choice of y₀.

(g) Given a well-founded set-relation pair (S, R), state the principle of well-founded induction.

    ∀P. (∀x ∈ S. (∀y ∈ S. R(y, x) ⇒ P(y)) ⇒ P(x)) ⇒ ∀x ∈ S. P(x)
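The two-stage argument in (f) is effectively an algorithm for finding a lexicographically minimal pair. A small sketch (the function name is mine, not the exam's):

```python
def lex_minimal(X):
    """Find a minimal element of a finite non-empty set X of pairs
    under the lexicographic ordering, following the two-stage
    argument above: minimize the first component, then minimize
    the second component within that slice."""
    x0 = min(x for (x, _) in X)                 # minimal first component
    y0 = min(y for (x, y) in X if x == x0)      # minimal second within the x0 slice
    return (x0, y0)

pairs = {(2, 7), (1, 9), (1, 3), (4, 0)}
print(lex_minimal(pairs))   # (1, 3)
```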

(h) (extra credit) Justify the principle of well-founded induction informally.

If the principle were not true, then the set of counterexamples E = {x | ¬P(x)} would be non-empty. Choose a minimal e ∈ E (which exists by well-foundedness). But being minimal in E means that ∀y ∈ S. R(y, e) ⇒ P(y), which would imply P(e) and therefore is a contradiction. Thus, E must be empty.

2. (Terms, rules, and derivations) Consider the following rules:

    ──────────── z
    zero nat

    n nat
    ──────────────── s
    succ(n) nat

    m nat
    ─────────────────── add-z
    add(zero, m, m)

    add(n, m, s)
    ───────────────────────────── add-s-l
    add(succ(n), m, succ(s))

(a) State mathematically that the relation defined by the add(x, y, z) judgment is single-valued in its third argument.

If add(x, y, z) and also add(x, y, z′) are derivable, then z = z′.

(b) Prove the above statement about add being single-valued.

By induction on x.
Case x = zero: rule add-z must have been used to derive both judgments, so z = y = z′.
Case x = succ(x̂): rule add-s-l must have been used to derive both judgments, so z = succ(ẑ) and z′ = succ(ẑ′). Moreover, by inversion of add-s-l we know add(x̂, y, ẑ) and also add(x̂, y, ẑ′). Thus, by the IH, ẑ = ẑ′, from which we get z = succ(ẑ) = succ(ẑ′) = z′.
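Both the single-valuedness claim and Lemma 1 below can be spot-checked by a small derivability search. A sketch (mine, not the exam's; naturals are encoded as Python ints, with 0 for zero and n + 1 for succ(n)):

```python
def derivable(x, y, z):
    """Is add(x, y, z) derivable?  Mirrors the two rules:
    add-z for x = zero, add-s-l for x = succ(...)."""
    if x == 0:
        return y == z                                  # add-z
    return z > 0 and derivable(x - 1, y, z - 1)        # add-s-l

# Single-valuedness on a small search space: for each (x, y),
# exactly one z is derivable, and it is x + y.
for x in range(6):
    for y in range(6):
        zs = [z for z in range(12) if derivable(x, y, z)]
        assert zs == [x + y]

# Lemma 1: add(n, m, s) derivable implies add(n, succ(m), succ(s)) derivable.
assert all(derivable(n, m + 1, n + m + 1)
           for n in range(6) for m in range(6))
print("ok")
```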

(c) Prove the following lemma:

Lemma 1  If add(n, m, s) is derivable, then so is add(n, succ(m), succ(s)).

By induction on n.
Case n = zero: by inversion of add-z we have m = s, so succ(m) = succ(s). Thus, by add-z we also have add(zero, succ(m), succ(s)).
Case n = succ(n′): by inversion of add-s-l we have s = succ(s′) with add(n′, m, s′). By the IH, add(n′, succ(m), succ(s′)), so using add-s-l we find add(n, succ(m), succ(s)).

(d) Show a rule or an axiom that, if it were added to the set of rules and axioms shown above, would invalidate Lemma 1. Explain how the added rule breaks your proof.

Sample rule:

    ─────────────────────
    add(foo, bar, baz)

The addition of the rule adds another case to the case analysis that is the heart of the proof. And, as it turns out, for case n = foo there is no way of concluding add(foo, succ(bar), succ(baz)).

(e) We could render Lemma 1 as a rule:

    add(n, m, s)
    ───────────────────────────── add-s-r
    add(n, succ(m), succ(s))

Is this rule admissible or derivable?

The rule is admissible, not derivable. (A derivable rule could not be broken by adding some other rule or axiom.)

3. (Static and dynamic semantics, safety) Consider the following variant of the Simply Typed λ-calculus (STLC):

    types:        τ ::= nat | τ → τ
    values:       v ::= zero | succ(v) | λx:τ. e
    expressions:  e ::= x                  variables
                      | zero | succ(e)     zero and successor
                      | λx:τ. e | e e      function abstraction and application
                      | iter(e, e)         iteration

The iter(n, f) construct computes the n-fold iteration of a function f. (Its addition turns this language into a variant of Gödel's T, which is significantly more expressive than plain STLC.) That is, to evaluate iter(e₁, e₂) we first evaluate e₁ to a number n and then e₂ to a function f of type τ → τ for some τ. The result is another function of type τ → τ which behaves like λx:τ. f (f (... (f x) ...)), with n applications of f.

(a) Give typing rules for this language. (The rules are the standard STLC rules plus one new rule for handling iter.)

    Γ(x) = τ
    ───────────── var
    Γ ⊢ x : τ

    ────────────────── zero
    Γ ⊢ zero : nat

    Γ ⊢ e : nat
    ────────────────────── succ
    Γ ⊢ succ(e) : nat

    Γ[x ↦ τ₁] ⊢ e : τ₂
    ───────────────────────────── lam
    Γ ⊢ λx:τ₁. e : τ₁ → τ₂

    Γ ⊢ e₁ : τ₂ → τ    Γ ⊢ e₂ : τ₂
    ────────────────────────────────── app
    Γ ⊢ e₁ e₂ : τ

    Γ ⊢ e₁ : nat    Γ ⊢ e₂ : τ → τ
    ────────────────────────────────── iter
    Γ ⊢ iter(e₁, e₂) : τ → τ

(b) Give a small-step structural operational semantics, i.e., show rules for deriving judgments of the relation e ↦₁ e′. Again, most of the rules are standard, but you have to handle iter. The trickiest rule, namely the one for iter(succ(n), f), is shown here:

    (f = λx:τ. e)
    ───────────────────────────────────────────────
    iter(succ(n), f) ↦₁ λx:τ. (iter(n, f) (f x))

Your task is to show all the other rules for the above language. Make sure the language is call-by-value and evaluates from left to right.
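The typing rules from part (a) transcribe almost line for line into a checker. A sketch (tuple encodings and names are mine, not the course's notation): types are 'nat' or ('arrow', t1, t2); terms are ('var', x), ('zero',), ('succ', e), ('lam', x, t, e), ('app', e1, e2), ('iter', e1, e2).

```python
def typeof(e, env=None):
    """Compute the type of e under environment env, one branch per rule."""
    env = env or {}
    tag = e[0]
    if tag == 'var':
        return env[e[1]]                                 # rule var
    if tag == 'zero':
        return 'nat'                                     # rule zero
    if tag == 'succ':
        assert typeof(e[1], env) == 'nat'                # rule succ
        return 'nat'
    if tag == 'lam':
        _, x, t1, body = e
        t2 = typeof(body, {**env, x: t1})                # rule lam
        return ('arrow', t1, t2)
    if tag == 'app':
        f, a = typeof(e[1], env), typeof(e[2], env)      # rule app
        assert f[0] == 'arrow' and f[1] == a
        return f[2]
    if tag == 'iter':
        assert typeof(e[1], env) == 'nat'                # rule iter
        t = typeof(e[2], env)
        assert t[0] == 'arrow' and t[1] == t[2]
        return t
    raise ValueError(tag)

double = ('lam', 'x', 'nat', ('succ', ('succ', ('var', 'x'))))
prog = ('app', ('iter', ('succ', ('succ', ('zero',))), double), ('zero',))
print(typeof(prog))   # nat
```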

    e₁ ↦₁ e₁′
    ───────────────────
    e₁ e₂ ↦₁ e₁′ e₂

    e₂ ↦₁ e₂′
    ───────────────────
    v₁ e₂ ↦₁ v₁ e₂′

    e ↦₁ e′
    ────────────────────────
    succ(e) ↦₁ succ(e′)

    e₁ ↦₁ e₁′
    ────────────────────────────────
    iter(e₁, e₂) ↦₁ iter(e₁′, e₂)

    e₂ ↦₁ e₂′
    ────────────────────────────────
    iter(v₁, e₂) ↦₁ iter(v₁, e₂′)

    ────────────────────────────
    (λx:τ. e) v ↦₁ {v/x}e

    ─────────────────────────────────────
    iter(zero, λx:τ. e) ↦₁ λx:τ. x

(c) Describe informally (in one sentence or two) what type safety means.

Small-step evaluation will not get stuck, i.e., it either reaches a value or keeps going forever.

(d) Which two lemmas, taken together, establish type safety? (What are the commonly used names of these lemmas?)

Progress and Preservation.

(e) State both of these two lemmas for the above language.

Progress: If ⊢ e : τ and e is not a value, then there exists some e′ such that e ↦₁ e′.
Preservation: If ⊢ e : τ and e ↦₁ e′, then ⊢ e′ : τ.

(f) State the canonical forms lemma for this language.

If ⊢ v : nat, then v = zero or v = succ(v′) for some v′.
If ⊢ v : τ₁ → τ₂, then v = λx:τ₁. e for some x and some e.

(g) (extra credit) Show all changes to your small-step semantics from question 3b that are necessary to make it evaluate from right to left.

The congruence rules for application and for iter change so that the second subterm is evaluated first:

    e₂ ↦₁ e₂′
    ───────────────────
    e₁ e₂ ↦₁ e₁ e₂′

    e₁ ↦₁ e₁′
    ───────────────────
    e₁ v₂ ↦₁ e₁′ v₂

    e₂ ↦₁ e₂′
    ────────────────────────────────
    iter(e₁, e₂) ↦₁ iter(e₁, e₂′)

    e₁ ↦₁ e₁′
    ────────────────────────────────
    iter(e₁, v₂) ↦₁ iter(e₁′, v₂)
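The call-by-value, left-to-right rules from question 3b can be animated directly. A sketch (tuple encodings are mine; type annotations are omitted since they don't affect evaluation, and the naive substitution is fine because only closed values get substituted here):

```python
def is_value(e):
    return e[0] in ('zero', 'lam') or (e[0] == 'succ' and is_value(e[1]))

def subst(v, x, e):
    """{v/x}e -- naive substitution, adequate for closed values v."""
    tag = e[0]
    if tag == 'var':
        return v if e[1] == x else e
    if tag == 'zero':
        return e
    if tag == 'succ':
        return ('succ', subst(v, x, e[1]))
    if tag == 'lam':
        return e if e[1] == x else ('lam', e[1], subst(v, x, e[2]))
    return (tag, subst(v, x, e[1]), subst(v, x, e[2]))   # app, iter

def step(e):
    """One step of the left-to-right call-by-value relation."""
    tag = e[0]
    if tag == 'succ' and not is_value(e[1]):
        return ('succ', step(e[1]))
    if tag == 'app':
        e1, e2 = e[1], e[2]
        if not is_value(e1): return ('app', step(e1), e2)
        if not is_value(e2): return ('app', e1, step(e2))
        return subst(e2, e1[1], e1[2])                   # beta
    if tag == 'iter':
        e1, e2 = e[1], e[2]
        if not is_value(e1): return ('iter', step(e1), e2)
        if not is_value(e2): return ('iter', e1, step(e2))
        if e1[0] == 'zero':
            return ('lam', 'x', ('var', 'x'))            # iter(zero, f) -> id
        # iter(succ(n), f) -> \x. iter(n, f) (f x)
        return ('lam', 'x', ('app', ('iter', e1[1], e2),
                             ('app', e2, ('var', 'x'))))
    raise ValueError('stuck')

def evaluate(e):
    while not is_value(e):
        e = step(e)
    return e

double = ('lam', 'x', ('succ', ('succ', ('var', 'x'))))
two = ('succ', ('succ', ('zero',)))
# iter(2, double) applied to zero: doubling twice from 0... i.e. adding 2 twice
print(evaluate(('app', ('iter', two, double), ('zero',))))   # succ^4(zero)
```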

4. (Big-step semantics and substitution) In question 3b we had the luxury (afforded by the small-step nature of the semantics) of being able to unwind iter(n, f) one step at a time. To construct an equivalent big-step semantics we need to come up with a way of generating λx:τ. f (f ... (f x) ...) from n and f directly. For this we take advantage of the fact that f must have the form λx:τ. e. Roughly, the idea is to substitute e into itself n times, so we define an iterated version of substitution.

Let {e′/x}e be ordinary substitution of e′ for x in e. We write {e′/x}^n e, where n is a natural number of the form succ(... (succ(zero)) ...), to mean the n-times iterated version of substitution. The idea is to hold e′ constant and to grow e by repeatedly substituting e′ for x into it. The number of such iterations is given by n.

(a) Give an inductive definition for {e′/x}^n e in equational form.

    {e′/x}^zero e    = e
    {e′/x}^succ(n) e = {e′/x}^n ({e′/x}e)

(b) Show the big-step rule for iter(e₁, e₂) ⇓ v using iterated substitution. (Hint: make sure you don't have any off-by-one errors, i.e., be careful that iter(zero, f) is always the identity function.)

    e₁ ⇓ n    e₂ ⇓ λx:τ. e
    ─────────────────────────────────
    iter(e₁, e₂) ⇓ λx:τ. {e/x}^n x
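The off-by-one behaviour the hint warns about can be checked concretely. A sketch of iterated substitution (my own tuple encoding, not the course's), using e = succ(succ(x)) as the body of an "add two" function:

```python
def subst(ep, x, e):
    """Ordinary substitution {ep/x}e for this tiny term fragment."""
    if e == ('var', x):
        return ep
    if e[0] == 'succ':
        return ('succ', subst(ep, x, e[1]))
    return e   # zero and other variables

def subst_n(ep, x, n, e):
    """{ep/x}^n e: hold ep fixed and substitute it into e, n times,
    unrolling the two equations above."""
    for _ in range(n):
        e = subst(ep, x, e)
    return e

body = ('succ', ('succ', ('var', 'x')))        # e, where f = \x:nat. e

# iter(zero, f) must be the identity: {e/x}^0 x = x
print(subst_n(body, 'x', 0, ('var', 'x')))     # ('var', 'x')

# {e/x}^3 x wraps x in six succ constructors: three-fold "add two"
t = subst_n(body, 'x', 3, ('var', 'x'))
count = 0
while t[0] == 'succ':
    count, t = count + 1, t[1]
print(count, t)                                # 6 ('var', 'x')
```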

(c) Give the remaining rules of the big-step operational semantics for the entire language.

    ──────────────
    zero ⇓ zero

    e ⇓ n
    ──────────────────────
    succ(e) ⇓ succ(n)

    ───────────────────────
    λx:τ. e ⇓ λx:τ. e

    e₁ ⇓ λx:τ. e    e₂ ⇓ v₂    {v₂/x}e ⇓ v
    ──────────────────────────────────────────
    e₁ e₂ ⇓ v

5. (Machines) Big- and small-step semantics leave certain aspects of the mechanical execution of programs implicit, thereby being somewhat unrealistic when it comes to modelling how programs are actually implemented on real computers. Abstract machines make some of these aspects more explicit.

(a) Which part of the semantics is being made explicit by the C machine, and what data structure is used to manifestly represent that part?

The C machine makes control explicit by representing the remainder of the computation as a stack of frames. The stack remembers any part of the computation that had to be postponed until after the current expression has been fully evaluated.
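The frame stack can be seen in code. Below is a sketch of a C-machine-style evaluator for just the zero/succ/λ/application fragment (iter omitted for brevity; the tuple encodings and frame names are mine, not the course's notation, and terms are assumed closed):

```python
def subst(v, x, e):
    tag = e[0]
    if tag == 'var':
        return v if e[1] == x else e
    if tag == 'zero':
        return e
    if tag == 'succ':
        return ('succ', subst(v, x, e[1]))
    if tag == 'lam':
        return e if e[1] == x else ('lam', e[1], subst(v, x, e[2]))
    return ('app', subst(v, x, e[1]), subst(v, x, e[2]))

def c_machine(e):
    """Evaluate e with an explicit stack of frames recording postponed
    work: the two machine modes evaluate an expression ('eval') or
    return a value to the topmost frame ('ret')."""
    stack, mode = [], 'eval'
    while True:
        if mode == 'eval':
            if e[0] == 'succ':
                stack.append(('succ',)); e = e[1]        # postpone succ(...)
            elif e[0] == 'app':
                stack.append(('arg', e[2])); e = e[1]    # postpone the argument
            else:
                mode = 'ret'                             # zero, lam: values
        else:
            if not stack:
                return e                                 # empty stack: done
            frame = stack.pop()
            if frame[0] == 'succ':
                e = ('succ', e)                          # rebuild the value
            elif frame[0] == 'arg':                      # function done: eval arg
                stack.append(('fun', e)); e = frame[1]; mode = 'eval'
            else:                                        # ('fun', lam): beta
                f = frame[1]
                e = subst(e, f[1], f[2]); mode = 'eval'

inc = ('lam', 'x', ('succ', ('var', 'x')))
print(c_machine(('app', inc, ('succ', ('zero',)))))      # succ(succ(zero))
```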

(b) Which artificial aspect of the semantics is being eliminated by going from the C machine to the E machine? Which additional data structure is used there, and what does it represent?

The E machine eliminates substitutions by representing pending substitutions in a data structure called an environment. Environments are finite mappings from variables to (machine) values.

(c) Which of the following language features can cause control frames to outlive the time when they are popped off the stack?

    exceptions (try ... ow ... and fail)
    first-class continuations (letcc and throw)
    higher-order functions
    mutable state (ref, ! and :=)

First-class continuations (and nothing else).

6. (Extra credit) We have seen that many type constructors missing from System F can be simulated via Church encodings. Examples for this included natural numbers, booleans, general product and sum types, inductive types such as lists or trees, and even existentials. Notably absent from this list are recursive types. Do you think that this is just a coincidental omission, or is there a fundamental reason why general recursive types cannot be Church-encoded in pure System F? If you think that they can be simulated, try to sketch out an encoding. (Your encoding does not need to be complete. For example, you don't have to show the encoding of roll and unroll. Showing the encoding of μα.τ would suffice.) If, on the other hand, you think that this is not possible, give an informal proof of this fact (using your knowledge of other

facts regarding System F).

It is not possible to Church-encode recursive types. All programs in pure System F terminate, but the addition of recursive types makes the language Turing-complete. Church-encoded types are still plain System F types, and they cannot raise the expressive power of the language, so recursive types, whose addition does raise the expressive power, are out of reach.