Strength; Weakest Preconditions


CS 536: Science of Programming, Spring 2018
James Sasaki, Illinois Institute of Technology

A. Why

To combine correctness triples, we need to weaken and strengthen conditions. A weakest precondition is the most general requirement that a program must meet to produce the right result.

B. Objectives

At the end of today you should understand
What weakening and strengthening are.
What weakest preconditions (wp) and weakest liberal preconditions (wlp) are.
How the domain predicate of an expression ensures it doesn't cause runtime failure.
How to calculate the wp and wlp of loop-free programs, with the help of domain predicates.

C. Stronger and Weaker Predicates

Generalizing the Sequence Rule: We've already seen that two triples {p} S₁ {q} and {q} S₂ {r} can be combined to form the sequence {p} S₁; S₂ {r}. Say we want to combine two triples that don't have a common middle condition, {p} S₁ {q} and {q′} S₂ {r}. We can do this if q ⇒ q′: If S₁ terminates in state τ and {p} S₁ {q} is valid, then τ ⊨ q. If q ⇒ q′, then τ ⊨ q′, so if {q′} S₂ {r} is valid, then if running S₂ in τ terminates, it terminates in a state satisfying r. This reasoning works both for partial and total correctness. Our earlier rule, with q′ ≡ q, is a special case of this more general rule (since q always implies itself).

Definition: If q ⇒ r, then q is stronger than r and r is weaker than q. I.e., the set of states that satisfy q is a subset of the set of states that satisfy r. (Technically, we should say "stronger than or equal to" and "weaker than or equal to", since we can have q ≡ r, but it's too much of a mouthful. Then "strictly stronger" means stronger than or equal to and not equal to; "strictly weaker" is similar.)

Note: One notation for implication is p ⊃ q; it's important not to read that ⊃ as a superset symbol, since the set of states for p is a subset of the set of states for q.

Example 1: x = 0 is stronger than x = 0 ∨ x = 1, which is stronger than x ≥ 0.

A predicate corresponds to the set of states that satisfy it, so we can use Venn diagrams with sets of states to display comparisons of predicate strength. In Figure 1, p ⇒ q because the set of states for p is a subset of the set of states for q. We don't have p ⇒ r, q ⇒ r, r ⇒ p, or r ⇒ q, but we do have p ⇒ ¬r (all of p is outside r), r ⇒ ¬p, and ¬q ⇒ ¬p. In Figure 2, we see that stronger predicates stand for smaller sets of states. No states satisfy F (false), so it is the strongest predicate; F stands for ∅, the empty set of states. Weaker predicates stand for larger sets of states. Since every state satisfies T (true), T stands for Σ, the set of all states.
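The "stronger means a smaller set of states" view is easy to check by brute force on a tiny state space. The following Python sketch is only an illustration (the state space and the helper names states and stronger are invented for it); it models predicates as boolean functions and confirms Example 1.

    # A small sketch of "stronger = subset of states": predicates are modeled
    # as boolean functions over a tiny finite state space (x in -3..3).
    states = [{"x": x} for x in range(-3, 4)]          # a finite stand-in for Sigma

    def stronger(p, q):
        """p is stronger than q iff every state satisfying p also satisfies q."""
        return all(q(s) for s in states if p(s))

    p = lambda s: s["x"] == 0                          # x = 0
    q = lambda s: s["x"] == 0 or s["x"] == 1           # x = 0 or x = 1
    r = lambda s: s["x"] >= 0                          # x >= 0

    print(stronger(p, q), stronger(q, r))   # True True   (Example 1)
    print(stronger(r, p))                   # False       (x >= 0 does not imply x = 0)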

[Figure 1: Predicates as sets of states (p, q, r drawn as regions of Σ). Figure 2: Weaker predicates stand for larger sets of states; F (the empty set ∅) is strongest and T (all of Σ) is weakest.]

D. Strengthening and Weakening Conditions

This idea of freely going from stronger to weaker predicates applies to individual triples, not just sequences. Both of the following properties are valid for both partial and total correctness of triples (i.e., for ⊨ and ⊨tot).
Preconditions can always be strengthened: If p₀ ⇒ p₁ and {p₁} S {q}, then {p₀} S {q}.
Postconditions can always be weakened: If q₀ ⇒ q₁ and {p} S {q₀}, then {p} S {q₁}.

Example 2: If {x ≥ 0} S {y = 0} is valid, then so are
{x ≥ 0} S {y = 0 ∨ y = 1} (weakened postcondition)
{x = 0} S {y = 0} (strengthened precondition)
{x = 0} S {y = 0 ∨ y = 1} (strengthened precondition and weakened postcondition)
Note these rules still apply if p₀ ≡ p₁ or q₀ ≡ q₁.

Example 3: Let p₀ ≡ s = sum(0, k) and p₁ ≡ s+k+1 = sum(0, k+1). Since p₀ ⇒ p₁, we can strengthen {p₁} S {q} to {p₀} S {q} (where q ≡ s = sum(0, k)).

Just because we can strengthen preconditions and weaken postconditions doesn't mean we should. In the limit, strengthening preconditions and weakening postconditions gives us uninteresting correctness triples:
σ ⊨ {F} S {q} and σ ⊨tot {F} S {q} have the strongest possible precondition.
σ ⊨ {p} S {T} has the weakest possible postcondition.
σ ⊨tot {p} S {T} also has the weakest possible postcondition, but it does tell us that S terminates when you start it in p.

From the programmer's point of view, if {p} S {q} has a bug, then strengthening p or weakening q can get rid of the bug without changing S.

Example 4: If {p} S {q} causes an error if x = 0, we can tell the user to use {p ∧ x ≠ 0} S {q}.

Example 5: If {p} S {q} causes an error because S terminates with y = 1 (and y = 1 does not imply q), we can tell the user to use {p} S {q ∨ y = 1}.
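These two rules can also be checked by brute force when the state space is finite and S is simple. The sketch below is an illustration only: it assumes a toy deterministic program S (y := 0) and checks the triples from Example 2 by running S from every state that satisfies the precondition.

    # Brute-force check that strengthening a precondition and weakening a
    # postcondition preserves validity, on a finite state space.
    states = [{"x": x, "y": y} for x in range(-2, 3) for y in range(-2, 3)]

    def S(s):                       # the toy program "y := 0"
        t = dict(s); t["y"] = 0; return t

    def valid(p, q):                # partial = total here: S always terminates
        return all(q(S(s)) for s in states if p(s))

    p1 = lambda s: s["x"] >= 0;  p0 = lambda s: s["x"] == 0        # p0 => p1
    q0 = lambda s: s["y"] == 0;  q1 = lambda s: s["y"] in (0, 1)   # q0 => q1

    print(valid(p1, q0))   # True: {x >= 0} S {y = 0}
    print(valid(p0, q0))   # True: strengthened precondition (Example 2)
    print(valid(p1, q1))   # True: weakened postcondition
    print(valid(p0, q1))   # True: both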

From the user's point of view, weaker preconditions and stronger postconditions make triples more useful:
A weaker precondition adds starting states: If p ⇒ q, then going from {p} S {r} to {q} S {r} provides more information, because it increases the set of states that we say S can start in and end with r. Basically, we combine {p} S {r} and {¬p ∧ q} S {r} to get {q} S {r}.
A stronger postcondition removes ending states: If q ⇒ r, then going from {p} S {r} to {p} S {q} provides more information, because it decreases the set of states we say S will end in.

E. The Weakest Precondition (wp)

If {p} S {q} is valid, it's often the case that p only describes a subset of the set of all states that lead to satisfaction of q. I.e., we can often weaken p to some other predicate that still works as a precondition for S and q.

Say {p} S {q} is valid but {¬p} S {q} is satisfiable: Say {r} S {q} is valid, where r ⇒ ¬p. Since {p} S {q} and {r} S {q} are both valid, so is {p ∨ r} S {q}. Since r ⇒ ¬p, we know p ⇒ p ∨ r is valid but p ∨ r ⇒ p is not, so p is strictly stronger than p ∨ r: The set of states that satisfy p ∨ r is a strict superset of the set of states that satisfy p. This means that going from {p} S {q} to {p ∨ r} S {q} weakens the precondition for S and q: It increases the set of states we can start in and end up with q satisfied.

Though we can strengthen a precondition arbitrarily far (to F in the limit), we usually can't weaken a precondition all the way to T: We keep weakening p while maintaining {p} S {q}, but eventually we get to a p that can't be weakened any more and still get us to q. In particular, if {¬p} S {q} is unsatisfiable, then there's no r that implies ¬p that we can use to weaken p to p ∨ r while maintaining satisfaction of q.

Example 6: If x ∈ N, then {x ≥ 6} x := x*x {x ≥ 4} is valid, but it's also the case that {x < 6} x := x*x {x ≥ 4} is satisfiable (e.g., when x = 5), so we can extend x ≥ 6 with x = 5 (i.e., x ≥ 6 ∨ x = 5) and maintain validity. On the other hand, {x ≥ 2} x := x*x {x ≥ 4} is valid and {x < 2} x := x*x {x ≥ 4} is unsatisfiable, so there's no way to weaken x ≥ 2 and maintain validity.

Definition: The weakest precondition of program S and postcondition q, written wp(S, q), is the predicate w such that ⊨tot {w} S {q} and for every σ ⊨ ¬w, we have M(S, σ) ⊭ q (i.e., M(S, σ) includes ⊥ or for some τ ∈ M(S, σ), we have τ ⊨ ¬q).

Example 7: wp(x := y*y, x ≥ 4) ≡ y*y ≥ 4. If σ ⊨ y*y ≥ 4, then M(S, σ) ⊨ x ≥ 4. If σ ⊨ y*y < 4, then M(S, σ) ⊭ x ≥ 4.

Example 8: If x ∈ Z, then wp(if x ≥ 0 then y := x else y := -x fi, y = abs(x)) ≡ T. No matter what state we start in, this program sets y to the absolute value of x. Equivalently, in no state does it not set y to the absolute value of x. Note that since you can't weaken T, if T is a valid precondition for S and q, then it's also the weakest precondition for S and q.
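For a deterministic, error-free program over a finite state space, wp(S, q) can be computed directly from the definition by running S from every state. The sketch below is an illustration (the state space and names are invented); it confirms Example 7's claim that wp(x := y*y, x ≥ 4) is y*y ≥ 4.

    # Brute-force wp for a deterministic, error-free program: wp(S, q) is
    # exactly the set of start states whose (unique) final state satisfies q.
    states = [{"x": x, "y": y} for x in range(-3, 4) for y in range(-3, 4)]

    def S(s):                                   # x := y*y
        t = dict(s); t["x"] = s["y"] * s["y"]; return t

    q = lambda s: s["x"] >= 4

    wp_states = {(s["x"], s["y"]) for s in states if q(S(s))}
    claimed   = {(s["x"], s["y"]) for s in states if s["y"] * s["y"] >= 4}

    print(wp_states == claimed)   # True: wp(x := y*y, x >= 4) is y*y >= 4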

The wp(S, q) concept is important, so it's good to look into it more. First, as a predicate, wp(S, q) is only unique up to logical equivalence. In some sense, a predicate is just a name for the set of states that satisfy it, which is why (e.g.) x = 0 ≡ x-1 = -1. So writing w ≡ wp(S, q) means that w can be any of the logically equivalent predicates that are satisfied by exactly the set of states {σ ∈ Σ | M(S, σ) ⊨ q}, which is the definition of wp(S, q) as a set of states.

If w ≡ wp(S, q), then w has two properties. First, it's a precondition for S and q: ⊨tot {w} S {q}. Note that since w is a precondition, any stronger p that implies w is also a precondition: If p ⇒ w, then if σ ⊨ p then σ ⊨ w, so M(S, σ) ⊨ q. Since σ ⊨tot {p} S {q} holds for any σ, we know p is a valid precondition.

The second property for w, being the weakest precondition, can be characterized a couple of ways. In the definition of w ≡ wp(S, q), the "weakest precondition" part is (σ ⊨ ¬w ⇒ M(S, σ) ⊭ q). This turns out to be equivalent to saying that any valid precondition for S and q must be stronger than w: If ⊨tot {p} S {q}, then p ⇒ w. To see this, take the contrapositive of (σ ⊨ ¬w ⇒ M(S, σ) ⊭ q), which is (M(S, σ) ⊨ q ⇒ σ ⊭ ¬w), which in turn implies σ ⊨ w, since σ is a state. If σ ⊨tot {p} S {q}, then σ ⊨ p implies M(S, σ) ⊨ q, which implies σ ⊨ w, so σ ⊨ p → w. Since this holds for any σ, we know p ⇒ w is valid.

So if w is any valid precondition (⊨tot {w} S {q}), then p ⇒ w implies ⊨tot {p} S {q}. Only if w is the weakest precondition do we get the other direction: ⊨tot {p} S {q} implies p ⇒ w.

F. wp and Deterministic Programs

If S is deterministic, then S leads to a unique result: M(S, σ) = {τ} for some τ ∈ Σ ∪ {⊥}. If S terminates normally (τ ∈ Σ), then the start state σ is part of either wp(S, q) or wp(S, ¬q), depending on whether τ satisfies q or ¬q. Since wp(S, q) is the set of states that lead to satisfaction of q, ¬wp(S, q) is the set of states that lead to an error or to satisfaction of ¬q. Similarly, ¬wp(S, ¬q) is the set of states that lead to an error or to satisfaction of q. The intersection of these two sets, ¬wp(S, q) ∧ ¬wp(S, ¬q), is the set of states that lead to an error.

Since σ must lead S either to termination satisfying q, termination satisfying ¬q, or nontermination, every state satisfies exactly one of wp(S, q), wp(S, ¬q), and ¬wp(S, q) ∧ ¬wp(S, ¬q). Let E ≡ ¬wp(S, q) ∧ ¬wp(S, ¬q); then we get the identities
¬wp(S, q) ≡ E ∨ wp(S, ¬q): the negation of "S terminates with q true" is "S doesn't terminate or it terminates with q false".
¬wp(S, ¬q) ≡ E ∨ wp(S, q) is symmetric: the negation of "S terminates with q false" is "S doesn't terminate or it terminates with q true".
If S contains a loop and M(S, σ) diverges (and σ ∈ Σ), then σ ⊨ ¬wp(S, q) ∧ ¬wp(S, ¬q). (See Figure 3.)

On the left of Figure 3 is the set of all states Σ, broken up into three partitions:
The states that establish q form wp(S, q) = {σ ∈ Σ | M(S, σ) = {τ} and τ ⊨ q}.
The states that establish ¬q form wp(S, ¬q) = {σ ∈ Σ | M(S, σ) = {τ} and τ ⊨ ¬q}.
The states that lead to ⊥ form ¬wp(S, q) ∧ ¬wp(S, ¬q) = {σ ∈ Σ | M(S, σ) = {⊥}}.
The arrows indicate that starting from wp(S, q) or wp(S, ¬q) yields a state that satisfies q or ¬q respectively. Starting from a state outside both weakest preconditions leads to an error.

[Figure 3: The weakest precondition for deterministic S. Σ is partitioned into wp(S, q) (S converges to q), wp(S, ¬q) (S converges to ¬q), and E (S yields an error).]
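The three-way split can be seen concretely by classifying each start state by what M(S, σ) contains. The sketch below is only an illustration: it uses a made-up deterministic program x := y/x, where division by zero stands in for the error outcome ⊥.

    # Every state falls into wp(S, q), wp(S, not q), or E for deterministic S.
    BOT = "bottom"                                   # stands in for the error outcome
    states = [{"x": x, "y": y} for x in range(-2, 3) for y in range(-2, 3)]

    def M(s):                                        # M(S, sigma) for "x := y / x"
        if s["x"] == 0:
            return {BOT}
        t = dict(s); t["x"] = s["y"] // s["x"]; return {frozenset(t.items())}

    def q(t):                                        # q is "x >= 0"
        return dict(t)["x"] >= 0

    wp_q, wp_notq, E = [], [], []
    for s in states:
        out = M(s)
        if BOT in out:                E.append(s)
        elif all(q(t) for t in out):  wp_q.append(s)
        else:                         wp_notq.append(s)

    print(len(wp_q) + len(wp_notq) + len(E) == len(states))   # True: the sets partition Sigma
    print(all(s["x"] == 0 for s in E))                        # True: errors exactly when x = 0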

G. wp and Nondeterministic Programs

If S is nondeterministic, then M(S, σ) is a nonempty subset of Σ ∪ {⊥} that can contain more than one member. If everything in M(S, σ) satisfies q, then σ ⊨ wp(S, q). Similarly, if everything in M(S, σ) satisfies ¬q, then σ ⊨ wp(S, ¬q). As with deterministic programs, the three predicates wp(S, q), wp(S, ¬q), and ¬wp(S, q) ∧ ¬wp(S, ¬q) together cover all possible states.

With deterministic programs, σ ⊨ ¬wp(S, q) ∧ ¬wp(S, ¬q) iff running S in σ causes an error. This is because M(S, σ) is a singleton set {τ}, and τ must either be ⊥ or satisfy q or satisfy ¬q. For deterministic S: σ ⊨ ¬wp(S, q) ∧ ¬wp(S, ¬q) iff M(S, σ) = {⊥}.

But with nondeterministic programs, even if M(S, σ) doesn't include ⊥ (i.e., σ ⊨tot {T} S {T}), it can still mix states that satisfy q with states that satisfy ¬q. A set like that satisfies neither q nor ¬q. Figure 4 illustrates M(S, σ) ⊭ q and M(S, σ) ⊭ ¬q because M(S, σ) ⊇ {τ₁, τ₂}, where τ₁ ⊨ q and τ₂ ⊨ ¬q. For nondeterministic S, σ ⊨ ¬wp(S, q) ∧ ¬wp(S, ¬q) iff (1) M(S, σ) includes ⊥ or (2) there exist τ₁, τ₂ ∈ M(S, σ) with τ₁ ⊨ q and τ₂ ⊨ ¬q.

[Figure 4: σ ⊨ ¬wp(S, q) ∧ ¬wp(S, ¬q) even though S terminates: M(S, σ) ⊇ {τ₁, τ₂} with τ₁ ⊨ q and τ₂ ⊨ ¬q.]

Example 9: Let S ≡ if x ≥ 0 → x := 10 □ x ≤ 0 → x := 20 fi, and let Σ₀ = M(S, {x = 0}) be the set with two states, {{x = 10}, {x = 20}}. Then Σ₀ ⊭ x = 10, Σ₀ ⊭ x ≠ 10, Σ₀ ⊭ x = 20, and Σ₀ ⊭ x ≠ 20. (We do have Σ₀ ⊨ x = 10 ∨ x = 20.)
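Example 9 is easy to replay concretely. The sketch below is an illustration only; it models M(S, σ) for that nondeterministic if as the set of possible final values of x and checks which postconditions the result set satisfies.

    # Example 9: from x = 0 both guards are enabled, so M(S, sigma) has two
    # outcomes, and the result set satisfies neither x = 10 nor x = 20.
    def M(s):                                   # if x >= 0 -> x := 10  []  x <= 0 -> x := 20  fi
        outs = set()
        if s["x"] >= 0: outs.add(10)
        if s["x"] <= 0: outs.add(20)
        return outs                             # possible final values of x

    def sat(outcomes, pred):                    # "outcomes satisfy pred": every outcome satisfies pred
        return all(pred(x) for x in outcomes)

    sigma0 = M({"x": 0})                        # {10, 20}
    print(sigma0)
    print(sat(sigma0, lambda x: x == 10), sat(sigma0, lambda x: x == 20))   # False False
    print(sat(sigma0, lambda x: x == 10 or x == 20))                        # True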

H. Disjunctive Postconditions

There are some relationships that hold between the wp of a predicate and the wp's of its subpredicates. E.g., if you start in a state that is guaranteed to lead to a result that satisfies q₁ and q₂ separately, then the result will also satisfy q₁ ∧ q₂, and vice versa. In symbols, wp(S, q₁) ∧ wp(S, q₂) ≡ wp(S, q₁ ∧ q₂). This relationship holds for both deterministic and nondeterministic S.

The relationship between wp(S, q₁ ∨ q₂) and wp(S, q₁) and wp(S, q₂) differs for deterministic and nondeterministic S.
Deterministic S: For all S, wp(S, q₁) ∨ wp(S, q₂) ≡ wp(S, q₁ ∨ q₂).
Nondeterministic S: For all S, wp(S, q₁) ∨ wp(S, q₂) ⇒ wp(S, q₁ ∨ q₂), but the reverse implication doesn't hold for some S.

For deterministic S, M(S, σ) = {τ} for some τ ∈ Σ ∪ {⊥}. If τ ⊨ q₁ ∨ q₂, then either τ ⊨ q₁ or τ ⊨ q₂ (or both). So if M(S, σ) ≠ {⊥}, then M(S, σ) ⊨ q₁ ∨ q₂ iff M(S, σ) ⊨ q₁ or M(S, σ) ⊨ q₂. Because of this, wp(S, q₁) ∨ wp(S, q₂) ≡ wp(S, q₁ ∨ q₂).

For nondeterministic S, we still have wp(S, q₁) ∨ wp(S, q₂) ⇒ wp(S, q₁ ∨ q₂). I.e., if you start in a state that's guaranteed to terminate satisfying q₁, or guaranteed to terminate satisfying q₂, then that state is guaranteed to terminate satisfying q₁ ∨ q₂. For nondeterministic S, the other direction, wp(S, q₁ ∨ q₂) ⇒ wp(S, q₁) ∨ wp(S, q₂), doesn't always hold: S can guarantee establishing q₁ ∨ q₂ without leaving any way to guarantee satisfaction of just q₁ or just q₂.

Example 10: Let CoinFlip ≡ if T → x := 0 □ T → x := 1 fi. For all σ, M(CoinFlip, σ) = {{x = 0}, {x = 1}}, which satisfies x = 0 ∨ x = 1 but satisfies neither x = 0 nor x = 1. Let Heads ≡ wp(CoinFlip, x = 0), Tails ≡ wp(CoinFlip, x = 1), and Heads_or_Tails ≡ wp(CoinFlip, x = 0 ∨ x = 1). We find Heads ≡ Tails ≡ F but Heads_or_Tails ≡ T. Altogether, (Heads ∨ Tails) ⇒ (but not ⇐) Heads_or_Tails.

So for nondeterministic S, even though ⊨tot {wp(S, q)} S {q}, if q is disjunctive, it's possible for you to run S in a state σ ⊨ ¬wp(S, q) but still terminate without error in a state satisfying q. (For deterministic S, this won't happen.) E.g., if S ≡ if B then x := 0 else x := 1 fi, then M(S, σ) ⊨ x = 0 or M(S, σ) ⊨ x = 1, and wp(S, x = 0) ≡ B and wp(S, x = 1) ≡ ¬B.
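The CoinFlip example can be replayed by brute force. The sketch below is an illustration only (wp here is computed by enumeration, not syntactically): it shows that wp(CoinFlip, x = 0) and wp(CoinFlip, x = 1) are both F while wp(CoinFlip, x = 0 ∨ x = 1) is T.

    # Example 10: for nondeterministic CoinFlip, wp(S, q1 or q2) is strictly
    # weaker than wp(S, q1) or wp(S, q2).
    states = [{"x": x} for x in range(-2, 3)]

    def CoinFlip(s):                       # if T -> x := 0  []  T -> x := 1  fi
        return [{"x": 0}, {"x": 1}]        # both outcomes are always possible

    def wp(M, q):                          # start states where EVERY outcome satisfies q
        return [s for s in states if all(q(t) for t in M(s))]

    heads          = wp(CoinFlip, lambda t: t["x"] == 0)
    tails          = wp(CoinFlip, lambda t: t["x"] == 1)
    heads_or_tails = wp(CoinFlip, lambda t: t["x"] in (0, 1))

    print(heads, tails)                         # [] []  -- both are F
    print(len(heads_or_tails) == len(states))   # True   -- it is T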

I. The Weakest Liberal Precondition (wlp)

The relationship between the weakest precondition (wp) and the weakest liberal precondition (wlp) is the same as total vs. partial correctness. wp(S, q) is the set of start states that guarantee termination establishing q. wlp(S, q) is the set of start states that guarantee either causing an error or terminating establishing q.

Definition: The weakest liberal precondition of S and q, written wlp(S, q), is the predicate w such that ⊨ {w} S {q} and for every σ ⊨ ¬w, M(S, σ) ⊆ Σ and M(S, σ) ⊭ q.

If we start in a state σ satisfying wlp(S, q), then either some execution path for S in σ causes an error or else all execution paths for S in σ lead to final states that satisfy q. If we start in a σ satisfying ¬wlp(S, q), then every execution path for S in σ leads to a final state, and at least one of the final states satisfies ¬q.

We always have wp(S, q) ⇒ wlp(S, q); the other direction, wlp(S, q) ⇒ wp(S, q), only holds if S never causes an error.

Example 11: Let W ≡ while x ≠ 0 do x := x-1; y := 0 od, and let q ≡ x = 0 ∧ y = 0 (so ¬q ≡ x ≠ 0 ∨ y ≠ 0). Then for M(W, σ):
If σ ⊨ x = 0, then M(W, σ) = {σ}.
If σ ⊨ x > 0, then M(W, σ) = {σ[x↦0][y↦0]}.
If σ ⊨ x < 0, then M(W, σ) = {⊥}.
So wp(W, q) ≡ x > 0 ∨ x = y = 0 and wlp(W, q) ≡ wp(W, q) ∨ x < 0.
Also, wp(W, ¬q) ≡ x = 0 ∧ y ≠ 0 and wlp(W, ¬q) ≡ wp(W, ¬q) ∨ x < 0.
For the wp, ¬q ≡ x ≠ 0 ∨ y ≠ 0, but there's no way for W to terminate satisfying x ≠ 0, and the only way for W to terminate with y ≠ 0 is to start with x = 0.

That wlp is the weakest precondition for partial correctness is analogous to wp being the weakest precondition for total correctness: If w ≡ wlp(S, q), then ⊨ {w} S {q}, and for all p, ⊨ {p} S {q} iff p ⇒ w.

J. Calculating wlp for Loop-Free Programs

It's easy to calculate the wlp of a loop-free program. If a loop-free program cannot cause a runtime error, then its wp and wlp are the same, which is also nice. The following algorithm takes S and q, where S has no loops, and syntactically calculates a particular predicate for wlp(S, q), which is why it's described using wlp(S, q) instead of wp(S, q).

wlp(skip, q) ≡ q

wlp(v := e, Q(v)) ≡ Q(e), where Q is a predicate function over one variable. The operation that takes us from Q(v) to Q(e) is called syntactic substitution; we'll look at it in more detail soon, but in the simple case, we simply inspect the definition of Q, searching its text for occurrences of the variable v and replacing them with copies of e.

wlp(S₁; S₂, q) ≡ wlp(S₁, wlp(S₂, q))

wlp(if B then S₁ else S₂ fi, q) ≡ (B → w₁) ∧ (¬B → w₂), where w₁ ≡ wlp(S₁, q) and w₂ ≡ wlp(S₂, q). If you want, you can write (B ∧ w₁) ∨ (¬B ∧ w₂), which is equivalent.

wlp(if B₁ → S₁ □ B₂ → S₂ fi, q) ≡ (B₁ → w₁) ∧ (B₂ → w₂), where w₁ ≡ wlp(S₁, q) and w₂ ≡ wlp(S₂, q). For the nondeterministic if, don't write (B₁ ∧ w₁) ∨ (B₂ ∧ w₂) instead of (B₁ → w₁) ∧ (B₂ → w₂); they aren't logically equivalent. When B₁ and B₂ are both true, either S₁ or S₂ can run, so we need B₁ ∧ B₂ → w₁ ∧ w₂. Using (B₁ ∧ w₁) ∨ (B₂ ∧ w₂) fails because it allows for the possibility that B₁ and B₂ are both true but one of w₁ and w₂ is not true. This isn't a problem when B₂ ≡ ¬B₁, which is why we can use (B ∧ w₁) ∨ (¬B ∧ w₂) with deterministic if statements.
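These clauses translate almost directly into a recursive procedure. The sketch below is an illustration only: programs are nested tuples, predicates are strings, and assignment uses naive whole-word textual substitution, which is enough for these small examples but is not the careful treatment of substitution promised above.

    # A small symbolic wlp calculator for loop-free programs.
    import re

    def subst(pred, v, e):
        """Q(v) -> Q(e): replace whole-word occurrences of v by (e)."""
        return re.sub(rf"\b{re.escape(v)}\b", f"({e})", pred)

    def wlp(S, q):
        kind = S[0]
        if kind == "skip":
            return q
        if kind == "assign":                       # ("assign", v, e)
            _, v, e = S
            return subst(q, v, e)
        if kind == "seq":                          # ("seq", S1, S2)
            _, S1, S2 = S
            return wlp(S1, wlp(S2, q))
        if kind == "if":                           # ("if", B, S1, S2): deterministic
            _, B, S1, S2 = S
            return f"(({B}) -> {wlp(S1, q)}) and ((not ({B})) -> {wlp(S2, q)})"
        if kind == "ndif":                         # ("ndif", B1, S1, B2, S2)
            _, B1, S1, B2, S2 = S
            return f"(({B1}) -> {wlp(S1, q)}) and (({B2}) -> {wlp(S2, q)})"
        raise ValueError(f"unknown statement {kind}")

    prog = ("seq", ("assign", "y", "y+x"), ("assign", "x", "x+1"))
    print(wlp(prog, "x <= y"))     # (x+1) <= (y+x), i.e. x+1 <= y+x (compare Example 14 below)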

Some examples of calculating wp/wlp follow. The programs in these examples always end in a state in Σ, so the wp and wlp are equivalent.

Example 12: wp(x := x+1, x ≥ 0) ≡ x+1 ≥ 0

Example 13: wp(y := y+x; x := x+1, x ≥ 0) ≡ wp(y := y+x, wp(x := x+1, x ≥ 0)) ≡ wp(y := y+x, x+1 ≥ 0) ≡ x+1 ≥ 0

Example 14: wp(y := y+x; x := x+1, x ≤ y) ≡ wp(y := y+x, wp(x := x+1, x ≤ y)) ≡ wp(y := y+x, x+1 ≤ y) ≡ x+1 ≤ y+x. If we were asked only to calculate the wp, we'd stop here. If we also wanted to logically simplify the wp, then x+1 ≤ y+x ≡ y ≥ 1.

Example 15: (Swap the two assignments in Example 14.) wp(x := x+1; y := y+x, x ≤ y) ≡ wp(x := x+1, wp(y := y+x, x ≤ y)) ≡ wp(x := x+1, x ≤ y+x) ≡ x+1 ≤ y+x+1 [≡ y ≥ 0, if you want to logically simplify].

Example 16: wp(if y ≥ 0 then x := y fi, x ≥ 0) ≡ wp(if y ≥ 0 then x := y else skip fi, x ≥ 0) ≡ (y ≥ 0 ∧ wp(x := y, x ≥ 0)) ∨ (y < 0 ∧ wp(skip, x ≥ 0)) ≡ (y ≥ 0 ∧ y ≥ 0) ∨ (y < 0 ∧ x ≥ 0). If we want to simplify logically, we can continue with ≡ y ≥ 0 ∨ (y < 0 ∧ x ≥ 0) ≡ (y ≥ 0 ∨ y < 0) ∧ (y ≥ 0 ∨ x ≥ 0) ≡ y ≥ 0 ∨ x ≥ 0 (which is also y < 0 → x ≥ 0, if you prefer).

K. Avoiding Runtime Errors in Expressions

To avoid runtime failure of σ(e), we'll take the context in which we're evaluating e and augment it with a predicate that guarantees non-failure of σ(e). For example, for {P(e)} v := e {P(v)}, we'll augment the precondition to guarantee that evaluation of e won't fail.

Definition: For each expression e, we define a domain predicate D(e) where σ ⊨ D(e) implies σ(e) ≠ ⊥e. This predicate has to be defined recursively, since we need to handle complex expressions like D(b[b[i]]) ≡ 0 ≤ i < size(b) ∧ 0 ≤ b[i] < size(b). As with wp and sp, the domain predicate for an expression is unique only up to logical equivalence. For example, D(x/y + u/v) ≡ y ≠ 0 ∧ v ≠ 0 ≡ v ≠ 0 ∧ y ≠ 0.

We must define D for each kind of expression that can cause a runtime error:
D(b[e]) ≡ 0 ≤ e < size(b) ∧ D(e)
D(e₁/e₂) ≡ D(e₁ % e₂) ≡ e₂ ≠ 0 ∧ D(e₁) ∧ D(e₂)
D(sqrt(e)) ≡ e ≥ 0 ∧ D(e)
And so on, depending on the datatypes and operations being used.

We also have to check the subexpressions of larger expressions. If op doesn't cause errors, then
D(e₁ op e₂) ≡ D(e₁) ∧ D(e₂)
D(op e) ≡ D(e)
For conditional expressions, D(if B then e₁ else e₂ fi) ≡ (B → D(e₁)) ∧ (¬B → D(e₂)) ∧ D(B).
Evaluation of a variable or constant doesn't cause failure: D(e) ≡ T if e is some constant c or variable v.

Example 17: D(b[b[i]]) ≡ 0 ≤ b[i] < size(b) ∧ D(b[i]) ≡ 0 ≤ b[i] < size(b) ∧ 0 ≤ i < size(b) ∧ D(i) ≡ 0 ≤ i < size(b) ∧ 0 ≤ b[i] < size(b)

Example 18: D((-b + sqrt(b*b - 4*a*c))/(2*a)) ≡ 2*a ≠ 0 ∧ b*b - 4*a*c ≥ 0. In detail,
D((-b + sqrt(b*b - 4*a*c))/(2*a))
≡ 2*a ≠ 0 ∧ D(-b + sqrt(b*b - 4*a*c)) ∧ D(2*a)
≡ 2*a ≠ 0 ∧ D(-b) ∧ D(sqrt(b*b - 4*a*c))
≡ 2*a ≠ 0 ∧ (b*b - 4*a*c ≥ 0) ∧ D(b*b - 4*a*c)
≡ 2*a ≠ 0 ∧ (b*b - 4*a*c ≥ 0)

Example 19: D(if 0 ≤ i < size(b) then b[i] else 0 fi) ≡ T. In detail,
D(if 0 ≤ i < size(b) then b[i] else 0 fi)
≡ (B → D(b[i])) ∧ (¬B → D(0)) ∧ D(B)    where B ≡ 0 ≤ i < size(b)
≡ (B → D(b[i])) ∧ (¬B → T)    since D(B) ≡ D(0) ≡ T
≡ B → (0 ≤ i < size(b) ∧ D(i))    expanding D(b[i])
≡ T    since B ≡ 0 ≤ i < size(b)
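The recursive definition of D translates directly into code. The sketch below is an illustration only (the tuple AST and the names D and show are invented for it); it reproduces Example 17's result for D(b[b[i]]).

    # Domain predicates over a tiny expression AST.  The result is a list of
    # conjuncts (strings); an empty list means D(e) = T.
    def D(e):
        kind = e[0]
        if kind in ("var", "const"):
            return []                                        # D(v) = D(c) = T
        if kind == "div":                                    # D(e1/e2) = e2 != 0 and D(e1) and D(e2)
            _, e1, e2 = e
            return [f"{show(e2)} != 0"] + D(e1) + D(e2)
        if kind == "sqrt":                                   # D(sqrt(e)) = e >= 0 and D(e)
            _, e1 = e
            return [f"{show(e1)} >= 0"] + D(e1)
        if kind == "index":                                  # D(b[e]) = 0 <= e < size(b) and D(e)
            _, arr, e1 = e
            return [f"0 <= {show(e1)} < size({arr})"] + D(e1)
        if kind == "op":                                     # error-free binary op: D(e1) and D(e2)
            _, _, e1, e2 = e
            return D(e1) + D(e2)
        raise ValueError(kind)

    def show(e):
        kind = e[0]
        if kind in ("var", "const"): return str(e[1])
        if kind == "div":            return f"({show(e[1])} / {show(e[2])})"
        if kind == "sqrt":           return f"sqrt({show(e[1])})"
        if kind == "index":          return f"{e[1]}[{show(e[2])}]"
        if kind == "op":             return f"({show(e[2])} {e[1]} {show(e[3])})"

    # Example 17: D(b[b[i]]) = 0 <= b[i] < size(b) and 0 <= i < size(b)
    print(D(("index", "b", ("index", "b", ("var", "i")))))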

L. Avoiding Runtime Errors in Loop-Free Programs

Recall that we extended our notion of operational semantics to include ⟨S, σ⟩ →* ⟨E, ⊥e⟩ to indicate that evaluation of S causes a runtime failure. For loop-free programs, we can avoid runtime failure of statements by adding domain predicates to the preconditions of statements.

Definition: wp(S, q) extends the definition of wlp as follows:
wp(v := e, P(v)) ≡ D(e) ∧ wlp(v := e, P(v)) ≡ D(e) ∧ P(e)
wp(if B then S₁ else S₂ fi, q) ≡ D(B) ∧ (B → wp(S₁, q)) ∧ (¬B → wp(S₂, q))
wp(if B₁ → S₁ □ B₂ → S₂ fi, q) ≡ D(B₁) ∧ D(B₂) ∧ (B₁ ∨ B₂) ∧ (B₁ → wp(S₁, q)) ∧ (B₂ → wp(S₂, q))
The condition (B₁ ∨ B₂) ensures that the nondeterministic if-fi won't fail because none of the guard booleans hold. (This can easily be extended to if-fi with not just two arms.)
The two remaining clauses of wp match their wlp counterparts:
wp(skip, q) ≡ wlp(skip, q) ≡ q
wp(S₁; S₂, q) ≡ wp(S₁, wp(S₂, q))
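As a small illustration of the assignment clause wp(v := e, P(v)) ≡ D(e) ∧ P(e), the sketch below handles only division in D and uses the same naive whole-word substitution as the earlier wlp sketch; it reproduces the wp calculation of Example 20 below.

    # wp for assignments with a domain predicate, for expressions of the form
    # "num/den" only; everything else is treated as error-free.
    import re

    def subst(pred, v, e):
        return re.sub(rf"\b{re.escape(v)}\b", f"({e})", pred)

    def D_of(e):
        m = re.fullmatch(r"\s*(\w+)\s*/\s*(\w+)\s*", e)      # only handles "num/den"
        return [f"{m.group(2)} != 0"] if m else []

    def wp_assign(v, e, q):                                  # wp(v := e, q) = D(e) and q[e/v]
        return " and ".join(D_of(e) + [subst(q, v, e)])

    # Example 20: wp(x := y; z := v/x, z > x+2)
    inner = wp_assign("z", "v/x", "z > x+2")     # x != 0 and (v/x) > x+2
    outer = wp_assign("x", "y", inner)           # (y) != 0 and (v/(y)) > (y)+2, i.e. y != 0 and v/y > y+2
    print(inner)
    print(outer)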

Example 20:
wlp(x := y; z := v/x, z > x+2) ≡ wlp(x := y, wlp(z := v/x, z > x+2)) ≡ wlp(x := y, v/x > x+2) ≡ v/y > y+2
wp(x := y; z := v/x, z > x+2) ≡ wp(x := y, wp(z := v/x, z > x+2)) ≡ wp(x := y, x ≠ 0 ∧ v/x > x+2) ≡ y ≠ 0 ∧ v/y > y+2

Example 22: Let S ≡ if b[i] ≤ b[j] → z := b[i] □ b[j] ≤ b[i] → z := b[j] fi. Define a Q(v) that says v is the min of b[i] and b[j] via Q(v) ≡ (v = b[i] ∨ v = b[j]) ∧ v ≤ b[i] ∧ v ≤ b[j]. Note we can simplify Q(b[i]) ≡ b[i] ≤ b[j] and Q(b[j]) ≡ b[j] ≤ b[i]. Then
wp(S, Q(z))
≡ D(b[i] ≤ b[j]) ∧ D(b[j] ≤ b[i]) ∧ (b[i] ≤ b[j] ∨ b[j] ≤ b[i]) ∧ (b[i] ≤ b[j] → wp(z := b[i], Q(z))) ∧ (b[j] ≤ b[i] → wp(z := b[j], Q(z)))
≡ (0 ≤ i < size(b) ∧ 0 ≤ j < size(b)) ∧ (0 ≤ j < size(b) ∧ 0 ≤ i < size(b)) ∧ T ∧ (b[i] ≤ b[j] → wp(z := b[i], Q(z))) ∧ (b[j] ≤ b[i] → wp(z := b[j], Q(z)))
≡ 0 ≤ i, j < size(b)* ∧ (b[i] ≤ b[j] → Q(b[i])) ∧ (b[j] ≤ b[i] → Q(b[j]))
≡ 0 ≤ i, j < size(b) ∧ (b[i] ≤ b[j] → b[i] ≤ b[j]) ∧ (b[j] ≤ b[i] → b[j] ≤ b[i])
≡ 0 ≤ i, j < size(b)

* 0 ≤ i, j < size(b) is shorthand for 0 ≤ i < size(b) ∧ 0 ≤ j < size(b).
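Example 22's answer can also be confirmed by brute force over a small array. The sketch below is an illustration only: it enumerates candidate index values, models an out-of-range index as the error outcome, and checks that the start states in wp(S, Q(z)) are exactly those with 0 ≤ i, j < size(b).

    # Brute-force check of Example 22 on one fixed small array.
    from itertools import product

    BOT = "bottom"
    b = [3, 1, 2]                                       # a fixed small array

    def M(i, j):                                        # outcomes: possible final values of z
        if not (0 <= i < len(b) and 0 <= j < len(b)):
            return {BOT}                                # guard evaluation fails
        outs = set()
        if b[i] <= b[j]: outs.add(b[i])
        if b[j] <= b[i]: outs.add(b[j])
        return outs

    def Q(z, i, j):                                     # z is the min of b[i] and b[j]
        return z in (b[i], b[j]) and z <= b[i] and z <= b[j]

    for i, j in product(range(-1, 4), repeat=2):
        outs = M(i, j)
        in_wp = BOT not in outs and all(Q(z, i, j) for z in outs)
        assert in_wp == (0 <= i < len(b) and 0 <= j < len(b))
    print("Example 22 confirmed on this array: wp is exactly 0 <= i, j < size(b)")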

Activities 10 & 11: Strength; Weakest Preconditions
CS 536: Science of Programming, Illinois Institute of Technology

A. Why

The weakest precondition is the most general precondition that a program needs in order to run correctly.

B. Objectives

At the end of this activity you should be able to calculate the wp of a simple loop-free program.

C. Problems

For all the problems, just syntactically calculate the wp; don't also logically simplify the result.

1. Calculate wp(k := k - s, n = 3 ∧ k = 4 ∧ s = -7).
2. Calculate wp(n := n*(n-k), n = 3 ∧ k = 4 ∧ s = -7).
3. Calculate wp(if B then x := x/2 fi; y := x, x = 5 ∧ y = z).
4. Calculate wp(n := n*(n-k); k := k - s, n > k + s).
5. Calculate wp(if x ≥ 0 then x := x*2 else x := y fi; x := c*x, a ≤ x < y).

For Problems 6-8, let Q(i, s) ≡ 0 ≤ i ≤ n ∧ s = sum(0, i), where sum(u, v) is the sum of u, u+1, ..., v (when u ≤ v) or 0 (when u > v).

6. Calculate wp(s := s+i; i := i+1, Q(i, s)).
7. Calculate wp(s := s+i+1; i := i+1, Q(i, s)).
8. Calculate wp(i := i+1; s := s+i, Q(i, s)). (Problems 7 and 8 have the same result.)

9. Let w ≡ wlp(S, q). The definition of wlp tells us that if σ ⊨ ¬w, then M(S, σ) ⊆ Σ and M(S, σ) ⊭ q. Using this, show that w is the weakest precondition for partial correctness: if ⊨ {p} S {q}, then p ⇒ w. Hint: Make the argument analogous to the one for wp.

10. How are wp(S, q₁ ∨ q₂) and wp(S, q₁) ∨ wp(S, q₂) related if S is deterministic? If S is nondeterministic?

For Problems 11 and 12, be sure to include the domain predicates. If you want, logically simplify as you go.

11. Use wp to calculate a total correctness precondition p for {p} x := y/b[i] {x > 0}.
12. Use wp to calculate total correctness preconditions p₁ and p₂ for {p₁} y := sqrt(b[j]) {z < y} and {p₂} j := x/j {p₁}.

13. Which of the following statements are correct?
a. For all σ ∈ Σ, σ ⊨ wp(S, q) iff M(S, σ) ⊨ q.
b. For all σ ∈ Σ, σ ⊨ wlp(S, q) iff M(S, σ) ∩ Σ ⊨ q.
c. ⊨tot {wp(S, q)} S {q}.
d. ⊨ {wlp(S, q)} S {q}.
e. ⊨tot {p} S {q} iff p ⇒ wp(S, q).
f. ⊨ {p} S {q} iff p ⇒ wlp(S, q).
g. ⊨ {¬wp(S, q)} S {¬q}.
h. ⊨tot {¬wlp(S, q)} S {¬q}.
i. wlp(S, q) ∧ wlp(S, ¬q) is not satisfiable.
j. p ⇒ wp(S, q) iff ⊨tot {p} S {q}.
k. p ⇒ wlp(S, q) iff ⊨ {p} S {q}.

Activity 10 Solution

1. wp(k := k - s, n = 3 ∧ k = 4 ∧ s = -7) ≡ n = 3 ∧ k-s = 4 ∧ s = -7

2. wp(n := n*(n-k), n = 3 ∧ k = 4 ∧ s = -7) ≡ n*(n-k) = 3 ∧ k = 4 ∧ s = -7

3. wp(if B then x := x/2 fi; y := x, x = 5 ∧ y = z)
≡ wp(if B then x := x/2 fi, wp(y := x, x = 5 ∧ y = z))
≡ wp(if B then x := x/2 fi, x = 5 ∧ x = z)
≡ (B → wp(x := x/2, x = 5 ∧ x = z)) ∧ (¬B → wp(skip, x = 5 ∧ x = z))
≡ (B → x/2 = 5 ∧ x/2 = z) ∧ (¬B → x = 5 ∧ x = z)

4. wp(n := n*(n-k); k := k-s, n > k+s)
≡ wp(n := n*(n-k), wp(k := k-s, n > k+s))
≡ wp(n := n*(n-k), n > k-s+s)
≡ n*(n-k) > k-s+s

5. wp(if x ≥ 0 then x := x*2 else x := y fi; x := c*x, a ≤ x < y)
≡ wp(S, wp(x := c*x, a ≤ x < y))    where S is the if statement
≡ wp(S, a ≤ c*x < y)
≡ wp(if x ≥ 0 then x := x*2 else x := y fi, a ≤ c*x < y)
≡ (x ≥ 0 → wp(x := x*2, a ≤ c*x < y)) ∧ (x < 0 → wp(x := y, a ≤ c*x < y))
≡ (x ≥ 0 → a ≤ c*(x*2) < y) ∧ (x < 0 → a ≤ c*y < y)

For Problems 6-8, q ≡ 0 ≤ i ≤ n ∧ s = sum(0, i).

6. wp(s := s+i; i := i+1, q)
≡ wp(s := s+i, wp(i := i+1, 0 ≤ i ≤ n ∧ s = sum(0, i)))
≡ wp(s := s+i, 0 ≤ i+1 ≤ n ∧ s = sum(0, i+1))
≡ 0 ≤ i+1 ≤ n ∧ s+i = sum(0, i+1)

7. wp(s := s+i+1; i := i+1, q)
≡ wp(s := s+i+1, wp(i := i+1, 0 ≤ i ≤ n ∧ s = sum(0, i)))
≡ wp(s := s+i+1, 0 ≤ i+1 ≤ n ∧ s = sum(0, i+1))
≡ 0 ≤ i+1 ≤ n ∧ s+i+1 = sum(0, i+1)

8. wp(i := i+1; s := s+i, q)
≡ wp(i := i+1, wp(s := s+i, 0 ≤ i ≤ n ∧ s = sum(0, i)))
≡ wp(i := i+1, 0 ≤ i ≤ n ∧ s+i = sum(0, i))
≡ 0 ≤ i+1 ≤ n ∧ s+(i+1) = sum(0, i+1)    [Same as for Problem 7]

9. Assume ⊨ {p} S {q}; we want to show that p ⇒ w, where w ≡ wlp(S, q). The definition of wlp tells us that for any σ ⊨ ¬w, we have M(S, σ) ⊆ Σ and M(S, σ) ⊭ q. The contrapositive of this is that (M(S, σ) ⊄ Σ or M(S, σ) ⊨ q) implies σ ⊭ ¬w. Since σ ∈ Σ, σ ⊭ ¬w implies σ ⊨ w. Turning back to the triple, σ ⊨ {p} S {q} implies that σ ⊨ p implies (M(S, σ) ⊄ Σ or M(S, σ) ⊨ q). Applying the contrapositive tells us that σ ⊨ p implies σ ⊨ w, so σ ⊨ p → w. Since this holds for any σ ∈ Σ, we know p ⇒ w.

10. For deterministic S, wp(S, q₁ ∨ q₂) ≡ wp(S, q₁) ∨ wp(S, q₂). For nondeterministic S, we only have wp(S, q₁) ∨ wp(S, q₂) ⇒ wp(S, q₁ ∨ q₂), not the reverse implication. (See Example 10.)

11. For {p} x := y/b[i] {x > 0}, let p ≡ wp(x := y/b[i], x > 0) ≡ D(y/b[i]) ∧ (y/b[i] > 0) ≡ (0 ≤ i < size(b) ∧ b[i] ≠ 0) ∧ (y/b[i] > 0).

12. For {p₁} y := sqrt(b[j]) {z < y}, let
p₁ ≡ wp(y := sqrt(b[j]), z < y) ≡ D(sqrt(b[j])) ∧ wlp(y := sqrt(b[j]), z < y) ≡ 0 ≤ j < size(b) ∧ b[j] ≥ 0 ∧ z < sqrt(b[j]).
For {p₂} j := x/j {p₁}, let
p₂ ≡ wp(j := x/j, p₁) ≡ D(x/j) ∧ wlp(j := x/j, p₁) ≡ j ≠ 0 ∧ (0 ≤ j < size(b) ∧ b[j] ≥ 0 ∧ z < sqrt(b[j]))[x/j / j] ≡ j ≠ 0 ∧ 0 ≤ x/j < size(b) ∧ b[x/j] ≥ 0 ∧ z < sqrt(b[x/j]).

13. (Properties of wp and wlp) The following properties are correct:
(a) and (b) are the basic definitions of wp and wlp.
(c) and (d) say that wp and wlp are preconditions.
(e) and (f) say that wp and wlp are weakest preconditions.
(g) and (h) also say that wp and wlp are weakest.
(j) and (k) are the contrapositives of (e) and (f).
However, (i) is incorrect: It claims that wlp(S, q) ∧ wlp(S, ¬q) is never satisfiable, but if M(S, σ) = {⊥}, then σ satisfies both wlp(S, q) and wlp(S, ¬q).