Temporal Logic and Fair Discrete Systems


Nir Piterman and Amir Pnueli

Nir Piterman, University of Leicester, Leicester, UK, e-mail: nir.piterman@le.ac.uk

Abstract
Temporal logic was used by philosophers to reason about the way the world changes over time. Its modern use in specification and verification of systems describes the evolution of states of a program/design, giving rise to descriptions of executions. Temporal logics can be classified by their view of the evolution of time as either linear or branching. In the linear-time view, time ranges over a linear total order and executions are sequences of states. When the system has multiple possible executions (due to nondeterminism or reading input) we view them as separate possible evolutions, and a system has a set of possible behaviors. In the branching-time view, a point in time may have multiple possible successors, and accordingly executions are tree-like structures. According to this view, a system has exactly one execution that takes the form of a tree.

We start this chapter by introducing fair discrete systems, the model on which we evaluate the truth and falsity of temporal logic formulas. Fair discrete systems describe the states of a system and their possible evolution. We then proceed with the linear-time view and introduce propositional Linear Temporal Logic (LTL). We explain the distinction between safety and liveness properties and introduce a hierarchy of liveness properties of increasing expressiveness. We study the expressive power of full LTL and cover extensions that increase its expressive power. We introduce algorithms for checking the satisfiability of LTL and model checking LTL. We turn to the branching-time framework and introduce Computation Tree Logic (CTL). As before, we discuss its expressive power, consider extensions, and cover satisfiability and model checking. We then dedicate some time to examples of formulas in both LTL and CTL and stress the differences between the two. We finalize with a formal comparison of LTL and CTL and, in view of this comparison, introduce CTL*, a hybrid of LTL and CTL that combines the linear and branching views into one logic.

Comment by Nir Piterman: Sadly, during the preparation of this manuscript, in November 2009, Amir passed away. While I tried to keep the chapter in the spirit of our plans, I cannot be sure that Amir would have agreed with the final outcome. All faults and inaccuracies are definitely my own. In addition, opinions or judgements passed in this chapter may not reflect Amir's view. I would like to take this opportunity to convey my deep gratitude to Amir; my teacher, supervisor, mentor, and friend.

1 Introduction

For many years, philosophers, theologians, and linguists have been using logic in order to reason about the world, freedom of choice, and the meaning of spoken language. In the middle of the 20th century, the foundations of this research led to two complementary breakthroughs. In the late 1950s Arthur Prior introduced what we call today tense logic; essentially, introducing the operator Fp, meaning "it will be the case that p", and its dual Gp, meaning "it will always be the case that p" [34]. This work was done in the context of modal logic, where the operators G and F meant "it must be" and "it is possible". Roughly at the same time, working on modal logic, Saul Kripke introduced a semantics, which we call today Kripke semantics, to interpret modal logic [19]. His suggestion was to interpret modal logic over a set of possible worlds and an accessibility relation between worlds. The modal operators are then interpreted over the accessibility relation of the worlds.

About 20 years later, these ideas penetrated the verification community. In a landmark paper, Pnueli introduced Prior's tense operators to verification and suggested that they can be used for checking the correctness of programs [32]. In particular, Pnueli's ideas considered ongoing and non-terminating computations of programs. The existing paradigm of verification at the time was that of pre- and postconditions, matching programs that get an input and produce an output upon termination. Instead, what were later termed reactive systems [16] continuously interact with their environment, receive inputs, send outputs, and, importantly, do not terminate.
In order to describe the behaviors of such programs one needs to describe, for example, the causality of interactions and their order. Temporal logic proved a convenient way to do that. It can be used to capture the specification of programs in a formal notation that can then be checked on programs. Linear Temporal Logic (abbreviated LTL), the logic introduced by Pnueli, is linear in the sense that at every moment in time there is only one possible future. In the linear view an execution is a sequence of states. Multiple possible executions (e.g., due to different inputs) are treated separately as independent sequences.

Logicians also had another option: to view time as branching, so that at every moment in time there may be multiple possible futures. This view, that of branching time, was introduced to verification in [3]. In the branching-time approach, the program itself provides a branching structure, where every possible snapshot of the program may have multiple options to continue. Then, the program is one logical structure that encompasses all possible behaviors. This was shortly followed by a second revolution. Clarke and Emerson [10] and Queille and Sifakis [35] introduced a more elaborate branching-time logic, Computation Tree Logic (abbreviated CTL), and the

idea of model checking: model checking is the algorithmic analysis for determining whether a program satisfies a given temporal logic specification.

The linear-branching dichotomy, originating in a religious debate regarding freedom of choice and determinacy of fate, led to an ideological debate regarding the usage of temporal logic in verification (see [41]). Various studies compared the linear-time and branching-time approaches, leading also to the invention of CTL*, a logic that combines linear temporal logic and computation tree logic [12]. Over the years, extensive knowledge on all aspects of temporal logic and its usage in model checking and verification has been gathered: what properties can and cannot be expressed in various logics, how to extend the expressive power of logics, and algorithmic aspects of different questions. In particular, effective model-checking algorithms scaled to huge systems and established model checking as a successful technique in both industry and academia.

In this chapter, we cover the basics of temporal logic, the specification language used for model checking. We introduce Kripke semantics and a variant of it that we are going to use as a mathematical model for representing programs. We present fair discrete systems and explain how to use them for representing programs. In Sect. 3 we present LTL, the basic linear temporal logic. We give the basic definition of the logic, discuss several possible extensions of it, and finally define and show how to solve model checking for LTL formulas. In Sect. 4 we present CTL, the basic branching temporal logic, including its definition, extensions to it, and model checking. Then, in Sect. 5 we cover examples of the usage of both logics. Finally, in Sect. 6, we compare the expressive power of the linear-time and branching-time variants and introduce the logic CTL*, a powerful branching-time logic that combines both LTL and CTL.
2 Fair Discrete Systems

In order to be able to formally reason about systems and programs, we have to agree on a formal mathematical model on which reasoning can be applied. Transition systems are, by now, a standard tool in computer science for representing programs. Transition systems are essentially labeled graphs, where labels appear on states, edges, or both. There are many different variants of transition systems supplying many different flavors and needs. We define Kripke structures, one of the most popular versions of transition systems used in modeling. Then, we present a symbolic version of transition systems, which we call a fair discrete system, or FDS for short, where states arise as interpretations of variables and transitions correspond to changes in the variables' values. As their name suggests, variables range over discrete domains, such as the integers, Booleans, or other finite-range domains. In Chaps. 27 and 28 continuous variables will be considered.

One of the most important features of programs and systems is concurrency, or the ability to communicate with other programs. Here, communication is by reading and writing the values of shared variables. In order to reason about multiple communicating programs (and also about temporal logic) our systems include weak and strong fairness. Fairness requirements restrict our attention to certain paths in the system that have some good qualities when considering interaction and communication.

2.1 Kripke Structures

We give a short exposition of Kripke structures [19], one of the most popular formalisms for representing transition systems in the context of model checking. These are transition systems where states are elements of an abstract set, and initial states and transitions are defined over this explicit set. In addition, states are labeled with propositions (or observations about the state).

Definition 1 (Kripke Structure). A Kripke structure is K = ⟨AP, S, S₀, R, L⟩, where AP is a finite set of atomic propositions, S is a set of states, S₀ ⊆ S is a set of initial states, R ⊆ S × S is a transition relation, and L : S → 2^AP is a labeling function.

Kripke structures are state-labeled transition systems. We specify in advance a set of labels (propositions) that are the basic facts known about the world. The labeling function associates every state with the set of atomic propositions that are true in it. The set of initial states and the transition relation allow us to add a notion of executions to Kripke structures.

Definition 2 (Paths and Runs). A path of a Kripke structure K starting in state s ∈ S is a maximal sequence of states σ = s_0, s_1, ... such that s_0 = s and for every i ≥ 0 we have (s_i, s_{i+1}) ∈ R. A sequence σ is maximal if either σ is infinite or σ = s_0, ..., s_k and s_k has no successor, i.e., for all s′ ∈ S we have (s_k, s′) ∉ R. A path starting in a state s_0 ∈ S₀ is called a run. We denote by Runs(K) the set of runs of K.

The direct representation of states and transitions makes Kripke structures convenient for certain needs.
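Definitions 1 and 2 can be made concrete with a small explicit-state sketch in Python. The class layout, helper names, and the two-state example below are illustrative, not taken from the chapter:

```python
# A minimal explicit-state Kripke structure (Definition 1); a sketch
# with illustrative names and a toy example structure.
from dataclasses import dataclass

@dataclass
class Kripke:
    ap: set       # atomic propositions AP
    states: set   # S
    init: set     # S0, a subset of S
    trans: set    # R, a set of (s, t) pairs
    label: dict   # L : S -> subset of AP

    def successors(self, s):
        return {t for (u, t) in self.trans if u == s}

    def is_run(self, path):
        """Finite case of Definition 2: the sequence starts in S0,
        respects R, and is maximal (its last state has no successor)."""
        if not path or path[0] not in self.init:
            return False
        if any((path[i], path[i + 1]) not in self.trans
               for i in range(len(path) - 1)):
            return False
        return not self.successors(path[-1])

# A two-state toy structure: s0 -> s1, and s1 has no successor.
K = Kripke(ap={'p'}, states={'s0', 's1'}, init={'s0'},
           trans={('s0', 's1')}, label={'s0': set(), 's1': {'p'}})
```

Note that is_run only covers finite maximal paths; an infinite run would need a different representation, for example a lasso consisting of a finite prefix followed by a repeated cycle.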
However, in model checking, higher-level representations of transition systems enable a more direct relation to programs and enable us to reason about systems using sets of states rather than individual states. We now introduce one such modeling framework, called fair discrete systems. The idea is that states are obtained by interpreting the values of the variables of the system. In addition, in the context of model checking it is sometimes necessary to ignore some uninteresting runs of the system. For that we introduce the notion of fairness, which is usually not considered with Kripke structures.

2.2 Definition of Fair Discrete System

Definition 3 (Fair Discrete System). An FDS is D = ⟨V, θ, ρ, J, C⟩, where V is a finite set of typed variables, θ is an initial condition, ρ is a transition relation, J is a set of justice requirements, and C is a set of compassion requirements. Further details about these components are given below.

V = {v₁, ..., vₙ}: A finite set of typed variables. Variables range over discrete domains, such as the Booleans or the integers. Although variables ranging over finite domains can be represented by multiple Boolean variables, it is sometimes convenient to use variables with a larger range. Program counters, ranging over the domain of program locations, are a prominent example. A state s is an interpretation of V, i.e., if D_v is the domain of v, then s is an element of the product of the domains D_{v₁} × ... × D_{vₙ}. We denote the set of possible states by Σ_V. Given a subset V₁ of V, we denote by s⇓V₁ the projection of s on the variables in V₁, that is, the assignment to the variables in V₁ that agrees with s. When all variables range over finite domains, the system has a finite number of states and is called a finite-state system. Otherwise, it is an infinite-state system.

We assume some underlying first-order language over V that includes (i) expressions constructed from the variables in V, (ii) atomic formulas that are either Boolean variables or applications of predicates to expressions, and (iii) assertions that are first-order formulas constructed from atomic formulas using Boolean connectives or quantification of variables. Assertions, also sometimes called state formulas, characterize states through restriction of the possible variable values in them. For example, for a variable x ranging over the integers, x + 1 > 5 is an atomic formula, and for a Boolean variable b, the assertion ¬b ∧ x + 1 > 5 characterizes the set of states where b is false and x is at least 5.

θ: The initial condition. This is an assertion over V characterizing all the initial states of the FDS. A state is called initial if it satisfies θ.

ρ: A transition relation. This is an assertion ρ(V, V′), where V′ is a primed copy of the variables in V. The transition relation ρ relates a state s ∈ Σ_V to its D-successors s′ ∈ Σ_V, i.e., (s, s′) ⊨ ρ, where s supplies the interpretation of the variables in V and s′ supplies the interpretation of the variables in V′. For example, the assignment x := x + 1 is written as x′ = x + 1, stating that the next value of x is equal to the current value of x plus one. Given a set of variables X ⊆ V, we denote by keep(X) the formula ⋀_{x ∈ X} x′ = x, which preserves the values of all variables in X.

J = {J₁, ..., J_m}: A set of justice requirements (weak fairness). Each requirement J ∈ J is an assertion over V that is intended to hold infinitely many times in every computation.

C = {(P₁, Q₁), ..., (Pₙ, Qₙ)}: A set of compassion requirements (strong fairness). Each requirement (P, Q) ∈ C consists of a pair of assertions, such that if a computation contains infinitely many P-states, it should also contain infinitely many Q-states.

Definition 4 (Paths, Runs, and Computations). A path of an FDS D starting in state s is a maximal sequence of states σ = s_0, s_1, ... such that s_0 = s and for every i ≥ 0 we have (s_i, s′_{i+1}) ⊨ ρ, where s′_{i+1} is a primed copy of s_{i+1}, i.e., it is an assignment to V′ such that for every v ∈ V we have s′_{i+1}(v′) = s_{i+1}(v). A sequence σ is maximal if either σ is infinite or σ = s_0, ..., s_k and s_k has no D-successor, i.e., for all s_{k+1} ∈ Σ_V we have (s_k, s′_{k+1}) ⊭ ρ. We denote by |σ| the length of σ, that is, |σ| = ω if σ is infinite and |σ| = k + 1 if σ = s_0, ..., s_k is finite. Given a path σ = s_0, s_1, ..., we denote by σ⇓V₁ the path s_0⇓V₁, s_1⇓V₁, ...

A path σ is fair if it is infinite and satisfies the following additional requirements: (i) justice (or weak fairness), i.e., for each J ∈ J, σ contains infinitely many J-positions, i.e., positions j ≥ 0 such that s_j ⊨ J, and (ii) compassion (or strong fairness), i.e., for each (P, Q) ∈ C, if σ contains infinitely many P-positions, it must also contain infinitely many Q-positions.

A path starting in a state s such that s ⊨ θ is called a run. A fair path that is a run is called a computation. We denote by Runs(D) the set of runs of D and by Comp(D) the set of computations of D. A state s is said to be reachable if it appears in some run. It is reachable from t if it appears on some path starting in t. A state s is viable if it appears in some computation. An FDS is called viable if it has some computation. An FDS is said to be fairness-free if J = C = ∅. It is called a just discrete system (JDS) if C = ∅. When J = ∅ or C = ∅ we simply omit them from the description of D.

Note that for most reactive systems it is sufficient to use a JDS (i.e., compassion-free) model. Compassion is only needed in cases in which the system uses built-in synchronization constructs such as semaphores or synchronous communication.

A fairness-free FDS can be converted to a Kripke structure. Given an FDS D = ⟨V, θ, ρ⟩ and a set of basic assertions over its variables {a₁, ..., a_k}, the Kripke structure obtained from it is K_D = ⟨AP, S, S₀, R, L⟩, where the components of K_D are as follows. The set of states S is the set of possible interpretations of the variables in V, namely Σ_V. The set of initial states S₀ is the set of states s such that s ⊨ θ, i.e., S₀ = {s | s ⊨ θ}. The transition relation R contains the pairs (s, t) such that (s, t′) ⊨ ρ.
Notice that t′ is a primed copy of t and is interpreted over the primed variables V′. Finally, the set of propositions is AP = {a₁, ..., a_k} and aᵢ ∈ L(s) iff s ⊨ aᵢ. We note that the number of states of the Kripke structure may be exponentially larger than the description of the FDS. We state without proof that this translation maintains the notion of a run. We note that one can add fairness to Kripke structures and define a notion of computation similar to that of an FDS.

Lemma 1. Given an FDS D, the sets of runs of D and K_D are equivalent, namely Runs(D) = Runs(K_D).

Sometimes it will be convenient to construct larger FDSs from smaller ones. Consider two FDSs D₁ = ⟨V₁, θ₁, ρ₁, J₁, C₁⟩ and D₂ = ⟨V₂, θ₂, ρ₂, J₂, C₂⟩, where V₁ and V₂ are not necessarily disjoint. The synchronous parallel composition D₁ ‖ D₂ of D₁ and D₂ is the FDS defined as follows.

D₁ ‖ D₂ = ⟨V₁ ∪ V₂, θ₁ ∧ θ₂, ρ₁ ∧ ρ₂, J₁ ∪ J₂, C₁ ∪ C₂⟩

A transition of the synchronous parallel composition is a joint transition of the two systems. A computation of the synchronous parallel composition, when restricted to the variables of one of the systems, is a computation of that system. Formally, we have the following.

Lemma 2. A sequence σ ∈ (Σ_{V₁ ∪ V₂})^ω is a computation of D₁ ‖ D₂ iff σ⇓V₁ is a computation of D₁ and σ⇓V₂ is a computation of D₂.

The proof is omitted.

The asynchronous parallel composition D₁ ||| D₂ of D₁ and D₂ is the FDS defined as follows.

D₁ ||| D₂ = ⟨V₁ ∪ V₂, θ₁ ∧ θ₂, ρ, J₁ ∪ J₂, C₁ ∪ C₂⟩, where ρ = (ρ₁ ∧ keep(V₂ \ V₁)) ∨ (ρ₂ ∧ keep(V₁ \ V₂)).

A transition of the asynchronous parallel composition is a transition of one of the systems, preserving unchanged the variables of the other. A computation of the asynchronous parallel composition, when restricted to the variables of one of the systems, is not necessarily a computation of that system. For example, if the sets of variables of the two systems intersect and system one modifies the variables of system two, the projection of the computation on the variables of system two could include changes not allowed by the transition relation of system two.

2.3 Representing Programs

We show how an FDS can represent programs. We do not formally define a programming language; however, the meaning of commands and constructs will be clear from the translation to FDSs. A more thorough discussion of the representation of programs is given in Chap. 3. FDSs are a simple variant of the state transition systems (STSs) defined in that chapter.

Var n : integer initially n = 10
l₀ : while (n > 0) {
l₁ :   n = n - 1;
l₂ : }
l₃ :

Fig. 1 A simple loop.

Consider for example the program in Fig. 1. It can be represented as an FDS with the variables π and n, where π is the program-location variable ranging over {l₀, ..., l₃} and n is an integer that starts at 10. Formally, D = ⟨{π, n}, θ, ρ, J, C⟩, where J = ∅, C = ∅, and θ and ρ are as follows.

θ : π = l₀ ∧ n = 10
ρ : (π = l₀ ∧ n > 0 ∧ π′ = l₁ ∧ n′ = n) ∨
    (π = l₀ ∧ n ≤ 0 ∧ π′ = l₃ ∧ n′ = n) ∨
    (π = l₁ ∧ π′ = l₂ ∧ n′ = n - 1) ∨
    (π = l₂ ∧ π′ = l₀ ∧ n′ = n) ∨
    (π′ = π ∧ n′ = n)

For software programs, we always assume that the transition relation ρ includes as a disjunct the option to stutter, that is, do nothing.
This allows modeling the environment of a single processor that devotes attention to one of many threads. Given a program-counter variable π, we denote by at_lᵢ the formula π = lᵢ. In the case of multiple program counters, we assume that their ranges are disjoint and identify the right variable by its range; e.g., one program counter ranges over the lᵢ and the other over the mᵢ, making π = mᵢ unambiguous. Similarly, at′_lᵢ is π′ = lᵢ.

The following sequence of states is a run of the simple loop in Fig. 1.

σ = ⟨π:l₀, n:10⟩, ⟨π:l₁, n:10⟩, ⟨π:l₂, n:9⟩, ⟨π:l₀, n:9⟩, ⟨π:l₁, n:9⟩, ⟨π:l₂, n:8⟩, ⟨π:l₀, n:8⟩, ..., ⟨π:l₀, n:1⟩, ⟨π:l₁, n:1⟩, ⟨π:l₂, n:0⟩, ⟨π:l₀, n:0⟩, ⟨π:l₃, n:0⟩, ...

Indeed, it starts in an initial state, where π = l₀ and n = 10, and every two adjacent states satisfy the transition relation. For example, the two states ⟨π:l₁, n:9⟩ and ⟨π:l₂, n:8⟩ satisfy the disjunct at_l₁ ∧ at′_l₂ ∧ n′ = n - 1, where π and n range over the first state and π′ and n′ range over the second state.

Consider the two processes in Fig. 2, depicting Peterson's mutual exclusion algorithm [31]. Consider the process on the left. It can be represented as an FDS with Boolean variables x, y, and t and a location variable π₁ ranging over {l₀, ..., l₇}. Formally, D₁ = ⟨{π₁, x, y, t}, θ₁, ρ₁, J, C⟩, where the components of D₁ are as follows.

θ₁ : at_l₀ ∧ x = 0
ρ₁ : (at_l₀ ∧ at′_l₁ ∧ keep(x, y, t)) ∨
     (at_l₁ ∧ at′_l₂ ∧ keep(x, y, t)) ∨
     (at_l₂ ∧ at′_l₃ ∧ x′ = 1 ∧ keep(y, t)) ∨
     (at_l₃ ∧ at′_l₄ ∧ t′ = 1 ∧ keep(x, y)) ∨
     (at_l₄ ∧ (t = 0 ∨ y = 0) ∧ at′_l₅ ∧ keep(x, y, t)) ∨
     (at_l₅ ∧ at′_l₆ ∧ keep(x, y, t)) ∨
     (at_l₆ ∧ x′ = 0 ∧ at′_l₇ ∧ keep(y, t)) ∨
     (at_l₇ ∧ at′_l₀ ∧ keep(x, y, t)) ∨
     keep(π₁, x, y, t)

Notice that the disjunct keep(π₁, x, y, t) allows this process to stutter, but it also includes the transition from l₄ to l₄ in case t ≠ 0 and y ≠ 0. Dually, the process on the right can be represented as an FDS with Boolean variables {x, y, t} and a location variable π₂ ranging over {m₀, ..., m₇}. Formally, D₂ = ⟨{π₂, x, y, t}, θ₂, ρ₂, J, C⟩, where the components of D₂ are as follows.
θ₂ : at_m₀ ∧ y = 0
ρ₂ : (at_m₀ ∧ at′_m₁ ∧ keep(x, y, t)) ∨
     (at_m₁ ∧ at′_m₂ ∧ keep(x, y, t)) ∨
     (at_m₂ ∧ at′_m₃ ∧ y′ = 1 ∧ keep(x, t)) ∨
     (at_m₃ ∧ at′_m₄ ∧ t′ = 0 ∧ keep(x, y)) ∨
     (at_m₄ ∧ (t = 1 ∨ x = 0) ∧ at′_m₅ ∧ keep(x, y, t)) ∨
     (at_m₅ ∧ at′_m₆ ∧ keep(x, y, t)) ∨
     (at_m₆ ∧ y′ = 0 ∧ at′_m₇ ∧ keep(x, t)) ∨
     (at_m₇ ∧ at′_m₀ ∧ keep(x, y, t)) ∨
     keep(π₂, x, y, t)

In addition, we add an FDS T that sets t to 0 initially. Let T = ⟨{t}, t = 0, t′ = t, ∅, ∅⟩. In order to obtain an FDS that represents the entire behavior of the two processes together, we construct the asynchronous parallel composition of D₁, D₂, and T. That is, the FDS representing Peterson's mutual exclusion protocol is D = D₁ ||| D₂ ||| T.
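Since all variables of D range over finite domains, the composed system is finite-state, and a simple explicit-state search over the interleaving of the two processes can confirm mutual exclusion, i.e., that the processes are never simultaneously at l₅ and m₅. The sketch below hand-codes the non-stuttering transitions of D₁ ||| D₂ (stuttering adds no reachable states); encoding locations l₀..l₇ and m₀..m₇ as the integers 0..7 is an assumption of this sketch:

```python
# Explicit-state reachability for Peterson's algorithm (Fig. 2); a sketch.
# A state is (pc1, pc2, x, y, t); locations l0..l7 / m0..m7 are 0..7.
from collections import deque

def succs(state):
    pc1, pc2, x, y, t = state
    out = []
    # Process 1 (left), following rho_1 without the stuttering disjunct.
    if pc1 == 0: out.append((1, pc2, x, y, t))
    elif pc1 == 1: out.append((2, pc2, x, y, t))
    elif pc1 == 2: out.append((3, pc2, 1, y, t))              # x := 1
    elif pc1 == 3: out.append((4, pc2, x, y, 1))              # t := 1
    elif pc1 == 4 and (t == 0 or y == 0): out.append((5, pc2, x, y, t))
    elif pc1 == 5: out.append((6, pc2, x, y, t))
    elif pc1 == 6: out.append((7, pc2, 0, y, t))              # x := 0
    elif pc1 == 7: out.append((0, pc2, x, y, t))
    # Process 2 (right), the dual transitions.
    if pc2 == 0: out.append((pc1, 1, x, y, t))
    elif pc2 == 1: out.append((pc1, 2, x, y, t))
    elif pc2 == 2: out.append((pc1, 3, x, 1, t))              # y := 1
    elif pc2 == 3: out.append((pc1, 4, x, y, 0))              # t := 0
    elif pc2 == 4 and (t == 1 or x == 0): out.append((pc1, 5, x, y, t))
    elif pc2 == 5: out.append((pc1, 6, x, y, t))
    elif pc2 == 6: out.append((pc1, 7, x, 0, t))              # y := 0
    elif pc2 == 7: out.append((pc1, 0, x, y, t))
    return out

def reachable(init=(0, 0, 0, 0, 0)):
    # Breadth-first exploration of the reachable state space.
    seen, work = {init}, deque([init])
    while work:
        for nxt in succs(work.popleft()):
            if nxt not in seen:
                seen.add(nxt)
                work.append(nxt)
    return seen

states = reachable()
mutex_ok = all(not (s[0] == 5 and s[1] == 5) for s in states)
```

Each process can reach its critical section, yet no reachable state places both at location 5. Note that this checks only a safety property; the fairness requirements discussed next matter for liveness (e.g., eventual entry), not for mutual exclusion itself.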

Var t, x, y : Boolean initially t = 0, x = 0, y = 0

l₀ : while (true) {                  m₀ : while (true) {
l₁ :   Non-Critical;                 m₁ :   Non-Critical;
l₂ :   x = 1;                        m₂ :   y = 1;
l₃ :   t = 1;                        m₃ :   t = 0;
l₄ :   await (t == 0 ∨ y == 0);      m₄ :   await (t == 1 ∨ x == 0);
l₅ :   Critical;                     m₅ :   Critical;
l₆ :   x = 0;                        m₆ :   y = 0;
l₇ : }                               m₇ : }

Fig. 2 Peterson's mutual exclusion algorithm.

The following sequence of states is a run of the processes in Fig. 2. Between every two states we write the location from which a transition is applied.

σ : ⟨π₁:l₀, x:0, π₂:m₀, y:0, t:0⟩ --m₀--> ⟨π₁:l₀, x:0, π₂:m₁, y:0, t:0⟩ --m₁--> ⟨π₁:l₀, x:0, π₂:m₂, y:0, t:0⟩ --m₂--> ⟨π₁:l₀, x:0, π₂:m₃, y:1, t:0⟩ --l₀--> ⟨π₁:l₁, x:0, π₂:m₃, y:1, t:0⟩ --l₁--> ⟨π₁:l₂, x:0, π₂:m₃, y:1, t:0⟩ --l₂--> ⟨π₁:l₃, x:1, π₂:m₃, y:1, t:0⟩ --l₃--> ⟨π₁:l₄, x:1, π₂:m₃, y:1, t:1⟩ --l₄--> ⟨π₁:l₄, x:1, π₂:m₃, y:1, t:1⟩ --l₄--> ... (the stuttering step at l₄ repeats forever)

However, this run seems to violate our basic intuition regarding the scheduling of different threads. Indeed, from some point onwards the processor gives attention only to the first thread. In order to remove such behaviors we add the following two justice requirements.

J₁ : { ¬at_lᵢ, ¬at_l₄ ∨ (t = 1 ∧ y = 1) | i ∈ {0, 2, 3, 5, 6, 7} }
J₂ : { ¬at_mᵢ, ¬at_m₄ ∨ (t = 0 ∧ x = 1) | i ∈ {0, 2, 3, 5, 6, 7} }

The justice requirement for their composition D is J = J₁ ∪ J₂. Notice that it is fine for the processes to remain forever in their non-critical sections, which is why ¬at_l₁ and ¬at_m₁ are not required. Now, the run above violates the justice requirement. Indeed, the requirement ¬at_m₃ does not hold in the last state, which repeats forever. Thus, in this run there are only finitely many positions satisfying ¬at_m₃. In order to constitute a computation the following suffix, for example, could be added.

⟨π₁:l₄, x:1, π₂:m₃, y:1, t:1⟩ --m₃--> ⟨π₁:l₄, x:1, π₂:m₄, y:1, t:0⟩ --l₄--> ⟨π₁:l₅, x:1, π₂:m₄, y:1, t:0⟩ --l₅--> ⟨π₁:l₆, x:1, π₂:m₄, y:1, t:0⟩ --l₆--> ⟨π₁:l₇, x:0, π₂:m₄, y:1, t:0⟩ --m₄--> ⟨π₁:l₇, x:0, π₂:m₅, y:1, t:0⟩ ...

We now turn to an example eliciting the need for compassion. Consider the two processes in Fig. 3. The process on the left can be represented as an FDS with a Boolean variable x and a location variable π₁ ranging over {l₀, ..., l₅}. Formally,

D₁ = ⟨{π₁, x}, θ₁, ρ₁, J, C⟩, where the components of D₁ are as follows.

θ₁ : at_l₀ ∧ x = 1
ρ₁ : (at_l₀ ∧ at′_l₁ ∧ keep(x)) ∨
     (at_l₁ ∧ at′_l₂ ∧ keep(x)) ∨
     (at_l₂ ∧ at′_l₃ ∧ x = 1 ∧ x′ = 0) ∨
     (at_l₃ ∧ at′_l₄ ∧ keep(x)) ∨
     (at_l₄ ∧ at′_l₅ ∧ x′ = 1) ∨
     (at_l₅ ∧ at′_l₀ ∧ keep(x)) ∨
     keep(π₁, x)

The system D₂ is obtained from D₁ by replacing every reference to l by a reference to m. It is clear that we have to include also requirements from a scheduler. This time, location l₂ is problematic. Suppose that we try, as before, the following sets.

J₁ : { ¬at_lᵢ, ¬at_l₂ ∨ x = 0 | i ∈ {0, 3, 4, 5} }
J₂ : { ¬at_mᵢ, ¬at_m₂ ∨ x = 0 | i ∈ {0, 3, 4, 5} }

However, it is simple to see that this is not strong enough. Consider the following computation.

σ : ⟨π₁:l₀, π₂:m₀, x:1⟩ --m₀--> ⟨π₁:l₀, π₂:m₁, x:1⟩ --m₁--> ⟨π₁:l₀, π₂:m₂, x:1⟩ --m₂--> ⟨π₁:l₀, π₂:m₃, x:0⟩ --l₀--> ⟨π₁:l₁, π₂:m₃, x:0⟩ --l₁--> ⟨π₁:l₂, π₂:m₃, x:0⟩ --l₂--> ⟨π₁:l₂, π₂:m₃, x:0⟩ --m₃--> ⟨π₁:l₂, π₂:m₄, x:0⟩ --m₄--> ⟨π₁:l₂, π₂:m₅, x:1⟩ --m₅--> ⟨π₁:l₂, π₂:m₀, x:1⟩ --m₀--> ⟨π₁:l₂, π₂:m₁, x:1⟩ --m₁--> ⟨π₁:l₂, π₂:m₂, x:1⟩ --m₂--> ⟨π₁:l₂, π₂:m₃, x:0⟩ --m₃--> ... (the cycle of D₂ repeats forever)

This computation keeps D₁ in location l₂ forever. It is indeed a computation, as there are infinitely many positions where the transition from l₂ is not enabled and ¬at_l₂ ∨ x = 0 holds. Clearly, this does not seem acceptable. One could ask why not replace the justice requirement related to location l₂ by ¬at_l₂. However, this is too strong. If we replace D₂ by an FDS that sets x to 0 and never resets it to 1, clearly we cannot expect a computation that fulfils the justice requirement ¬at_l₂. In order to solve this problem we replace the justice requirements relating to location l₂ by the following compassion requirements.

C₁ : { ⟨at_l₂ ∧ x = 1, ¬at_l₂⟩ }
C₂ : { ⟨at_m₂ ∧ x = 1, ¬at_m₂⟩ }

Namely, if there are infinitely many opportunities where D₁ is at location l₂ and x is free, we expect D₁ to get an option to move when x is free; and similarly for D₂.

2.4 Algorithms

We now consider the algorithms that establish whether a given FDS has some computation.
For infinite-state systems the question is in general undecidable, as an infinite-state system can easily represent the halting problem or its dual. For various kinds of

infinite-state systems this question is discussed in Chaps. 16, 27, and 28. Here, we concentrate on the case of finite-state systems, where all variables range over finite domains. For such systems, simple algorithms establish whether an FDS is viable.

Var x: Boolean initially x = 1

l0: while (true) {        m0: while (true) {
l1:   Non-Critical;       m1:   Non-Critical;
l2:   request(x);         m2:   request(x);
l3:   Critical;           m3:   Critical;
l4:   release(x);         m4:   release(x);
l5: }                     m5: }

Fig. 3 Mutual exclusion using semaphores.

We start with the simple case of a fairness-free FDS. We then show how to solve the same problem for a JDS and for a general FDS. Here, we concentrate on symbolic algorithms that handle sets of states. We assume that we have some efficient way to represent, manipulate, and compare assertions. One such system is discussed in Chap. 4. Enumerative algorithms, which consider individual states, are discussed in Chap. 11A.

We fix an FDS D = ⟨V, θ, ρ, 𝒥, 𝒞⟩. In the algorithms we use the operators post() and pre(). The operator post(S, ρ) returns the set {t | ∃s ∈ S. ⟨s, t⟩ ⊨ ρ} of successors of the states in S according to ρ. Similarly, the operator pre(S, ρ) returns the set {s | ∃t ∈ S. ⟨s, t⟩ ⊨ ρ} of predecessors of the states in S. For an assertion τ characterizing the set of states S, we use the following operators.

prime(τ) = ∃V. (τ ∧ (⋀_{v∈V} v' = v))
unprime(τ) = ∃V'. (τ ∧ (⋀_{v∈V} v = v'))
post(τ, ρ) = unprime(∃V. (τ ∧ ρ))
pre(τ, ρ) = ∃V'. (prime(τ) ∧ ρ)

For example, given an assertion τ, the operator post(τ, ρ) produces an assertion describing the successors of the states satisfying τ. Indeed, the assertion τ ∧ ρ describes the pairs of states satisfying the transition relation ρ whose first (unprimed) component satisfies τ. Then, ∃V. (τ ∧ ρ) quantifies out the first state, leaving us with a description, over the primed variables V', of the set of successors of states in τ. Finally, unprime(∃V. (τ ∧ ρ)) converts it to an assertion over the variables in V.
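The same operators can be read over explicit state sets, with a set of pairs standing in for the assertion-level transition relation. This encoding is purely illustrative (the chapter works with symbolic assertions; see Chap. 4), but it makes the definitions concrete:

```python
def post(S, rho):
    """Successors of the states in S under transition relation rho,
    where rho is given extensionally as a set of (source, target) pairs."""
    return {t for (s, t) in rho if s in S}

def pre(S, rho):
    """Predecessors of the states in S under rho."""
    return {s for (s, t) in rho if t in S}

# A three-state toy relation: a -> b -> c -> b
rho = {('a', 'b'), ('b', 'c'), ('c', 'b')}
print(post({'a'}, rho))   # {'b'}
print(pre({'b'}, rho))    # {'a', 'c'}
```

The symbolic versions do exactly this, but on assertions: conjunction with ρ plays the role of the set comprehension, and quantifying out one copy of the variables plays the role of projecting to the other component of the pair.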
The operator pre(τ, ρ) first converts the assertion τ to an assertion over V' using the operator prime(τ); then the conjunction with ρ creates a description of pairs that satisfy the transition such that the second (primed) state satisfies τ. Finally, quantifying out the primed variables creates an assertion describing the predecessors of τ. We are now ready to present the first algorithm. Alg. 1 computes an assertion characterizing the reachable states. All variables range over assertions. As the system has a finite number of states, it is clear that this algorithm terminates. Indeed, each time the loop body is run, the assertion new characterizes more and more states. As the number of states is finite, at some point old is equivalent to new. It is also simple to see that when the loop terminates, the assertion new characterizes the set of reachable states. Indeed, initially, it starts with all initial states

and gradually adds states that are reachable with an increasing number of steps. We do not prove this formally.

Algorithm 1 Reachable states
1: new := θ;
2: old := false;
3: while (new ≢ old)
4:   old := new;
5:   new := new ∨ post(new, ρ)
6: end while

To simplify future algorithms, we introduce the operators reach(τ, ρ) and backreach(τ, ρ) that compute the set of states reachable from τ using the transition relation ρ and the set of states that can reach τ using the transition relation ρ, respectively. The operator reach(τ, ρ) is the result of running the reachability algorithm with new initialized to τ. The operator backreach(τ, ρ) is the result of running the algorithm obtained from the reachability algorithm by replacing the usage of post() by pre() and initializing new to τ.

The algorithm computes the fixpoint of the operator in the loop body. We introduce a shorthand for this type of while loop. The loop header fix(new := τ) initializes the variable new to τ, initializes the variable old to false, updates old to new whenever it starts the loop body, and terminates when old and new represent the same assertion. We denote by new_0 the initial value of the loop variable and by new_i its value at the end of the ith iteration. Generally, as for some j ≥ 0 we have new_{j+1} = new_j, we consider the value of new_i for i ≥ j to be new_j. Let new_fix denote the value of the variable new when exiting the loop.

We now turn our attention to the question of viability, starting with the case of a fairness-free FDS. For a finite-state FDS, viability essentially reduces to finding the set of reachable states from which infinite paths can start.

Algorithm 2 Viability (no fairness)
1: reach := reach(θ, ρ);
2: fix (new := reach)
3:   ρf := ρ ∧ new ∧ prime(new);
4:   new := new ∧ pre(new, ρf);
13: end fix

Again, like the reachability algorithm, this algorithm clearly terminates. The set of states represented by new is non-increasing when recomputed by the loop body.
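A minimal explicit-state sketch of Alg. 1 and Alg. 2 can be written with Python sets in place of assertions (the symbolic versions manipulate formulas instead; this encoding is an illustration only):

```python
def post(S, rho):
    return {t for (s, t) in rho if s in S}

def pre(S, rho):
    return {s for (s, t) in rho if t in S}

def reach(init, rho):
    """Alg. 1: least fixpoint of adding successors to the initial states."""
    new, old = set(init), None
    while new != old:
        old = set(new)
        new |= post(new, rho)
    return new

def viable(init, rho):
    """Alg. 2: shrink the reachable states to those from which an
    infinite path starts (greatest fixpoint under the restricted relation)."""
    new, old = reach(init, rho), None
    while new != old:
        old = set(new)
        # restrict the relation to the current candidate set (line 3)
        rho_f = {(s, t) for (s, t) in rho if s in new and t in new}
        new &= pre(new, rho_f)          # line 4: keep states with a successor
    return new

# a -> b -> c with a self-loop on c; d is unreachable
rho = {('a', 'b'), ('b', 'c'), ('c', 'c'), ('d', 'a')}
print(reach({'a'}, rho))          # {'a', 'b', 'c'}
print(viable({'a'}, rho))         # {'a', 'b', 'c'}: all lead into the c-loop
print(viable({'a'}, {('a', 'b')}))  # set(): b is a dead end
```

In the last call, b has no successor and is removed first; a then loses its only successor and is removed in the next iteration, so the fixpoint is empty.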
As the system is finite-state, at some point this set either becomes empty or does not change, leading to termination.

Lemma 3. For a fairness-free FDS D, Alg. 2 computes the set of viable states.

Proof. We show that for every i and every s such that s ⊨ new_i there is a path of length at least i + 1 starting from s. For i = 0 this clearly holds, as from every state

there is a path of length 1. Suppose that for s ⊨ new_i there is a path of length at least i + 1 starting at s. Then clearly, new_{i+1} = new_i ∧ pre(new_i, ρ) includes the set of states from which there is a path of length at least i + 2. It follows that for every s ⊨ new_fix and every i ≥ 0 there is a path of length i + 1 starting at s. Arrange all the paths starting in s in the form of a tree such that a path of length i + 1 is a descendant of a path of length i. The tree has finite branching degree and an infinite number of nodes. It follows from König's lemma that the tree contains an infinite path. Hence, from s there is an infinite path.

Let inf denote the set of reachable states s that have an infinite path starting from them. Then, inf → new_fix. Indeed, for every i ≥ 0 we have inf → new_i. Clearly, inf → new_0. Suppose that inf → new_i. For every state s such that s ⊨ inf there is a successor t such that t ⊨ inf; indeed, this state t is the first state after s on the infinite path starting from s. Then, for every s ⊨ inf we have s ⊨ new_{i+1}. Finally, as new starts from the set of reachable states, if new_fix is not empty the FDS has some infinite run.

Clearly, if a system contains the stuttering clause, i.e., every state is allowed to stutter, Alg. 2 returns all reachable states. For such systems, we could replace the transition relation ρ by ρ ∧ ⋁_{v∈V} v ≠ v', which removes stuttering steps from ρ.

We now turn to consider JDSs. For such systems, it is not enough to just find infinite paths in the system. We have to ensure, in addition, that we can find an infinite path that visits each J ∈ 𝒥 infinitely often. For that, we extend Alg. 2 by adding lines 5–8.

Algorithm 3 Viability (justice)
1: reach := reach(θ, ρ);
2: fix (new := reach)
3:   ρf := ρ ∧ new ∧ prime(new);
4:   new := new ∧ pre(new, ρf);
5:   for all (J ∈ 𝒥)
6:     reachJ := backreach(new ∧ J, ρf);
7:     new := new ∧ reachJ;
8:   end for
13: end fix

Lemma 4. For a JDS D, Alg.
3 computes the set of viable states.

Proof. Clearly, every state s such that s ⊨ new_fix is reachable. Furthermore, for every s such that s ⊨ new_fix and every J ∈ 𝒥, there is some state t such that t ⊨ new_fix, t ⊨ J, and t has a successor in new_fix. Using this observation it is easy to construct paths of increasing length that visit all J ∈ 𝒥. As before, we organize these paths in the form of a tree. An infinite path in this tree visits all J ∈ 𝒥 infinitely often.

In the other direction, we show that every viable state remains in new_fix. Similarly to the previous proof, a computation that starts from a state s is used to show that s

can reach all J ∈ 𝒥. The suffix of this path is used to show that it is possible to continue from s to some successor in the fixpoint.

Finally, we consider a general FDS. We extend Alg. 3 by adding lines 9–12. These lines ensure, in addition, that every compassion requirement is satisfied. This algorithm is due to [18].

Algorithm 4 Viability (compassion)
1: reach := reach(θ, ρ);
2: fix (new := reach)
3:   ρf := ρ ∧ new ∧ prime(new);
4:   new := new ∧ pre(new, ρf);
5:   for all (J ∈ 𝒥)
6:     reachJ := backreach(new ∧ J, ρf);
7:     new := new ∧ reachJ;
8:   end for
9:   for all (⟨P, Q⟩ ∈ 𝒞)
10:    reachQ := backreach(new ∧ Q, ρf);
11:    new := new ∧ (¬P ∨ reachQ);
12:  end for
13: end fix

Lemma 5. For an FDS D, Alg. 4 computes the set of viable states.

The proof is similar to that of Lem. 4.

Theorem 1. Viability of a finite-state FDS can be checked in time polynomial in the number of states of the FDS and in space logarithmic in the number of states.

The polynomial upper bound follows from the analysis of the termination of the algorithm. The space complexity follows from the algorithms in [42]. We note that the number of states of the system may be exponential in the size of its description. Indeed, a system with n Boolean variables has potentially 2^n states. Thus, if we consider the size of the description of the FDS, these algorithms can be considered to run in exponential time and polynomial space, respectively.

The reliance of these algorithms on computing the set of reachable states makes it problematic to apply them to infinite-state systems. Indeed, for such systems the iterative search for new states may never terminate. For some infinite-state systems, other techniques for the computation of the reachable and back-reachable states are available. In such cases, viability algorithms may be available. See Chaps. 27, 28, and 21 for the treatment of infinite-state systems.

In what follows we use FDS as a formal model of systems.
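Continuing the explicit-state sketch from above, the justice pruning of Alg. 3 (lines 5–8) and the compassion pruning of Alg. 4 (lines 9–12) can be written as follows. Justice sets and compassion pairs are given extensionally as state sets; this is an illustration of the fixpoint structure, not the symbolic algorithm:

```python
def post(S, rho):
    return {t for (s, t) in rho if s in S}

def pre(S, rho):
    return {s for (s, t) in rho if t in S}

def lfp(start, step):
    # least fixpoint of a monotone step function above `start`
    new, old = set(start), None
    while new != old:
        old = set(new)
        new |= step(new)
    return new

def viable(init, rho, J=(), C=()):
    """Alg. 4 sketch. J: justice sets (to be visited infinitely often).
    C: compassion pairs (P, Q): infinitely often P implies infinitely often Q."""
    new = lfp(init, lambda S: post(S, rho))      # line 1: reachable states
    old = None
    while new != old:                            # the fix loop
        old = set(new)
        rho_f = {(s, t) for (s, t) in rho if s in new and t in new}
        new &= pre(new, rho_f)                   # line 4
        for Jset in J:                           # lines 5-8: keep only states
            # that can reach a J-state inside the candidate set
            new &= lfp(new & Jset, lambda S: pre(S, rho_f))
        for (P, Q) in C:                         # lines 9-12: a state must
            # either avoid P or be able to reach a Q-state
            new &= (new - P) | lfp(new & Q, lambda S: pre(S, rho_f))
    return new

# state 2 has only a self-loop and cannot reach the justice set {1}
print(viable({0}, {(0, 1), (1, 0), (2, 2), (0, 2)}, J=[{1}]))   # {0, 1}
# compassion satisfiable: a run may leave state 0 for good
print(viable({0}, {(0, 0), (0, 1), (1, 1)}, C=[({0}, {1})]))    # {0, 1}
# compassion unsatisfiable: state 0 repeats forever but 1 is unreachable
print(viable({0}, {(0, 0)}, C=[({0}, {1})]))                    # set()
```

The backward-reachability calls play the role of backreach(new ∧ J, ρf) and backreach(new ∧ Q, ρf); removing a state in one pruning step can invalidate others, which is why the outer fix loop re-runs all prunings until nothing changes.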
We now turn our attention to a way to describe properties of such systems. We start with a description of linear temporal logic.

3 Linear Temporal Logic

We use logic to specify computations of an FDS. Specifications that tell us what computations should and should not do are written formally as logic formulas. The temporal operators of the logic connect different stages of the computation and talk about the dependencies and relations between them. Correctness of computations can then be checked by checking whether they satisfy the logical specifications.

Here, we present linear temporal logic (abbreviated LTL). As its name suggests, it takes the linear-time view. That is, every point in time has a unique successor, and models are infinite sequences of worlds. Systems, as defined in Sect. 2, may have multiple successors for a given state. From LTL's point of view, a branching from some state results in two different computations that are considered separately (though they share a prefix). We start with a definition of LTL, viewing infinite sequences as models of the logic and a logical formula as defining a set of models. We then discuss several extensions of LTL and what properties it can (and cannot) express. Finally, we connect the discussion to the FDSs defined in Sect. 2, define model checking, and show how to solve it.

We note that the discussion of LTL is restricted to infinite models. When considering an FDS D, we assume that Runs(D) contains only infinite paths. If this is not the case, some modifications need to be made either to the definition of LTL or to the FDS itself in order to handle finite paths (cf. Chap. 15B).

3.1 Definition of Linear Temporal Logic

We assume a countable set of Boolean propositions P. A model σ for a formula ϕ is an infinite sequence of truth assignments to propositions. Namely, if P̂ is the set of propositions appearing in ϕ, then for every finite set P such that P̂ ⊆ P, a word in (2^P)^ω is a model. Given a model σ = σ0, σ1, ..., we denote by σi the set of propositions at position i. We refer to sets of models also as languages.
LTL formulas are constructed using the normal Boolean connectives for disjunction and negation, together with the temporal operators next, previous, until, and since:

ϕ ::= p | ¬ϕ | ϕ1 ∨ ϕ2 | Xϕ | Yϕ | ϕ1 U ϕ2 | ϕ1 S ϕ2   (1)

As mentioned, a model for an LTL formula is an infinite sequence. While the Boolean connectives have their normal roles, the temporal connectives relate the different locations in the model. Intuitively, the unary next operator X indicates that the rest of the formula is true in the next location in the sequence; the unary previous operator Y indicates that the rest of the formula is true in the previous location; the binary until operator U indicates that its first operand holds at all points in the future until some future point where its second operand holds; and dually (with respect to time) the binary since operator S indicates that its first operand holds at all points in the past until some past point where its second operand holds.

Formulas that do not use Y or S are pure future formulas. Formulas that do not use X or U are pure past formulas. In many cases (in practice and in this handbook), attention is restricted to pure future LTL and the past is omitted. In general, when interested in satisfaction of formulas at the beginning of models, a formula with past can be converted to a pure-future formula [17, 15]. The conversion, however, can be exponential [24].

Formally, for a formula ϕ and a position i ≥ 0, we say that ϕ holds at position i of σ, written σ, i ⊨ ϕ, and define it inductively as follows:

- For p ∈ P we have σ, i ⊨ p iff p ∈ σi.
- σ, i ⊨ ¬ϕ iff σ, i ⊭ ϕ.
- σ, i ⊨ ϕ ∨ ψ iff σ, i ⊨ ϕ or σ, i ⊨ ψ.
- σ, i ⊨ Xϕ iff σ, i + 1 ⊨ ϕ.
- σ, i ⊨ ϕ U ψ iff there exists k ≥ i such that σ, k ⊨ ψ and σ, j ⊨ ϕ for all j, i ≤ j < k.
- σ, i ⊨ Yϕ iff i > 0 and σ, i − 1 ⊨ ϕ.
- σ, i ⊨ ϕ S ψ iff there exists k, 0 ≤ k ≤ i, such that σ, k ⊨ ψ and σ, j ⊨ ϕ for all j, k < j ≤ i.

We note that the interpretation of future and past is non-strict. The until operator interpreted at location i refers to the locations greater than i as well as to i itself; and similarly for the past. If σ, 0 ⊨ ϕ, then we say that ϕ holds on σ and denote it by σ ⊨ ϕ. A set of models M satisfies ϕ, denoted M ⊨ ϕ, if every model in M satisfies ϕ.

We use the usual abbreviations ∧ and → for the Boolean connectives and the usual definitions of true and false. We introduce the following temporal abbreviations: F (eventually), G (globally), W (weak-until), and, for the past fragment, H (historically), P (once), and B (back-to), which are defined as follows: Fϕ ≡ true U ϕ, Gϕ ≡ ¬F¬ϕ, ϕ W ψ ≡ (ϕ U ψ) ∨ Gϕ, Pϕ ≡ true S ϕ, Hϕ ≡ ¬P¬ϕ, and ϕ B ψ ≡ (ϕ S ψ) ∨ Hϕ.

For example, the formula ϕ1 ≡ G(p → Fq) holds in models where every location at which p is true is followed later (or concurrently) by a location at which q holds. A location where p happened in the past and no q has happened since satisfies ¬q S (¬q ∧ p).
Thus, ϕ2 ≡ GF¬(¬q S (¬q ∧ p)) means that there is no location where p holds and no q occurs after it; i.e., when checked at the beginning of a model it is the same as G(p → Fq). Another common notation for the temporal operators uses ◯ for next, ⊖ for previous, ◇ for eventually (or future), □ for globally, ⊟ for historically, and ⟐ for once (past). Using this notation G(p → Fq) becomes □(p → ◇q) and G(p → Xq) becomes □(p → ◯q).
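The semantics can be made executable on ultimately periodic models prefix·loopʷ: the pure-future operators are computed by a least fixpoint over the finitely many distinct positions, and a since-subformula can be pre-computed forward, using the expansion (ϕ S ψ) at i+1 = ψ at i+1 ∨ (ϕ at i+1 ∧ (ϕ S ψ) at i), and added as a fresh proposition. The encoding below is an illustrative sketch, not a tool; it evaluates ϕ1 and ϕ2 from above on the model where p holds only at position 0.

```python
def ev(f, word, succ):
    """Truth values of a pure-future formula f at each quotient position of
    a lasso word (a list of sets of propositions) with successor map succ."""
    K = len(word)
    op = f[0]
    if op == 'true':
        return [True] * K
    if op == 'ap':
        return [f[1] in w for w in word]
    if op == 'not':
        return [not v for v in ev(f[1], word, succ)]
    if op == 'or':
        a, b = ev(f[1], word, succ), ev(f[2], word, succ)
        return [x or y for x, y in zip(a, b)]
    if op == 'X':
        a = ev(f[1], word, succ)
        return [a[succ(i)] for i in range(K)]
    if op == 'U':
        a, b = ev(f[1], word, succ), ev(f[2], word, succ)
        u = [False] * K                 # least fixpoint of the expansion law
        for _ in range(K + 1):
            u = [b[i] or (a[i] and u[succ(i)]) for i in range(K)]
        return u
    raise ValueError(op)

def NOT(f): return ('not', f)
def OR(f, g): return ('or', f, g)
def IMP(f, g): return OR(NOT(f), g)
def F(f): return ('U', ('true',), f)
def G(f): return NOT(F(NOT(f)))

def add_since(prefix, loop, phi, psi, name):
    """Annotate the lasso with a proposition `name` for (phi S psi), using
    the one-bit forward recurrence; unrolls the loop twice and checks the
    since-bit is already periodic (it is here; in general more unrollings
    may be needed)."""
    letters, s, vals = prefix + loop + loop, False, []
    for w in letters:
        s = psi(w) or (phi(w) and s)
        vals.append(s)
    n, L = len(prefix), len(loop)
    assert vals[n + L - 1] == vals[n + 2 * L - 1]
    ann = [w | ({name} if v else set()) for w, v in zip(letters, vals)]
    return ann[:n + L], ann[n + L:]

def lasso_eval(f, prefix, loop):
    word = prefix + loop
    succ = lambda i: i + 1 if i < len(word) - 1 else len(prefix)
    return ev(f, word, succ)

p, q = ('ap', 'p'), ('ap', 'q')
prefix, loop = [{'p'}], [set()]          # p at position 0, then nothing

phi1 = G(IMP(p, F(q)))                   # G(p -> F q)
print(lasso_eval(phi1, prefix, loop))    # [False, True]

# phi2 = G F not(~q S (~q & p)), with the since pre-computed as 's'
pre2, loop2 = add_since(prefix, loop,
                        lambda w: 'q' not in w,
                        lambda w: 'q' not in w and 'p' in w, 's')
phi2 = G(F(NOT(('ap', 's'))))
print(lasso_eval(phi2, pre2, loop2))     # [False, False, False]
```

On this model the two formulas agree at position 0 (both fail: the p at position 0 is never followed by a q) but disagree at position 1, where ϕ1 holds vacuously while ϕ2 still notices the earlier p.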

For an LTL formula ϕ, we denote by L(ϕ) the set of models that satisfy ϕ. That is, L(ϕ) = {σ | σ ⊨ ϕ}. We are mostly interested in whether an LTL formula is satisfied at the first location of a sequence. However, in general, satisfaction is relative to a location. This duality gives rise to two notions of equivalence for LTL formulas. We say that two LTL formulas ϕ and ψ are equivalent if L(ϕ) = L(ψ). We say that two LTL formulas ϕ and ψ are globally equivalent if for every model σ and every location i ≥ 0 we have σ, i ⊨ ϕ iff σ, i ⊨ ψ. For example, the two formulas ϕ1 and ϕ2 mentioned above are equivalent but not globally equivalent. Indeed, consider a model where p holds at the first location and both p and q are false forever after. At the second location of this sequence the formula ϕ1 holds. However, the formula ϕ2 does not hold at the second location, as it notices that the p at the first location is never followed by a q.

3.2 Safety Versus Liveness and the Temporal Hierarchy

A very important classification of LTL formulas is the distinction between safety and liveness. Intuitively, a safety property says that bad things never happen. A liveness property says that the system also does good things. Algorithmically, safety is much easier to check than liveness, and it is the most prevalent form of specification encountered in practice. Nevertheless, we further refine the properties expressible in LTL into a temporal hierarchy that relates to expressiveness, to the topology of the sets of models characterized by these properties, and to the types of deterministic FDS (or automata) that accept them. Formally, we have the following.

A property ϕ is a safety property for a set of models if for every model σ that violates ϕ, i.e., σ ⊭ ϕ, there exists an i such that every model σ' that agrees with σ up to position i, i.e., σ'_j = σ_j for all 0 ≤ j ≤ i, also violates ϕ.
As mentioned, safety properties specify bad things that should never happen. Thus, once an error has occurred (at location i), it is impossible to undo it: every possible extension includes the same error and is unsafe. A property ϕ is a liveness property for a set of models if for every prefix of a model w0, ..., wi there exists an infinite model σ that starts with w0, ..., wi such that σ ⊨ ϕ. As mentioned, liveness properties specify good things that should occur. Thus, regardless of the history of the computation, it is still possible to find an extension that fulfills the specification.

One prevalent form of safety is the invariant, a formula of the form Gp where p is propositional. In general, safety properties are easier to check than liveness properties. As we are interested in violations of safety, we may restrict attention to finite paths and hence to reachability of violations. In particular, for every safety property ϕ and FDS D, it is possible to construct an FDS D1 and an invariant Gp such that there is no initial path in D that violates ϕ iff all reachable states of

D1 satisfy p. Checking p over D1 is much easier, and this is indeed the way safety properties are checked in practice. Compare this with the more complex algorithm for viability in the presence of justice in Sect. 2 (in the next subsection we see that this is required for LTL model checking even if the model is fairness-free).

Safety: Gβ        Guarantee: Fβ
Obligation: ⋀i [Gβi ∨ Fγi]
Recurrence: GFβ   Persistence: FGγ
Reactivity: ⋀i [GFβi ∨ FGγi]

Fig. 4 The temporal hierarchy, where β, γ, βi, and γi are pure past temporal properties.

More theoretically, Alpern and Schneider [1] show that every language can be described as the intersection of a safety and a liveness property (not restricted to properties expressible in LTL).

Theorem 2. Every language L ⊆ (2^P)^ω is expressible as the intersection Ls ∩ Ll, where Ls is a safety property and Ll is a liveness property.

Proof. Consider a language L ⊆ (2^P)^ω. We use the following definitions.

pref(L) = {w0, ..., wn | w0, ..., wn is a prefix of some w ∈ L}
Ls = {w0, w1, ... | for every i, w0, ..., wi ∈ pref(L)}
Ll = L ∪ {w0, w1, ... | there is an i such that w0, ..., wi ∉ pref(L)}

It is simple to see that Ls is a safety property. In order to see that Ll is a liveness property, consider some prefix w0, ..., wi. If w0, ..., wi ∈ pref(L), then clearly there is some extension σ of w0, ..., wi such that σ ∈ L ⊆ Ll. Otherwise, if w0, ..., wi ∉ pref(L), then all extensions σ of w0, ..., wi are in Ll. Finally, it is simple to see that L ⊆ Ls and L ⊆ Ll, hence L ⊆ Ls ∩ Ll. In the other direction, consider a word σ ∈ Ls ∩ Ll. It follows that either σ ∈ Ls ∩ L or σ ∈ Ls ∩ {w0, w1, ... | there is an i such that w0, ..., wi ∉ pref(L)}. In the first case, clearly σ ∈ L. The second intersection must be empty, as σ cannot at the same time have all its prefixes in pref(L) and some prefix not in pref(L).

We now discuss a more refined classification of temporal properties.
Although Safety as it appears in this classification seems different from the description of safety above, Th. 3 shows that they are actually the same. This classification is important, for example, in synthesis (see Chap. 25) and in deductive verification (see Chap. 14B). In Fig. 4 we define six different classes of properties. In both synthesis and deductive verification, formulas in one of these classes have a better (i.e., more efficient or simpler) treatment than formulas in higher classes. Furthermore, this classification is related to the ability to convert such properties to deterministic ω-automata (see Chap. 7) and to the topological complexity of properties. We do not state formally most properties of this hierarchy. As shown in Fig. 4, all containments are strict. Furthermore, there are Safety properties that are not Guarantee properties and vice versa. Similarly, there are Recurrence properties that are

not Persistence properties and vice versa. Each of Obligation and Reactivity forms a strict hierarchy according to the number of conjuncts. The lowest classes in the Obligation and Reactivity hierarchies (i.e., those with one conjunct) contain all the classes below them. All classes are closed under disjunction and conjunction, and Obligation and Reactivity are also closed under negation. The complement of every Safety property is a Guarantee property and vice versa. Similarly, the complement of every Recurrence property is a Persistence property and vice versa. We state formally that the classification is exhaustive.

Theorem 3. Every Safety property expressible in LTL can be expressed as a formula of the form Gβ, where β is a pure past formula. Every LTL formula can be expressed as a Reactivity formula.

We note that the translation to this normal form causes an explosion in the size of the formula. The translation uses as subroutines translations between LTL, automata, and regular expressions [15]. A characterization of safety and liveness in terms of pure-future LTL, and decision procedures for deciding whether a formula is safety or liveness, are given in [37]. Further details about these classes, formal statements of the claims above, their proofs, and the relation of these classes to topology and to automata on infinite words are available in [29]. The intersection of Safety and Guarantee is studied in [22]. Exhaustiveness of the hierarchy is covered in [27].

3.3 Extensions of LTL

Logicians have studied for many years the first-order logic and second-order logic of infinite sequences, denoted FOL1 and S1S, respectively. In our context, these logics are restricted to use the relations <, =, and the function +1. Naturally, LTL was compared with these logics. It was shown that LTL and FOL1 are equally expressive [17]. That is, for every FOL1 formula there is an LTL formula that characterizes the same models.
As the semantics of LTL can be expressed in FOL1, it is quite clear that LTL cannot be more expressive than FOL1. Showing that LTL is as expressive as FOL1 is more complicated and is not covered here. This observation led to the declaration that LTL is expressively complete [15]. However, some very natural properties that are required for specifying programs cannot be expressed in LTL [43]. Following this realization, different studies of how to extend the expressive power of LTL so that it becomes equivalent to S1S followed, culminating in the definition of PSL (see Chap. 15B). S1S formulas extend the Boolean connectives with location and predicate variables. We assume a countable set of location variables var = {x, y, ...} and a countable set of predicate variables VAR = {X, Y, ...}. We define the set of terms (τ), atomic formulas (α), and formulas (ϕ) of S1S.