A Modern Mathematical Theory of Co-operating State Machines

Antti Valmari

Abstract

Valmari, Antti (2005). A Modern Mathematical Theory of Co-operating State Machines. In Proceedings of the Algorithmic Information Theory Conference, Vaasa 2005. Proceedings of the University of Vaasa, Reports 124, 201–214. Eds S. Hassi, V. Keränen, C.-G. Källman, M. Laaksonen, and M. Linna.

In this work we apply theoretical results from so-called process algebras to state machines, and develop the theory further. State machines are a central concept in the practical development of telecommunication protocols and embedded software. Unfortunately, the engineers' notion of a state machine is vague and varying. We hope that, with the aid of our theory, engineers can improve their understanding of state machines and of systems that consist of them, and thus become capable of designing better systems with less effort. This article focuses on the theoretical part of our endeavour.

Antti Valmari, Institute of Software Systems, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere, Finland. E-mail: Antti.Valmari@tut.fi

Keywords: Formal Systems. Mathematics Subject Classification (2000): 68Q85; 68N30.

1. Introduction

Process algebras were originated by Tony Hoare and Robin Milner around 1980, and have since become an extensive research topic. Hoare's version is known as Communicating Sequential Processes, or CSP; Roscoe (1998) is a good book about it. Milner's Calculus of Communicating Systems (CCS) is explained in Milner (1989). Process algebras were intended as theories of concurrency. Such theories have many potential applications in the design of telecommunication systems, embedded software, and so on. Although many practice-oriented process-algebraic analysis methods and tools, and even an ISO standard for notation (see Bolognesi and Brinksma (1987)), have been developed, process algebras have had very limited success in industry.
This is certainly to a large extent due to the fact that practical engineers use state machines extensively, while the corresponding concept in process algebras is expressed with clumsy, mysterious-looking algebraic notation, as was criticised by Karsisto (2003). Furthermore, giving a sound meaning to recursive algebraic definitions has been a major and mathematically demanding problem in process algebras. For explicit state machines this issue, and thus much of the process-algebraic literature, is not relevant at all. Another problem is that process algebras put emphasis solely on actions, while engineers need actions, states, or both, depending on the application. Furthermore, there has been a proliferation of mutually inconsistent process-algebraic formalisations of the important notion of abstracted, or externally observable, behaviour, as evidenced by van Glabbeek (1993). This has caused a lot of confusion among (at least) outsiders. Altogether, engineers cannot find what they need in the process algebra literature, although much of it is there.

It is often assumed that state machines can be understood in terms of the theory of finite automata. This is incorrect, mostly but not only because finite automata theory does not consider concurrently executing entities. An example of this can be seen in early versions of the widely used CCITT specification language SDL (see, e.g., Færgemand (1993)). In them, a system is modelled as a collection of Mealy machines with multiple outputs, each machine having a single input queue. This model runs into big problems in certain common design situations, so an obscure "save" mechanism had to be added. As a whole, the design is absurd.

The goal of this work is to develop a solid mathematical theory of co-operating state machines that can be easily applied to practical engineering needs. Its basic concepts should be simple and elegant, but they need not be handy in practical engineering work, as is the case with good fundamental theories and unlike with practical programming languages.
This part of the theory is not meant for designing systems, but for understanding the fundamental concepts and phenomena. However, the theory should facilitate the introduction of powerful practical constructs, whose meaning can be defined via clear, straightforward mappings to the concepts of the theory. Our theory inherits a lot from process algebras. To meet engineering needs, concepts for talking about the properties of states were added. Furthermore, picking the right

pieces from the variety of possibilities offered by different process algebras was nontrivial, and putting the pieces together with each other and with the state-related concepts required fine-tuning, or even more work, here and there. Although some details still need polishing, the big picture is clear enough for presentation.

2. Plain state machines

We reserve the symbol τ for a special purpose that will be explained later. A plain state machine M can be defined as a tuple (S, Σ, Π, ∆, val, Ŝ) whose components are as follows. S is a (not necessarily finite) set of states. Ŝ is the set of initial states, and ∅ ≠ Ŝ ⊆ S. Σ is any set that does not contain τ, and it is known as the action alphabet. It will be important when defining the interaction of (plain) state machines with each other. The transition relation ∆ is any collection of triples of the form (s, a, s′), where s and s′ are states and either a = τ or a ∈ Σ. The middle element a is the label of the transition. The elements of the set Π are called state propositions, and val is a function from S to subsets of Π. We (unfortunately have to) assume that if ŝ₁ ∈ Ŝ and ŝ₂ ∈ Ŝ, then val(ŝ₁) = val(ŝ₂).

Clearly S and ∆ induce a directed graph. The idea is, of course, that M starts its life in a nondeterministically chosen element of Ŝ, and repeatedly moves from one state to another along the elements of ∆. If M is isolated, then it can freely choose from among the output transitions of its current state, but if it is connected to other state machines, then they may prevent M from executing transitions, as will be discussed later. The environment of M cannot block those transitions whose label is τ. M either makes infinitely many moves (one move for each natural number, to be more precise), or stops because it reaches a state without output transitions, or because the environment blocks all output transitions of the state. The label of a transition determines how the outside world sees the execution of the transition.
If the label is τ, then the outside world does not see it at all (but may be able to see its consequences later on). The components Π and val model what the outsiders see of the state of M: val(s) lists precisely the state propositions that hold in s. This information is structured as a set of propositions mainly because later definitions need that. Furthermore, many

concurrency researchers use state-based formalisms that have Π and val with this structure, and no Σ (e.g., Browne, Clarke, and Grumberg (1988)). Such a system can be interpreted as a Kripke structure whose properties can be analysed with temporal logics. Those techniques naturally carry over to state machines. As was discussed in the introduction, process algebras have Σ but no Π.

An important, partly open, but fortunately not critical issue is what (or who) precisely are the outsiders that see the values val(s). Certainly one task of Π and val is to give the atomic entities with which the correctness criteria of the system in question can be stated. For instance, the Π of a mutual exclusion system could consist of the propositions "customer i is requesting service" and "customer i is using the service" for i ∈ {1, 2}. Then one can write (and hopefully verify) a formula stating that customers 1 and 2 are never simultaneously using the service, and that if customer 1 is now requesting service, then it will be using the service at some future moment in time. Π and val are thus important for people who try to analyse the correctness of the system. What is open is whether it is reasonable to let other state machines of the system observe, or even block, transitions based on the values of val(s). The current intuition is that it does not matter whether observing is allowed, but blocking should not be allowed. The important insight is that what the verifier should be shown of a state machine is a different issue from what the neighbouring state machines should be shown. Process algebras do not make this distinction, as they use actions in both roles. Most other theories fail to make this distinction because they consider only closed systems; that is, all components of the system, and even the necessary parts of its environment, are in the model. Therefore, the nature of the connections between state machines is irrelevant for those theories.
Unfortunately, a discussion on potential benefits of the distinction would require more background on abstract semantics (Section 6) than space allows. Nothing in the model corresponds to the final states of finite automata.
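As a concrete (if naive) illustration, the tuple (S, Σ, Π, ∆, val, Ŝ) can be transcribed directly into a data structure. The following Python sketch, with field names and a well-formedness check of my own devising (not from the paper), is only meant to make the components tangible:

```python
from dataclasses import dataclass

TAU = "tau"  # the reserved invisible action τ; by definition not in Σ

@dataclass
class PlainStateMachine:
    # The components of (S, Σ, Π, ∆, val, Ŝ); the field names are mine.
    states: set        # S (finite here, although the theory allows infinite S)
    alphabet: set      # Σ
    props: set         # Π, the state propositions
    transitions: set   # ∆: triples (s, a, t) with a in Σ or a == TAU
    val: dict          # val: S -> subsets of Π
    initial: set       # Ŝ, a non-empty subset of S

    def is_well_formed(self):
        return (TAU not in self.alphabet
                and bool(self.initial) and self.initial <= self.states
                and all(s in self.states and t in self.states
                        and (a == TAU or a in self.alphabet)
                        for (s, a, t) in self.transitions)
                and all(self.val[s] <= self.props for s in self.states)
                # all initial states must agree on val, as required above
                and len({frozenset(self.val[s]) for s in self.initial}) == 1)

# A two-state lamp: pressing the button toggles the proposition "lit".
M = PlainStateMachine(
    states={"off", "on"}, alphabet={"press"}, props={"lit"},
    transitions={("off", "press", "on"), ("on", "press", "off")},
    val={"off": set(), "on": {"lit"}}, initial={"off"})
assert M.is_well_formed()
```

The check mirrors the conditions stated above, including the (unfortunate) requirement that all initial states evaluate to the same set of propositions.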

3. Co-operating plain state machines

Let M₁, ..., Mₙ be plain state machines that we intend to connect together, and let s₁, ..., sₙ be their current states. We assume that their Π's are mutually disjoint. Our theory uses the simple form of interaction where the system can make a transition to (s′₁, ..., s′ₙ) with the label a if and only if either (i) a ≠ τ, (sᵢ, a, s′ᵢ) is a transition for those Mᵢ to whose action alphabet a belongs, and s′ᵢ = sᵢ for the remaining Mᵢ; or (ii) a = τ, some Mᵢ has the transition (sᵢ, τ, s′ᵢ), and s′ⱼ = sⱼ for the remaining Mⱼ.

This kind of interaction is synchronous in the sense that an interaction consists of all participating state machines executing an action simultaneously. Process algebras use synchronous interaction, although often not this precise form. Long experience has shown that synchronous interaction is entirely general in the sense that all other forms of interaction can easily be modelled with it, whereas constructing synchronous interaction from other forms is often difficult or impossible. (This result relies on the assumption that more than two entities can participate in the same interaction, and that all of them may freely have alternative transitions from their current state, including alternative interactions. CCS does not satisfy the former; see Milner (1989).) For instance, communication via message queues is modelled by treating the queue as a state machine in its own right, with which first the sending state machine and later the receiving state machine interacts.

The intuitively natural notions of input and output prove to be certain kinds of roles in restricted forms of synchronous interaction. Let us consider the communication of a value from the range 0, 1, ..., 9. Ten different actions, say a0, a1, ..., a9, are reserved for the purpose. A state machine O that is ready to output has one transition from its current state, and its label corresponds to the value that it wants to output.
For instance, if O wants to output the value 6, then the label is a6. A state machine I that is ready to input has a transition for each label from its current state, and these transitions lead to different states. If O and I are connected together, they synchronously execute their a6-labelled transitions. Thus O determines the value that is communicated, and the value determines the next state of I.
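The interaction rule above can be sketched operationally. In the following Python sketch the encoding of a machine as a pair (alphabet, transitions) and all the names are my own, chosen only to illustrate the rule, not taken from the paper:

```python
import itertools

TAU = "tau"  # the invisible action τ

def sync_transitions(machines, vector):
    """Joint transitions enabled at the state vector (s1, ..., sn).

    Each machine is a pair (alphabet, transitions), where transitions is a
    set of triples (s, a, t).  A visible action a fires iff every machine
    whose alphabet contains a can take an a-transition (they all move
    together; the rest stay put); a τ-transition is taken by one machine
    alone and cannot be blocked by the others.
    """
    moves = []
    for a in set().union(*(alph for alph, _ in machines)):
        parts = [i for i, (alph, _) in enumerate(machines) if a in alph]
        # per participant, its a-successors from its current local state
        choices = [[t for (s, x, t) in machines[i][1]
                    if s == vector[i] and x == a] for i in parts]
        if all(choices):
            for combo in itertools.product(*choices):
                nxt = list(vector)
                for i, t in zip(parts, combo):
                    nxt[i] = t
                moves.append((a, tuple(nxt)))
    for i, (_, trans) in enumerate(machines):
        for (s, x, t) in trans:
            if s == vector[i] and x == TAU:
                nxt = list(vector)
                nxt[i] = t
                moves.append((TAU, tuple(nxt)))
    return moves

# The value-communication example: O outputs 6, I can input any digit.
O = ({f"a{d}" for d in range(10)}, {("o0", "a6", "o1")})
I = ({f"a{d}" for d in range(10)}, {("i0", f"a{d}", f"got{d}") for d in range(10)})
print(sync_transitions([O, I], ("o0", "i0")))  # only the a6 handshake fires
```

Because both alphabets contain all ten actions, only a6, which both machines can execute, is enabled; O has chosen the value, and the handshake moves I into the state that records it.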

All this indicates that synchronous interaction is the fundamental form of interaction. The precise form of synchronous interaction presented above is common, but not pervasive, in process algebras. It was chosen because intuitively it looks more fundamental than the alternative forms. Together with two other natural process-algebraic operations, namely relational renaming and hiding, quite complicated synchronous interaction schemes can easily be built from it (with a potential exponential blow-up), as was demonstrated by Karsisto (2003). They include all that are commonly used in process algebras. Furthermore, Valmari and Kervinen (2002) proved that a certain natural family of problems is PSPACE-complete with it, but EXPSPACE-complete with its common alternatives. This is rather concrete complexity-theoretic evidence. The parallel composition operator of Karsisto (2003) is an example of a powerful, practice-oriented engineering construct whose meaning is defined in terms of the basic concepts of the theory via a clear mapping.

The result of connecting M₁, ..., Mₙ together is a plain state machine, namely the one with S = S₁ × ⋯ × Sₙ, Σ = Σ₁ ∪ ⋯ ∪ Σₙ, Π = Π₁ ∪ ⋯ ∪ Πₙ, ∆ consisting of the transitions explained above, val((s₁, ..., sₙ)) = val₁(s₁) ∪ ⋯ ∪ valₙ(sₙ), and Ŝ = Ŝ₁ × ⋯ × Ŝₙ. It is common to use a refined definition in which those states (together with their evaluations and adjacent transitions) that cannot be reached from any element of Ŝ via zero or more elements of ∆ are thrown away. This operation is usually denoted with the expression M₁ ∥ ⋯ ∥ Mₙ. There are also other operators, although we skip them here. With ∥ and them, one can write an expression that describes how the system is built from individual state machines, and that formally produces a state machine. This state machine represents (or is) the behaviour of the system.

4. Equivalence of state machines at the detailed level

It is very generally accepted in process algebras, and to some extent also elsewhere (e.g., Browne et al.
(1988)) that the right notion to compare state machines at this level of treatment is bisimilarity, also known as strong bisimilarity. Two state machines

Mᵢ = (Sᵢ, Σᵢ, Πᵢ, ∆ᵢ, valᵢ, Ŝᵢ) are bisimilar if and only if Σ₁ = Σ₂, Π₁ = Π₂, and there is a relation ∼ ⊆ S₁ × S₂ such that all of the following hold:

- For every element ŝ₁ of Ŝ₁, there is an element ŝ₂ of Ŝ₂ such that ŝ₁ ∼ ŝ₂.
- For every element ŝ₂ of Ŝ₂, there is an element ŝ₁ of Ŝ₁ such that ŝ₁ ∼ ŝ₂.
- If s₁ ∼ s₂, then val₁(s₁) = val₂(s₂).
- If (s₁, a, s′₁) ∈ ∆₁ and s₁ ∼ s₂, then there is an s′₂ such that (s₂, a, s′₂) ∈ ∆₂ and s′₁ ∼ s′₂.
- If (s₂, a, s′₂) ∈ ∆₂ and s₁ ∼ s₂, then there is an s′₁ such that (s₁, a, s′₁) ∈ ∆₁ and s′₁ ∼ s′₂.

(We say that the latter transition (s₃₋ᵢ, a, s′₃₋ᵢ) simulates the former transition (sᵢ, a, s′ᵢ).)

The important remark in this context is that this equivalence is neither isomorphism nor the language equivalence of finite automata theory. Bisimilarity can be thought of as a generalisation of isomorphism, where the bijection is replaced by the relation ∼. Assume that ∼ is a relation that satisfies the last two items of the definition under the assumption that S₁ = S₂, ∆₁ = ∆₂, val₁ = val₂ and Ŝ₁ = Ŝ₂ (that is, ∼ compares the states of a single state machine to each other). Then the largest such ∼ exists, and it is an equivalence. If, furthermore, each state is reachable from some initial state via transitions, then the result of merging the equivalence classes of the largest ∼ into single states is the unique smallest state machine that is bisimilar with the original one.

5. Adding variables

Variables (in the sense used in conventional programming languages) are crucial from the practical engineering point of view. Fortunately, there is a pleasant way of adding them to our state machine formalism which facilitates formal reasoning in their presence, while at the same time completely avoiding issues regarding concrete syntax. We assume that each state has an associated set of variables, and each variable has an associated, nonempty set called its type.
Furthermore, actions a (where a ≠ τ) are replaced by entities of the form a⟨p₁, ..., pₖ⟩, where the pᵢ are the parameters of the action. They represent values that the state machine communicates during an interaction. For instance, when talking about communication between a bankteller machine and the bank, it may be that a = withdrawal, p₁ is the account number,

and p₂ is the amount. We assume that there is some set U that contains at least all the values that are needed as pᵢ. Let the types of the variables of the states s and s′ be T₁, ..., Tₙ and T′₁, ..., T′ₘ. A transition is now of the form (s, a, k, R, s′), where k is a natural number and R ⊆ T₁ × ⋯ × Tₙ × Uᵏ × T′₁ × ⋯ × T′ₘ. The interpretation is that the configuration consisting of the state s together with the values v₁, ..., vₙ in its variables is really one state ⟨s, v₁, ..., vₙ⟩ of a plain state machine known as the unfolded machine, and there is a transition from ⟨s, v₁, ..., vₙ⟩ to ⟨s′, v′₁, ..., v′ₘ⟩ with the label a⟨p₁, ..., pₖ⟩ if and only if R(v₁, ..., vₙ, p₁, ..., pₖ, v′₁, ..., v′ₘ) holds. (Any experience with practical state machines entices one to stipulate that the v′ₕ should be defined as a function of the vᵢ and the pⱼ, and that those pⱼ that represent output should be a function of the vᵢ. Indeed, all earlier formalisms of this kind that I remember reflect this kind of thinking in one way or another. However, when modelling more and more systems, one eventually realises that these functions must sometimes be nondeterministic. So it is better to replace them with a relation.)

All the fancy notation used in practical state machine languages for specifying preconditions (or guards), transition-time assignments to variables, input parameters, output parameters and postconditions can now be thought of as means of specifying R. Thus the problem of choosing or designing a notation has been totally isolated from our theory of state machines. The designer of the notation need not know our theory, and we can continue without worrying about the notation.

Unfolding is an old idea. Textbooks on Turing machines often say that some piece of information is stored in the finite control of the machine. That is unfolding. The semantics of Coloured Petri Nets has been defined via unfolding. The meaning of a state machine with variables is its unfolded plain state machine.
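For finite types and a finite parameter universe, unfolding can be carried out mechanically. The following sketch uses an encoding of my own (dictionaries, and Python predicates standing in for the relation R); it only illustrates the definition, under the assumption that everything is finite:

```python
import itertools

def unfold(vm):
    """Unfold a state machine with variables into plain states and transitions.

    vm is a finite encoding of the Section 5 idea (the encoding is mine):
      vm["vars"][s] -- tuple of finite types (lists of values) for state s
      vm["U"]       -- the parameter universe U, a finite list here
      vm["trans"]   -- tuples (s, a, k, R, t), where R(vals, params, new_vals)
                       is a predicate standing for the relation R above
    Each configuration <s, v1, ..., vn> becomes one plain state, and an edge
    labelled (a, p1, ..., pk) exists iff R holds, as in the definition.
    """
    def configs(s):
        return [(s, vals) for vals in itertools.product(*vm["vars"][s])]
    states = {c for s in vm["vars"] for c in configs(s)}
    transitions = set()
    for (s, a, k, R, t) in vm["trans"]:
        for (_, vals) in configs(s):
            for params in itertools.product(vm["U"], repeat=k):
                for (_, new_vals) in configs(t):
                    if R(vals, params, new_vals):
                        transitions.add(((s, vals), (a,) + params, (t, new_vals)))
    return states, transitions

# A one-variable counter modulo 3; the action "inc" carries no parameters,
# and R relates each value v to its successor (v + 1) mod 3.
vm = {"vars": {"c": ([0, 1, 2],)},
      "U": [],
      "trans": [("c", "inc", 0, lambda v, p, w: w[0] == (v[0] + 1) % 3, "c")]}
S, T = unfold(vm)
assert len(S) == 3 and len(T) == 3
```

Because R is given as an arbitrary relation rather than a function, nondeterministic updates (several new_vals related to the same vals and params) unfold without any special treatment.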
Thus, variables can be thought of as just a notational convenience. However, it is relatively simple to define the interconnection of state machines with variables (denoted here with ∥′) such that unfold(M₁ ∥′ ⋯ ∥′ Mₙ) ≅ unfold(M₁) ∥ ⋯ ∥ unfold(Mₙ), where ≅ denotes isomorphism, and the same holds for the other common operators used in building systems. This makes it possible to lift formal results and reasoning to the

case where variables are present. The fact that practical (i.e., variable-containing) co-operating state machines can be formally combined into one (variable-containing) state machine is little known. It is true that the combinations tend to grow quickly in size as more and more state machines are added. Even so, when the idea becomes more widely known, it will perhaps find important uses.

6. Externally observable behaviour

An unparalleled aspect of process algebras is full abstraction (ACM (1991) mentions it among the reasons for giving the Turing Award to Milner). Roughly speaking, it means that the behaviour of a component is described at such a level that everything that is essential regarding its co-operation with other components (which are unknown and may be just anything), but nothing more, is presented. When a system is built from state machines one at a time, it is common that some actions that were needed for connecting state machines together are entirely uninteresting from a larger point of view. For instance, a person is interested in the actions that happen between her and the bankteller machine, and very interested in the actions that concern her bank account, but the activity in the telecommunication link that connects the bankteller machine to the bank is implementation technology that should do its duty out of sight. For this purpose, process algebras contain, in one form or another, a hiding operator, with which actions can be removed from the action alphabet, converting them to τ wherever they are used as transition labels. After hiding, instead of containing 73 properly named transitions between pressing the button and giving the money, the system shows 73 τ-transitions. To get rid of them, each process algebra contains an abstract semantics.

A good start towards an abstract semantics is to mimic the language equivalence of finite automata, with every state treated as a final state. We ignore the state propositions for the moment.
Under this assumption, a trace of a plain state machine is any finite sequence that can be constructed by starting at any initial state, taking a finite number of successive transitions, writing down their labels, and removing all τ's from the result. Two plain

state machines with an empty Π are trace equivalent if and only if they have the same action alphabet and the same traces. Trace equivalence is, however, insufficient (even with an empty Π), because it throws away most information regarding deadlocks, livelocks and other situations where, although not providing any wrong service, the machine fails to provide the right service. Thus something must be added to the abstract semantics. (In this way we leave the realm of finite automata theory.) Unfortunately, unanimity ends at this point. Literally hundreds of different abstract semantics have been suggested, representing different opinions as to what aspects of the behaviour of the system should be taken into account, and different solutions to the difficult problems that arise when trying to ensure nice mathematical properties for the semantics. (van Glabbeek (1993) lists quite a few semantics.) The most important property is that the equivalence induced by the semantics must be a congruence. That is, if M′ ≈ M and f(M) is a system containing M as a component, then it must hold that f(M′) ≈ f(M). This property is an essential ingredient of full abstraction.

One major group of abstract equivalences consists of Milner's (1989) observation equivalence and its variants. Let us write s =a₁⋯aₙ⇒ s′ whenever there is a path from state s to state s′ such that the sequence of the non-τ transition labels along the path is a₁ ⋯ aₙ. (Thus the traces are { σ ∈ Σ* ∣ ∃ŝ ∈ Ŝ : ∃s′ : ŝ =σ⇒ s′ }.) Observation equivalence, also known as weak bisimilarity, is defined like the strong bisimilarity of Section 4, except that instead of transitions (sᵢ, a, s′ᵢ) ∈ ∆ᵢ, sequences sᵢ =a⇒ s′ᵢ are used, where a ∈ Σ or a = ε (the empty sequence). The definition may be written equivalently as stating that the (sᵢ, a, s′ᵢ) ∈ ∆ᵢ are simulated by paths s₃₋ᵢ =b⇒ s′₃₋ᵢ, where a = b ∈ Σ, or a = τ and b = ε. Π-less state machines can be minimised and compared according to observation equivalence in polynomial time.
This is because there is a polynomial-time saturation operation, whose output is observation equivalent to its input, such that saturated machines are observation equivalent if and only if they are strongly bisimilar. It consists of adding the transition (s, a, s′) wherever a ∈ Σ and s =a⇒ s′, and the transition (s, τ, s′) wherever s =ε⇒ s′.

The branching bisimilarity of van Glabbeek and Weijland (1989) is strictly stronger than observation equivalence. In it, each (s₁, a, s′₁) ∈ ∆₁ is simulated either by

s₂ =ε⇒ s″₂ and (s″₂, a, s′₂) ∈ ∆₂, where s″₂ is a state such that s₁ ∼ s″₂; or by s′₂ = s₂, in which case a must be τ. Of course, the elements of ∆₂ must be simulated in the same way by M₁. A saturated Π-less state machine is not necessarily branching bisimilar to the original one. Even so, polynomial-time algorithms for minimisation and comparison are known. Branching bisimilarity is much less popular than observation equivalence, perhaps because of its later publication and more complicated definition. It has, however, the advantage that a variant of it preserves properties stated in the very natural and widely used state-based logic next-less CTL (De Nicola and Vaandrager (1995)). No natural state-based logic counterpart of observation equivalence is known.

Yet another popular group of semantics is built around the notion of (stable) failure. A stable failure is a pair (σ, A) such that σ is a trace of the Π-less state machine in question, A ⊆ Σ, and the machine can execute σ so that it ends up in a state none of whose outgoing transitions is labelled with an element of A ∪ {τ}. A failure is a related, strictly weaker concept. Valmari (1995) proved that if one tries to generalise the notion of deadlock (that is, a situation where the machine cannot do anything at all) such that the resulting equivalence is a congruence, then stable failures are what one ends up with. The trace σ leads to a deadlock if and only if (σ, Σ) is a stable failure. (Stable) failures are a linear-time notion in that, unlike observation equivalence and branching bisimilarity, they do not keep track of what alternative choices the Π-less state machine could have made in the middle of its execution. Consider a professor who sometimes drinks coffee and sometimes tea. Both stable failures and observation equivalence tell what she takes today, but only observation equivalence reveals whether she decided between tea and coffee before or after entering the cafeteria.
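For a finite machine, traces and stable failures can be enumerated directly from the definitions. The following sketch is my own (the encoding, the names, and the exploration bound are assumptions, not the paper's); a real tool would work on the state graph rather than enumerate executions:

```python
TAU = "tau"  # the invisible action τ

def traces_and_stable_failures(alphabet, transitions, initial, bound=6):
    """Traces and stable failures of a Π-less machine, up to a length bound.

    A trace is an execution's label sequence with the τ's removed.  A stable
    failure pairs a trace sigma with a refusal set: the machine can execute
    sigma and end in a state that has no τ-transition and no transition
    labelled with any element of the set.  Only the maximal refusal of each
    stable end state is recorded; all its subsets are refused as well.
    """
    traces, failures, seen = set(), set(), set()
    stack = [(s, ()) for s in initial]
    while stack:
        s, sigma = stack.pop()
        if (s, sigma) in seen:
            continue
        seen.add((s, sigma))
        traces.add(sigma)
        enabled = {a for (p, a, t) in transitions if p == s}
        if TAU not in enabled:          # s is a stable state
            failures.add((sigma, frozenset(alphabet - enabled)))
        if len(sigma) < bound:
            for (p, a, t) in transitions:
                if p == s:
                    stack.append((t, sigma if a == TAU else sigma + (a,)))
    return traces, failures

# A machine that deadlocks after "a": the stable failure (("a",), {"a", "b"})
# records a trace after which even the whole alphabet is refused.
tr, fl = traces_and_stable_failures({"a", "b"}, {("s0", "a", "s1")}, {"s0"})
assert (("a",), frozenset({"a", "b"})) in fl
```

The last assertion is exactly the deadlock criterion stated above: σ leads to a deadlock if and only if (σ, Σ) is a stable failure.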
The failures family of equivalences is complicated because of problems in dealing with livelocks, or divergences. A divergence trace is a trace such that, after at least one way of executing it, the Π-less state machine can execute an infinite sequence of τ-labelled transitions. One mathematically well-working possibility is to keep track of traces and stable failures and to ignore divergences; see, e.g., Roscoe (1998) or Valmari (1995). Then almost all information on livelocks is lost. In the solution known as catastrophic divergence, those divergence traces are preserved whose proper prefixes are not divergence traces, and absolutely no information is preserved about the behaviour after

them. This solution is popular in the CSP world, because it gives unique meanings to recursive process-algebraic definitions. The Chaos-Free Failures Divergences (CFFD) semantics presented by Valmari and Tienari (1991) preserves precise information on stable failures and divergence traces, but does not facilitate (easy) algebraic recursive definition of processes. With state machines this deficiency does not matter, however. To ensure the congruence property with infinite machines, infinite traces were later added to it. See Valmari (2000) for a discussion of CFFD and its relation to the semantics used with CSP.

As defined above, observation equivalence and branching bisimilarity ignore divergences. They can be made divergence-sensitive in more than one way. Yet another source of diversity is a congruence problem that is related to the operators used for specifying alternative behaviours in process algebras. This semantics explosion is actually a consequence of the fact that concurrent systems are often nondeterministic. Namely, all the semantics that were discussed in this section collapse to the same one if the systems are deterministic in the following sense: there are neither τ-transitions from initial states nor divergence traces, and for every σ ∈ Σ* and a ∈ Σ, if at least one way of executing σ from an initial state leads to a state s such that s =a⇒, then the end state s′ of any way of executing σ from any initial state satisfies s′ =a⇒. Here s =a⇒ denotes that there is some s′ such that s =a⇒ s′. This notion of determinism allows the presence of τ-transitions.

No matter what semantics we choose, we still have the problem of extending it to a non-empty Π. The requirement that if s₁ ∼ s₂, then val(s₁) = val(s₂), works well with branching bisimilarity, provided that divergences are handled in a certain complicated way; but it does not work well with observation equivalence, and it does not apply to failure-based equivalences.
Hansen, Virtanen, and Valmari (2003) solved the problem by replacing actions with pairs (a, P) in the =σ⇒-relation, where a is an action and P is the set of those state propositions whose values change during the transition. One should also record val(ŝ) for each (equivalently, any) initial state ŝ. The earlier role of τ as the hidden action is now taken by the pair (τ, ∅); so, if P is non-empty, then (τ, P) is included in the trace. It is important that the semantics keeps track of the changes of the values of state propositions instead of the values per se, because that

gives us a unique value for "no change", in accordance with the goal of full abstraction. Naturally, we introduce a new operator for removing state propositions from Π and val.

7. Concluding remarks

An area where the theory is still being developed further is the relation of state information to action information. It seems that Π and val can be reduced to transition labels in a certain precise sense. Such a reduction would, on the one hand, reveal whether letting state propositions affect synchronisation would change the expressive power of the formalism or invalidate important properties; and, on the other hand, greatly simplify the importing of more results from process algebras into the theory. This reduction also sheds new light on results by Kaivola and Valmari (1992) and Valmari (2000) on the relationship between well-known state-based logics and process-algebraic semantics.

Many of the ideas in this work have been implemented in the Tampere Verification Tool of Virtanen, Hansen, Valmari, Nieminen and Erkkilä (2004). The tool is being developed simultaneously with the theory.

References

ACM (1991). http://www.acm.org/awards/turing citations/milner.html

Bolognesi, T. & E. Brinksma (1987). Introduction to the ISO Specification Language LOTOS. Computer Networks and ISDN Systems 14, 25–59.

Browne, M.C., E.M. Clarke & O. Grumberg (1988). Characterizing Finite Kripke Structures in Propositional Temporal Logic. Theoretical Computer Science 59, 115–131.

De Nicola, R. & F. Vaandrager (1995). Three Logics for Branching Bisimulation. Journal of the ACM 42:2, 458–487.

Færgemand, O. (1993). Introduction to SDL. Chapter 4 of K. Turner (ed.), Using Formal Description Techniques, 85–124. Wiley.

Hansen, H., H. Virtanen & A. Valmari (2003). Merging State-based and Action-based Verification. Proc. ACSD'03, Third International Conference on Application

of Concurrency to System Design, Guimaraes, Portugal, June 18–20, IEEE, 150–156.

Kaivola, R. & A. Valmari (1992). The Weakest Compositional Semantic Equivalence Preserving Nexttime-less Linear Temporal Logic. Proc. CONCUR '92, Third International Conference on Concurrency Theory. Lecture Notes in Computer Science 630, 207–221. Springer-Verlag.

Karsisto, K. (2003). A New Parallel Composition Operator for Verification Tools. Dr.Tech. Thesis, Tampere University of Technology Publications 420, Tampere, Finland, 114 p.

Milner, R. (1989). Communication and Concurrency. Prentice-Hall, 260 p.

Roscoe, A.W. (1998). The Theory and Practice of Concurrency. Prentice-Hall, 565 p.

Valmari, A. (1995). The Weakest Deadlock-Preserving Congruence. Information Processing Letters 53, 341–346.

Valmari, A. (2000). A Chaos-Free Failures Divergences Semantics with Applications to Verification. Millennial Perspectives in Computer Science, Proc. 1999 Oxford Microsoft Symposium in Honour of Sir Tony Hoare, 365–382. Palgrave.

Valmari, A. & A. Kervinen (2002). Alphabet-Based Synchronisation is Exponentially Cheaper. Proc. CONCUR 2002, Concurrency Theory, 13th International Conference, Brno, Czech Republic, 20–23 August 2002. Lecture Notes in Computer Science 2421, 161–176. Springer-Verlag.

Valmari, A. & M. Tienari (1991). An Improved Failures Equivalence for Finite-State Systems with a Reduction Algorithm. Proc. Protocol Specification, Testing and Verification XI, 3–18. North-Holland.

van Glabbeek, R. (1993). The Linear Time – Branching Time Spectrum II: The Semantics of Sequential Systems with Silent Moves. Proc. CONCUR '93, Fourth International Conference on Concurrency Theory. Lecture Notes in Computer Science 715, 66–81. Springer-Verlag.

van Glabbeek, R. & P. Weijland (1989). Branching Time and Abstraction in Bisimulation Semantics (Extended Abstract). Proc. IFIP International Conference on Information Processing '89, 613–618. North-Holland.

Virtanen, H., H. Hansen, A. Valmari, J.
Nieminen & T. Erkkilä (2004). Tampere Verification Tool. Proc. TACAS 2004, Tools and Algorithms for the Construction and Analysis of Systems, 10th International Conference, Barcelona, Spain, March 29 – April 2, Lecture Notes in Computer Science 2988, 153–157. Springer-Verlag.