
Reasoning about Infinite Computations

Moshe Y. Vardi†
IBM Almaden Research Center

Pierre Wolper‡
Université de Liège

Abstract

We investigate extensions of temporal logic by connectives defined by finite automata on infinite words. We consider three different logics, corresponding to three different types of acceptance conditions (finite, looping and repeating) for the automata. It turns out, however, that these logics all have the same expressive power and that their decision problems are all PSPACE-complete. We also investigate connectives defined by alternating automata and show that they do not increase the expressive power of the logic or the complexity of the decision problem.

1 Introduction

For many years, logics of programs have been tools for reasoning about the input/output behavior of programs. When dealing with concurrent or nonterminating processes (like operating systems) there is, however, a need to reason about infinite computations. Thus, instead of considering the first and last states of finite computations, we need to consider the infinite sequences of states that the program goes through. Logics to reason about such sequences include temporal logic [Pn77] and temporal-logic-based process logics [Ni80, HKP80].

In the propositional case, computations can be viewed as infinite sequences of propositional truth assignments. For reasoning about individual propositional truth assignments, propositional logic is a descriptively complete language, i.e., it can specify any set of propositional truth assignments. There is, however, no a priori robust notion of descriptive completeness for reasoning about sequences of propositional truth assignments.

A preliminary version of this paper, authored by P. Wolper, M.Y. Vardi, and A.P. Sistla, appeared in Proc. 24th IEEE Symp. on Foundations of Computer Science, 1983, pp. 185-194, under the title "Reasoning about Infinite Computation Paths".
† Address: IBM Almaden Research Center K53-802, 650 Harry Rd., San Jose, CA 95120-6099, USA, vardi@almaden.ibm.com
‡ Address: Institut Montefiore, B28, Université de Liège, B-4000 Liège Sart-Tilman, Belgium, pw@montefiore.ulg.ac.be

In [GPSS80] propositional temporal logic (PTL) was shown to be expressively equivalent to the monadic first-order theory of (N, <), the natural numbers with the less-than relation. This was taken as an indication that PTL and the process logics based on it are "descriptively complete". The above assumes that first-order notions are all we need to reason about infinite computations. But the very fundamental notion of regularity of sets of event sequences is not first-order; even the simple assertion that "the proposition p holds at least in every other state on a path" is not expressible in PTL [Wo83]. In fact, PTL and the first-order theory of (N, <) are known to be expressively equivalent to the star-free ω-regular languages [Lad77, Tho79, Tho81]. On the other hand, ω-regular sequences are a natural way of describing concurrent processes [Sh79], [Mi80]; furthermore, the ability to describe ω-regular sequences is crucial to the task of program verification [LPZ85].

There are several different ways to extend the expressive power of PTL. We could add some non-first-order construct, such as least-fixpoint or second-order quantification, but we prefer here to add an explicit mechanism for specifying ω-regular events, as was done in [Wo83]. There, PTL is extended with a temporal connective corresponding to every nondeterministic finite automaton on infinite words [1, 2]. For example, if Σ = {a, b} and A is the automaton accepting all infinite words over Σ having a in every other position, then A is also a binary temporal connective, and the formula A(p, true) is satisfied by the paths where p holds in at least every other state. An important point that was not considered in [Wo83] is that to define ω-regular sequences one needs to impose certain repeating conditions on accepting automata runs. For example, Büchi automata, which define exactly the ω-regular languages, require that some accepting state occur infinitely often in the run [Bu62, McN66].
Using Büchi automata to define temporal connectives gives rise to an extended temporal logic that we call ETL_r. In addition to the notion of repeating acceptance, we also consider two other notions of acceptance. The first notion is finite acceptance: an automaton accepts an infinite word if it accepts some prefix of the word by the standard notion of acceptance for finite words. The second notion is looping acceptance: an automaton accepts an infinite word if it has some infinite run over the word (this is the notion used in [Wo83]). Using automata with finite and looping acceptance conditions to define temporal connectives gives rise to extended temporal logics that we call ETL_f and ETL_l, respectively. We should note that our interest in finite and looping acceptance is not due to their automata-theoretic significance [3], but due to their significance as specification constructs. Finite acceptance can be viewed as describing a liveness property, i.e., "something good will eventually happen", while looping acceptance can be viewed as a safety condition, i.e., "something bad will never happen" [4]. Thus, finite and looping acceptance can be viewed as extensions of the "Eventually" and "Always" constructs of PTL.

The notions of finite and looping acceptance are incomparable and are strictly weaker than the notion of repeating acceptance. For example, the sequence (ab*)^ω can be defined by repeating acceptance but not by finite or looping acceptance. Thus, we could expect the logics ETL_r, ETL_f, and ETL_l to have different expressive powers. One of the main results of this paper is that all these logics are expressively equivalent. They all have the expressive power of ω-regular expressions, which by [Bu62] is the same as that of the monadic second-order theory of (N, <), usually denoted S1S. We also consider the complexity of the decision problem for ETL_f and ETL_l. It turns out that both logics have the same complexity as PTL: polynomial space (cf. [HR83, SC85]). In contrast, the decision problem for S1S is nonelementary [Me75], as is the emptiness problem for regular expressions with complement [MS73] [5].

An important contribution of this paper is to show that temporal logic formulas can be directly compiled into equivalent Büchi automata. To make the construction and its proof of correctness easier and more systematic, we first define variants of Büchi automata that are geared towards recognizing models of temporal formulas and prove that these variants can be converted to Büchi automata. We then use our construction to obtain decision procedures. Our approach is, starting with a formula, to build an equivalent Büchi automaton and then to check that this automaton is nonempty.

[1] To be precise, the formalism used in [Wo83] is that of right-linear context-free grammars over infinite words.
[2] Note that here the use of automata is conceptually different from the use of automata in dynamic logic (cf. [Ha84, HS84, Pr81]). In dynamic logic automata are used to describe flowchart programs, while here automata are used to describe temporal sequences. Thus, dynamic logic automata describe regular sequences of program statements, while temporal logic automata describe regular properties of state sequences.
Note that the construction of Büchi automata from temporal logic formulas [6] is not only useful for obtaining decision procedures for the logic but is also the cornerstone of synthesis and verification methods based on temporal logic [MW84, PR89, VW86b, Wo89]. This construction is also the cornerstone for applications of temporal logic to the verification of probabilistic and real-time programs (cf. [AH90, Va85b]).

Finally, to explore the full power of our technique, we introduce alternating finite automata connectives. These can be exponentially more succinct than nondeterministic automata connectives, and one may expect their introduction to push the complexity of the logic up. We investigate both ATL_f and ATL_l, which are the alternating analogues of ETL_f and ETL_l. Surprisingly, we show that the decision problems for these logics are still PSPACE-complete.

[3] For an extensive study of acceptance conditions for ω-automata see [Ch74, Ka85, Lan69, MY88, Sta87, Tho90, Wa79]. Our three conditions correspond to the first three conditions in [Lan69]. It is known that these three conditions exhaust all the possibilities in the Landweber classification (cf. [Wa79]).
[4] The notions of safety and liveness are due to Lamport [Lam77]. Our notion of liveness here corresponds to the notion of guarantee in [MP89].
[5] In [SVW87] it is shown that the decision problem for ETL_r is also PSPACE-complete. This result requires significantly more complicated automata-theoretic techniques that are beyond the scope of this paper. For other comments on the complexity of ETL_r see Section 3.
[6] The automata-theoretic approach described here can be viewed as a specialization of the automata-theoretic approach to decision problems of dynamic logic described in [VW86a] (see also [ES84, St82]). Note, however, that while the tree automata constructed in [ES84, St82, VW86a] accept only some models of the given formulas, the automata constructed here accept all models of the given formulas.

2 Finite Automata on Infinite Words

In this section we define several classes of finite automata on infinite words and examine their nonemptiness problem. The framework developed here is a specialization of the tree-automata framework developed in [VW86a]. In view of the wide range of applications of temporal logic (cf. [BBP89]), we describe the word-automata framework in detail, in order to make the paper self-contained.

2.1 Definitions

We start by making our notation for infinite words precise. We denote by ω the set of natural numbers {0, 1, 2, …}. We use the notation [i, j] for the interval {i, i+1, …, j} of ω. An infinite word over an alphabet Σ is a function w: ω → Σ. The i-th letter in an infinite word w is thus w(i). We will often denote it by w_i and write w = w_0 w_1 ….

A nondeterministic finite automaton (abbr. NFA) on infinite words is a tuple A = (Σ, S, δ, S_0, F) where: Σ is a finite alphabet of letters, S is a finite set of states, δ: S × Σ → 2^S is the transition function, mapping each state and letter to a set of possible successor states, S_0 ⊆ S is a set of initial states, and F ⊆ S is a set of accepting states. The automaton is deterministic if |S_0| = 1 and |δ(s, a)| ≤ 1 for all s ∈ S and a ∈ Σ.

A finite run of A over an infinite word w = w_0 w_1 … is a finite sequence ρ = s_0 s_1 … s_{n-1}, where s_0 ∈ S_0 and s_{i+1} ∈ δ(s_i, w_i) for all 0 ≤ i < n-1. A run of A over an infinite word w = w_0 w_1 … is an infinite sequence ρ = s_0 s_1 …, where s_0 ∈ S_0 and s_{i+1} ∈ δ(s_i, w_i) for all i ≥ 0.

Depending on the acceptance condition we impose, we get different types of automata. In finite acceptance automata, a finite run ρ = s_0 s_1 … s_{n-1} is accepting if s_{n-1} ∈ F.
In looping acceptance automata, any run ρ = s_0 s_1 … is accepting, i.e., no condition is imposed by the set F. In repeating acceptance automata (Büchi automata [Bu62]), a run ρ = s_0 s_1 … is accepting if there is some accepting state that repeats infinitely often, i.e., for some s ∈ F there are infinitely many i's such that s_i = s. In all three types of automata, an infinite word w is accepted if there is some accepting run over w. The set of infinite words accepted by an automaton A is denoted L(A).

The three different types of automata recognize different classes of languages. It turns out that the class of languages accepted by repeating acceptance automata (L_repeat) strictly contains the classes of languages accepted by looping (L_loop) and finite acceptance (L_finite) automata. These last two classes are incomparable. The languages accepted by Büchi (repeating acceptance) automata are often called the ω-regular languages. By results of Büchi [Bu62] (see also McNaughton [McN66]), this class of languages is closed under union, intersection and complementation, and is equivalent to the class of languages describable in the monadic second-order theory of one successor (S1S) and by ω-regular expressions. The monadic second-order theory of successor is the interpreted formalism which has individual variables ranging over the natural numbers, monadic predicate variables ranging over arbitrary sets of natural numbers, the constant 0, the successor function, boolean connectives, and quantification over both types of variables. ω-regular expressions are expressions of the form ∪_i α_i(β_i)^ω, where the union is finite, α_i and β_i are classical regular expressions, and ω denotes countable repetition.

To establish the relations between L_repeat, L_loop and L_finite, we state the following well-known facts (cf. [Ch74, Ka85, Lan69, MY88, Sta87, Tho90, Wa79]).

Lemma 2.1:
1. The language a^ω over the alphabet Σ = {a, b} is in L_loop but not in L_finite.
2. The language a*b(a ∪ b)^ω over the alphabet Σ = {a, b} is in L_finite but not in L_loop.
3. The language (a*b)^ω over the alphabet Σ = {a, b} is in L_repeat, but not in L_finite or in L_loop.

Corollary 2.2:
1. L_repeat strictly contains L_finite and L_loop.
2. L_finite and L_loop are incomparable.

For the rest of this section, we will only deal with Büchi automata.

2.2 The Nonemptiness Problem for Büchi Automata

An important problem we will have to solve for Büchi automata is the nonemptiness problem, which is, given a Büchi automaton, to determine whether it accepts at least one word. We are interested in the complexity of this problem as a function of the size of the given automaton. (We assume some standard encoding of automata, so the size of an automaton is the length of its encoding.)

Let A = (Σ, S, δ, S_0, F) be an automaton. We say that a state t is reachable from a state s if there are a finite word w = w_1 … w_k in Σ* and a finite sequence s_0, …, s_k of states in S such that s_0 = s, s_k = t, and s_{i+1} ∈ δ(s_i, w_{i+1}) for 0 ≤ i ≤ k-1.

Lemma 2.3: [TB73] A Büchi automaton accepts some word iff there is an accepting state of the automaton that is reachable from some initial state and is reachable from itself.

We can now prove the following:

Theorem 2.4: The nonemptiness problem for Büchi automata is logspace-complete for NLOGSPACE.

Proof: We first prove that the problem is in NLOGSPACE. Given Lemma 2.3, to determine if a Büchi automaton accepts some word, we only need to check if there is some accepting state reachable from some initial state and reachable from itself. To do this, we nondeterministically guess an initial state s and an accepting state r; we then nondeterministically attempt to construct a path from s to r and from r to itself. To construct a path from any state x to any other state y, we proceed as follows:
1. Make x the current state.
2. Choose a transition from the current state and replace the current state by the target of this transition.
3. If the current state is y, stop. Otherwise repeat from step (2).
At each point only three states are remembered. Thus the algorithm requires only logarithmic space. To show NLOGSPACE-hardness, given Lemma 2.3, it is straightforward to construct a reduction from the graph accessibility problem, proved to be NLOGSPACE-complete in [Jo75].

To solve the satisfiability problem for extended temporal logic, we will proceed as follows: build a Büchi automaton accepting the models of the formula and determine if that automaton is nonempty. The automata we will build are exponential in the size of the formula.
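As an aside, the lasso characterization of Lemma 2.3 also yields a simple deterministic polynomial-time check, useful when the automaton is small enough to build explicitly. A sketch (the dictionary encoding of δ and all names are our own, not the paper's):

```python
from collections import deque

def successors(delta, state):
    """All one-step successors of `state`, over any input letter.
    `delta` maps (state, letter) pairs to sets of successor states."""
    succ = set()
    for (s, _letter), targets in delta.items():
        if s == state:
            succ |= set(targets)
    return succ

def reachable(delta, sources):
    """BFS over the transition graph: every state reachable from `sources`."""
    seen = set(sources)
    queue = deque(sources)
    while queue:
        for t in successors(delta, queue.popleft()):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

def buchi_nonempty(delta, initial, accepting):
    """Lemma 2.3: L(A) is nonempty iff some accepting state is reachable
    from an initial state and reachable from itself (a 'lasso')."""
    from_init = reachable(delta, initial)
    for r in accepting & from_init:
        # r lies on a cycle iff r is reachable from its own successors
        if r in reachable(delta, successors(delta, r)):
            return True
    return False
```

The NLOGSPACE algorithm in the proof of Theorem 2.4 avoids materializing the reachable sets, guessing the lasso one state at a time instead.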
However, as the fact that the nonemptiness problem is in NLOGSPACE indicates, it is possible to solve the satisfiability problem using only polynomial space. The argument is that it is not necessary to first build the whole automaton before applying the algorithm for nonemptiness. (A similar argument, though in a somewhat different framework, is used in [HR83, SC85, Wo83].) We now make this argument more precise.

Let Σ be some fixed finite alphabet. A problem P is a subset of Σ*. An instance is an element x of Σ* for which we want to determine membership in P. Assume that we also use Σ to encode Büchi automata and their constituent elements (alphabet, states, …).

Lemma 2.5: Let P be a problem and let f be a polynomial. Assume that with any instance x one can associate the encoding of a Büchi automaton A_x = (Σ_x, S_x, δ_x, S_x^0, F_x) in the following manner:
1. if y ∈ Σ_x or y ∈ S_x, then |y| ≤ f(|x|),
2. given y ∈ Σ*, one can determine whether y ∈ Σ_x using at most f(|x|) space,
3. given y ∈ Σ*, one can determine whether y ∈ S_x using at most f(|x|) space,
4. given y ∈ Σ*, one can determine whether y ∈ S_x^0 using at most f(|x|) space,
5. given y ∈ Σ*, one can determine whether y ∈ F_x using at most f(|x|) space,
6. given u, v, w ∈ Σ*, one can determine whether u ∈ δ_x(v, w) using at most f(|x|) space, and
7. x ∈ P iff A_x accepts some word.
Then there is a polynomial-space algorithm for determining membership in P.

Proof: We use the algorithm described in the proof of Theorem 2.4 to check if A_x is nonempty. Given the assumptions we have made, each of the steps in the algorithm can be executed in polynomial space. Moreover, the states being remembered are also of size polynomial in the size of the problem. So, the whole algorithm requires polynomial space. This establishes that the problem is in NPSPACE and hence in PSPACE.

When constructing Büchi automata from temporal logic formulas, we will often have to take the intersection of several Büchi automata. The following theorem is a special case of Theorem A.1 in [VW86a] and extends a construction of [Ch74] (see also [ES84]).

Theorem 2.6: Let A_0, …, A_{k-1} be Büchi automata. There is a Büchi automaton A with k · |A_0| · … · |A_{k-1}| states such that L(A) = L(A_0) ∩ … ∩ L(A_{k-1}).

Proof: Let A_i = (Σ, S_i, δ_i, S_i^0, F_i). Define A = (Σ, S, δ, S^0, F) as follows: S = S_0 × … × S_{k-1} × {0, …, k-1}, S^0 = S_0^0 × … × S_{k-1}^0 × {0}, F = F_0 × S_1 × … × S_{k-1} × {0}, and (t_0, …, t_{k-1}, j) ∈ δ((s_0, …, s_{k-1}, i), a) iff t_l ∈ δ_l(s_l, a) for 0 ≤ l ≤ k-1, and either s_i ∉ F_i and i = j, or s_i ∈ F_i and j = (i+1) mod k.
The automaton A consists of k copies of the cross product of the automata A_i. Intuitively, one starts by running the first copy of this cross product. One then jumps from copy i to copy (i+1) mod k when a final state of A_i is encountered. The acceptance condition imposes that one goes infinitely often through a final state of A_0 in copy 0. This forces all the components of A to visit their accepting states in a cyclic order. We leave it to the reader to formally prove that L(A) = L(A_0) ∩ … ∩ L(A_{k-1}).
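The counter construction of Theorem 2.6 can be transcribed directly. In the sketch below, each automaton is a plain dictionary with keys 'states', 'delta', 'init' and 'final' (an encoding of our own choosing, not the paper's), and delta maps (state, letter) pairs to successor sets:

```python
from itertools import product

def intersect_buchi(automata, alphabet):
    """Theorem 2.6: k copies of the cross product; jump from copy i to
    copy (i+1) mod k when a final state of A_i is encountered; accepting
    states are final states of A_0 in copy 0."""
    k = len(automata)
    combos = list(product(*[sorted(a["states"]) for a in automata]))
    states = {c + (i,) for c in combos for i in range(k)}
    init = {c + (0,) for c in product(*[sorted(a["init"]) for a in automata])}
    final = {c + (0,) for c in combos if c[0] in automata[0]["final"]}
    delta = {}
    for c in combos:
        for i in range(k):
            # stay in copy i unless the i-th component is final
            j = (i + 1) % k if c[i] in automata[i]["final"] else i
            for letter in alphabet:
                succs = [automata[l]["delta"].get((c[l], letter), set())
                         for l in range(k)]
                delta[(c + (i,), letter)] = {t + (j,) for t in product(*succs)}
    return {"states": states, "delta": delta, "init": init, "final": final}
```

For example, intersecting an automaton for "a occurs infinitely often" with one for "b occurs infinitely often" over Σ = {a, b} yields an 8-state product whose language contains (ab)^ω.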

2.3 Subword Automata

To make the construction of Büchi automata from extended temporal logic formulas easier, we define more specialized classes of automata: subword automata and set-subword automata. They are the specialization to words of the subtree automata and set-subtree automata defined in [VW86b]. For completeness' sake, we give a detailed treatment of this specialization.

Intuitively, a subword automaton checks that, starting at every position in an infinite word, there is a finite word accepted by some automaton. The automata accepting finite words at the various positions differ only by their initial state. The initial state corresponding to a position is determined by the symbol appearing at that position in the infinite word.

Formally, a subword automaton A is a tuple (Σ, S, δ, λ, F), where Σ is the alphabet, S is the state set, δ: S × Σ → 2^S is the transition function, λ: Σ → S is the labeling function, and F ⊆ S is the nonempty set of accepting states. An infinite word w ∈ Σ^ω is accepted by A if the following two conditions hold:

labeling condition: λ(w_{i+1}) ∈ δ(λ(w_i), w_i) for every i ∈ ω;

subword condition: for every i ∈ ω, there exist some j ≥ i and a mapping φ: [i, j] → S such that φ(i) = λ(w_i), φ(j) ∈ F, and for all k, i ≤ k < j, φ(k+1) ∈ δ(φ(k), w_k).

The labeling condition requires the labeling of the word to be compatible with the transition function of A. The subword condition requires that from each position i in the word, there be a subword accepted by A viewed as an automaton on finite words with initial state λ(w_i). Using a construction similar to the flag construction of Choueka [Ch74], a subword automaton, even without the labeling condition, can be converted to a Büchi automaton with at most an exponential increase in size. We show now that because of the labeling condition we can do this conversion with only a quadratic increase in size. Before proving this we need a technical lemma.

Lemma 2.7: Let A = (Σ, S, δ, λ, F) be a subword automaton. Then A accepts a word w: ω → Σ iff
1. λ(w_{i+1}) ∈ δ(λ(w_i), w_i) for every i ∈ ω, and
2. for every i ∈ ω, there exist some j > i and a mapping φ: [i, j] → S such that φ(i) = λ(w_i), φ(j) ∈ F, and for all k, i ≤ k < j, φ(k+1) ∈ δ(φ(k), w_k).

Proof: The only difference between the condition in the lemma and the standard condition of acceptance is the requirement that the interval [i, j] contain at least two points, and hence that the accepted subword w_i … w_{j-1} be nonempty. Thus the "if" direction is trivial. For the "only if" direction, assume that A accepts w. The labeling condition clearly holds, so it remains to show the existence of the "right" subword. Let i ∈ ω. Then there exist some j ≥ i and a mapping φ_i: [i, j] → S that satisfies the subword condition at i. If j > i, then we are done. So let us assume i = j. We know that there exist some k ≥ i+1 and a mapping φ_{i+1}: [i+1, k] → S that satisfies the subword condition at point i+1. We claim that the interval [i, k] satisfies the subword condition at i. Indeed, because of the labeling condition, the mapping φ = φ_i ∪ φ_{i+1} satisfies the required conditions.

Theorem 2.8: Every subword automaton with m states is equivalent to a Büchi automaton with O(m²) states.

Proof: Let A = (Σ, S, δ, λ, F) be a subword automaton. We now define two new transition functions δ_1, δ_2: S × Σ → 2^S:

δ_1(s, a) = δ(s, a) if s = λ(a), and δ_1(s, a) = ∅ otherwise;
δ_2(s, a) = δ(λ(a), a) if s ∈ F, and δ_2(s, a) = δ(s, a) otherwise.

These transition functions let us define two Büchi automata B_1 = (Σ, S, δ_1, S, S) and B_2 = (Σ, S, δ_2, S, F). Basically, B_1 will take care of checking the labeling condition and B_2 of checking the subword condition. Let us show that a word w: ω → Σ is accepted by A iff it is accepted by both B_1 and B_2.

Suppose first that w is accepted by B_1 and B_2. This means there are accepting runs ρ_1 = s_{1,0}, s_{1,1}, … and ρ_2 = s_{2,0}, s_{2,1}, … of B_1 and B_2 (resp.) on w. We verify first that the labeling property holds. Clearly, δ_1(s_{1,i}, w_i) ≠ ∅ for every i ∈ ω. Thus, s_{1,i} = λ(w_i). Consequently, for every i ∈ ω,

λ(w_{i+1}) = s_{1,i+1} ∈ δ_1(s_{1,i}, w_i) = δ(λ(w_i), w_i).

It remains to verify the subword condition. Let i ≥ 0. As the run ρ_2 is accepting, there is some j ≥ i such that s_{2,j} ∈ F. For the same reason, there is a k > j such that s_{2,k} ∈ F. Assume without loss of generality that s_{2,l} ∉ F for all j < l < k. Consider the interval [i, k]. We claim that it satisfies the subword condition at i.

Indeed, a suitable mapping φ: [i, k] → S can be defined in the following way: φ(l) = λ(w_l) for l ∈ [i, j] and φ(l) = s_{2,l} for l ∈ [j+1, k]. Clearly φ(k) ∈ F. For i ≤ l < j we have

φ(l+1) = λ(w_{l+1}) ∈ δ(λ(w_l), w_l) = δ(φ(l), w_l).

For l = j we have

φ(l+1) = s_{2,l+1} ∈ δ_2(s_{2,l}, w_l) = δ(λ(w_l), w_l) = δ(φ(l), w_l),

as s_{2,j} ∈ F. Finally, for j+1 ≤ l < k,

φ(l+1) = s_{2,l+1} ∈ δ_2(s_{2,l}, w_l) = δ(s_{2,l}, w_l) = δ(φ(l), w_l),

as s_{2,l} ∉ F for j+1 ≤ l < k.

We now have to show that if w is accepted by A then it is accepted by both B_1 and B_2. Consider the run ρ_1 = s_{1,0}, s_{1,1}, s_{1,2}, … where s_{1,i} = λ(w_i). By the labeling condition, ρ_1 is an accepting run of B_1 on w. We now define a computation ρ_2 = s_{2,0}, s_{2,1}, s_{2,2}, … of B_2 by defining it on a sequence of increasing intervals [0, j_1], [0, j_2], … such that for each j_i, s_{2,j_i} ∈ F. The first interval is [0, 0] and we define s_{2,0} = s, where s is an arbitrary member of F. Suppose now that ρ_2 is defined on the interval [0, j_n]. By induction, s_{2,j_n} ∈ F. By Lemma 2.7, there exist an interval [j_n, j_{n+1}], j_{n+1} > j_n, and a mapping φ: [j_n, j_{n+1}] → S such that φ(j_n) = λ(w_{j_n}), φ(j_{n+1}) ∈ F, and for all k, j_n ≤ k < j_{n+1}, φ(k+1) ∈ δ(φ(k), w_k). Without loss of generality, we can assume that φ(k) ∉ F for j_n < k < j_{n+1}. For j_n < k ≤ j_{n+1}, we define s_{2,k} = φ(k).

We show now that for every j_n ≤ k < j_{n+1}, s_{2,k+1} ∈ δ_2(s_{2,k}, w_k). Indeed, if k = j_n, then s_{2,k} ∈ F. Consequently,

s_{2,k+1} = φ(k+1) ∈ δ(λ(w_k), w_k) = δ_2(s_{2,k}, w_k).

If j_n < k < j_{n+1}, then s_{2,k} = φ(k) ∉ F. Consequently,

s_{2,k+1} = φ(k+1) ∈ δ(φ(k), w_k) = δ_2(s_{2,k}, w_k).

Clearly, ∪_{n≥1} [0, j_n] = ω. Hence we have defined a run ρ_2 of B_2. Moreover, as for each j_n, s_{2,j_n} ∈ F, the run is accepting. We have shown that w is accepted by both B_1 and B_2. Finally, by Theorem 2.6, we can construct an automaton B such that a word w is accepted by B iff it is accepted by both B_1 and B_2.
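The two transition functions δ_1 and δ_2 from the proof of Theorem 2.8 transcribe directly into code. The sketch below (the dictionary encoding and all names are ours, not the paper's) builds B_1 and B_2 from a subword automaton; intersecting them via the construction of Theorem 2.6 then yields the quadratic-size Büchi automaton:

```python
def subword_to_buchi_pair(sigma, states, delta, label, final):
    """From a subword automaton (Sigma, S, delta, lambda, F), build the
    Buchi automata B1 (labeling condition) and B2 (subword condition) of
    Theorem 2.8; the subword automaton accepts a word iff both do.
    `delta` maps (state, letter) to a set of states; `label` maps each
    letter to a state."""
    delta1, delta2 = {}, {}
    for s in states:
        for a in sigma:
            step = delta.get((s, a), set())
            # delta1(s, a) = delta(s, a) if s = label(a), else empty
            delta1[(s, a)] = set(step) if s == label[a] else set()
            # delta2(s, a) = delta(label(a), a) if s in F, else delta(s, a)
            delta2[(s, a)] = (set(delta.get((label[a], a), set()))
                              if s in final else set(step))
    b1 = {"states": set(states), "delta": delta1,
          "init": set(states), "final": set(states)}
    b2 = {"states": set(states), "delta": delta2,
          "init": set(states), "final": set(final)}
    return b1, b2
```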
It follows that the subword automaton A is equivalent to the Büchi automaton B.

Subword automata are better suited than Büchi automata for the purpose of reducing satisfiability of formulas to nonemptiness of automata. Nevertheless, to facilitate our task in the rest of the paper, we now specialize the notion of subword automata even further. The words that we shall deal with are going to consist of sets of formulas,

and the states of the automata that will accept these words are also going to be sets of formulas. Thus, we consider automata where the alphabet and the set of states are the same set and have the structure of a power set. We will call these automata set-subword automata.

Formally, a set-subword automaton A is a pair (Γ, δ), where Γ is a finite set of basic symbols (in fact these symbols will be just formulas of the logic). The power set 2^Γ serves both as the alphabet Σ and as the state set S. The empty set serves as the single accepting state. We will denote elements of 2^Γ by a, b, … when viewed as letters from the alphabet, and by s, s_1, … when viewed as elements of the state set S. Intuitively, a letter is a set of formulas that are "alleged" to be true, and a state is a set of formulas for which the automaton tries to verify the "allegation". The transition function δ : S × Σ → 2^S is such that:

1. δ(s, a) ≠ ∅ iff s ⊆ a,

2. ∅ ∈ δ(∅, a),

3. if s ⊆ s′, s_1 ⊆ s′_1 and s_1 ∈ δ(s′, a), then s′_1 ∈ δ(s, a),

4. if s′_1 ∈ δ(s_1, a) and s′_2 ∈ δ(s_2, a), then s′_1 ∪ s′_2 ∈ δ(s_1 ∪ s_2, a).

A word w : ω → Σ is accepted by A if for every i ∈ ω and every f ∈ w_i there exist a finite interval [i, j] and a mapping φ : [i, j] → S such that φ(i) = {f}, φ(j) = ∅, and for all i ≤ k < j, φ(k+1) ∈ δ(φ(k), w_k). The acceptance condition requires that for each position i in the word and for each formula f in w_i, the "right" formulas appear in the letters w_{i+1}, w_{i+2}, …. Intuitively, the transitions of the automaton are meant to capture the fact that if a certain formula appears in w_i, then certain formulas must appear in w_{i+1}. The four conditions imposed on the transition function can be explained as follows:

1. The formulas in the state of the automaton are formulas that the automaton is trying to verify. A minimal requirement is that these formulas appear in the letter of the scanned position of the word.
As we will see, this condition is related to the labeling condition defined for subword automata.

2-3. These are what we call monotonicity conditions. A transition of the automaton is a minimal requirement on the formulas in w_{i+1}, given the formulas that the automaton is trying to verify at i. Clearly, if there is nothing to verify at i, then nothing is required at i+1 (condition (2)). Also, the transition is still legal if we try to verify fewer formulas at i or more formulas at i+1 (condition (3)).

4. This is an additivity condition. It says that there is no interaction between different formulas that the automaton is trying to verify at position i. Thus the union of two transitions is a legal transition.

The acceptance condition requires that for each position in the word, if we start the automaton in each of the singleton sets corresponding to the members of the letter of that position, it accepts a finite subword. We will now prove that, given conditions (2), (3) and (4), it is equivalent to require that the automaton accept when started in the state identical to the letter of that position.

Theorem 2.9: Let A = (Γ, δ) be a set-subword automaton, let w : ω → 2^Γ be a word, and let i ∈ ω. The following are equivalent:

1. There exist some j ≥ i and a mapping φ : [i, j] → S such that φ(i) = w_i, φ(j) = ∅, and for all i ≤ k < j, φ(k+1) ∈ δ(φ(k), w_k).

2. For every f ∈ w_i, there exist some j_f ≥ i and a mapping φ_f : [i, j_f] → S such that φ_f(i) = {f}, φ_f(j_f) = ∅, and for all i ≤ k < j_f, φ_f(k+1) ∈ δ(φ_f(k), w_k).

Proof: (1) ⇒ (2). Let f ∈ w_i. We take j_f = j, and define φ_f as follows: φ_f(i) = {f} and φ_f(k) = φ(k) for i < k ≤ j_f. We only have to show that φ_f(k+1) ∈ δ(φ_f(k), w_k) for i ≤ k < j_f. But this follows, by the monotonicity condition (3) in the definition of set-subword automata, since φ_f(i) ⊆ φ(i).

(2) ⇒ (1). If w_i = ∅, take j = i; otherwise take j = max_{f ∈ w_i} {j_f}. For every f ∈ w_i, extend φ_f to [i, j] by defining φ_f(k) = ∅ for j_f < k ≤ j. If w_i = ∅ we define φ(i) = ∅; otherwise we define φ(k) = ⋃_{f ∈ w_i} φ_f(k) for each k ∈ [i, j]. Because j ≥ j_f for each f ∈ w_i, φ(j) = ∅. Furthermore, by conditions (2) and (4) in the definition of set-subword automata, if i ≤ k < j, then φ(k+1) ∈ δ(φ(k), w_k).

We can now prove that set-subword automata can be converted to subword automata without any increase in size. Thus, by Theorem 2.8, a set-subword automaton can be converted to an equivalent Büchi automaton with only a quadratic increase in size.
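To make the four conditions concrete, the following Python sketch (our own toy encoding, not from the paper) generates transitions additively from a per-formula successor relation `step(f, a)`, so that conditions (1)-(4) hold by construction; the obligation symbol "Fp" and all helper names are hypothetical.

```python
import itertools

def in_delta(s, a, s_next, step):
    """Membership test s_next ∈ δ(s, a) for a set-subword automaton whose
    transitions are generated from a per-formula successor relation
    step(f, a), a list of alternative obligation sets. By construction:
      (1) δ(s, a) ≠ ∅ iff s ⊆ a (checked first);
      (2) ∅ ∈ δ(∅, a): with nothing to verify, nothing is required;
      (3) shrinking s or enlarging s_next preserves membership;
      (4) choices for different formulas combine, so unions of
          transitions are transitions."""
    if not s <= a:                      # condition (1)
        return False
    if not s:                           # condition (2)
        return True
    # pick one alternative per formula; their union must fit inside s_next
    return any(frozenset().union(*pick) <= s_next
               for pick in itertools.product(*(step(f, a) for f in s)))

# Hypothetical obligations: the atom "p" is discharged on the spot, while
# the eventuality "Fp" is discharged once the letter contains "p" and is
# otherwise carried over to the next position.
def step(f, a):
    if f == "p":
        return [frozenset()]
    return [frozenset()] if "p" in a else [frozenset({"Fp"})]

w = [frozenset({"Fp"}), frozenset({"Fp"}), frozenset({"Fp", "p"})]

# A mapping discharging "Fp" from position 0: {Fp}, {Fp}, {Fp}, ∅.
phi = [frozenset({"Fp"}), frozenset({"Fp"}), frozenset({"Fp"}), frozenset()]
assert all(in_delta(phi[k], w[k], phi[k + 1], step) for k in range(3))

# Starting from the whole letter w_2 also succeeds, as Theorem 2.9 predicts.
assert in_delta(frozenset({"Fp", "p"}), w[2], frozenset(), step)
```

The additivity condition is what makes the membership test decomposable formula by formula, which is exactly the mechanism behind the (2) ⇒ (1) direction of Theorem 2.9.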
Theorem 2.10: The set-subword automaton A = (Γ, δ) is equivalent to the subword automaton A′ = (2^Γ, 2^Γ, δ, γ, {∅}), where γ is the identity function.

Proof: By Theorem 2.9, it is immediate that if a word w is accepted by the automaton A′, then it is also accepted by A. Also by Theorem 2.9, if the word w is accepted by the set-subword automaton A, the subword condition of the subword automaton A′ is satisfied. It remains to show that γ satisfies the labeling condition. In other words, since γ is the identity mapping, we have to show that for every i ∈ ω, w_{i+1} ∈ δ(w_i, w_i). Since A accepts w, there exist some j ≥ i and a mapping φ : [i, j] → S such that φ(i) = w_i,

φ(j) = ∅ and for all i ≤ k < j, φ(k+1) ∈ δ(φ(k), w_k). If j = i, then w_i = ∅, and by the monotonicity conditions we have w_{i+1} ∈ δ(w_i, w_i). Otherwise, i+1 ∈ [i, j], so φ(i+1) ∈ δ(w_i, w_i). Consider now i+1. If φ(i+1) = ∅, then clearly φ(i+1) ⊆ w_{i+1}. Otherwise, δ(φ(i+1), w_{i+1}) ≠ ∅, so, by condition (1) in the definition of set-subword automata, φ(i+1) ⊆ w_{i+1}. Thus, by the monotonicity condition, w_{i+1} ∈ δ(w_i, w_i).

3 Temporal Logic with Automata Connectives

3.1 Definition

We consider propositional temporal logic where the temporal operators are defined by finite automata, similarly to the extended temporal logic (ETL) of [Wo83]. More precisely, we consider formulas built from a set Prop of atomic propositions. The set of formulas is defined inductively as follows:

- Every proposition p ∈ Prop is a formula.
- If f_1 and f_2 are formulas, then ¬f_1 and f_1 ∧ f_2 are formulas.
- For every nondeterministic finite automaton A = (Σ, S, ρ, S_0, F), where Σ is the input alphabet {a_1, …, a_n}, S is the set of states, ρ : S × Σ → 2^S is the transition relation, S_0 ⊆ S is the set of initial states, and F ⊆ S is a set of accepting states, if f_1, …, f_n are formulas (n = |Σ|), then A(f_1, …, f_n) is a formula. We call A an automaton connective.

We assume some standard encoding for formulas. The length of a formula is the length of its encoding. This of course also includes the encoding of the automata connectives.

A structure for our logic is an infinite sequence of truth assignments, i.e., a function π : ω → 2^Prop that assigns truth values to the atomic propositions in each state. Note that such a function is an infinite word over the alphabet 2^Prop. We will thus use the terms word and sequence interchangeably. We now define satisfaction of formulas and runs of formulas A(f_1, …, f_n) over sequences by mutual induction. Satisfaction of a formula f at a position i in a structure π is denoted π, i ⊨ f. We say that a sequence π satisfies f (denoted π ⊨ f) if π, 0 ⊨ f.
- π, i ⊨ p iff p ∈ π(i), for an atomic proposition p.
- π, i ⊨ f_1 ∧ f_2 iff π, i ⊨ f_1 and π, i ⊨ f_2.
- π, i ⊨ ¬f iff not π, i ⊨ f.
- π, i ⊨ A(f_1, …, f_n), where A = (Σ, S, ρ, S_0, F), iff there is an accepting run σ = s_0, s_1, … of A(f_1, …, f_n) over π, starting at i.

A run of a formula A(f_1, …, f_n), where A = (Σ, S, ρ, S_0, F), over a structure π, starting at a point i, is a finite or infinite sequence σ = s_0, s_1, … of states from S, where s_0 ∈ S_0 and for all k, 0 ≤ k < |σ|, there is some a_j ∈ Σ such that π, i+k ⊨ f_j and s_{k+1} ∈ ρ(s_k, a_j). Depending on how we define accepting runs, we get three different versions of the logic:

- ETL_f: A run σ is accepting iff some state s ∈ F occurs in σ (finite acceptance).
- ETL_l: A run σ is accepting iff it is infinite (looping acceptance).
- ETL_r: A run σ is accepting iff some state s ∈ F occurs infinitely often in σ (repeating or Büchi acceptance).

Every formula defines a set of sequences, namely, the set of sequences that satisfy it. We will say that a formula is satisfiable if this set is nonempty. The satisfiability problem is to determine, given a formula f, whether f is satisfiable. We are also interested in the expressive power of the logics we have defined. Our yardstick for measuring this power is the ability of the logics to define sets of sequences. Note that, by Corollary 2.2, ETL_r is at least as expressive as ETL_f and ETL_l. We cannot, however, use Corollary 2.2 to infer that ETL_r is more expressive than ETL_f and ETL_l. Similarly, Corollary 2.2 does not give any information about the relative expressive power of ETL_f and ETL_l.

Example 3.1: Consider the automaton A = (Σ, S, ρ, S_0, F), where Σ = {a, b}, S = {s_0, s_1}, ρ(s_0, a) = {s_0}, ρ(s_0, b) = {s_1}, ρ(s_1, a) = ρ(s_1, b) = ∅, S_0 = {s_0} and F = {s_1}. Under finite acceptance, it accepts the language a*b. It thus defines an ETL_f connective such that A(f_1, f_2) is true of a sequence iff f_1 is true until f_2 is true. Thus A(f_1, f_2) is equivalent to the "Until" connective of [GPSS80]. It is indeed not hard to see that all the extended temporal logics (ETL_f, ETL_l, and ETL_r) are at least as expressive as PTL.
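To make finite acceptance concrete, here is a small Python sketch of the automaton of Example 3.1; encoding the argument formulas as the strings "f1"/"f2" and the helper name `finitely_accepts` are our own illustrative choices, and only a finite prefix of the (in principle infinite) sequence is examined.

```python
# The Until connective of Example 3.1 as an ETL_f automaton connective:
# Σ = {a, b}, S = {s0, s1}, ρ(s0,a)={s0}, ρ(s0,b)={s1}, S0={s0}, F={s1}.
RHO = {("s0", "a"): {"s0"}, ("s0", "b"): {"s1"},
       ("s1", "a"): set(), ("s1", "b"): set()}
S0, F = {"s0"}, {"s1"}

def finitely_accepts(truth, i):
    """Is A(f1, f2) true at position i under finite acceptance, judging
    from the finite prefix `truth`?  truth[k] is the set of argument
    formulas true at position k (e.g. {"f1"} or {"f1", "f2"});
    letter a stands for f1, letter b for f2."""
    frontier = set(S0)                  # subset construction over runs
    for k in range(i, len(truth)):
        if frontier & F:                # some run already visited F
            return True
        nxt = set()
        for s in frontier:
            if "f1" in truth[k]:
                nxt |= RHO[(s, "a")]
            if "f2" in truth[k]:
                nxt |= RHO[(s, "b")]
        frontier = nxt
    return bool(frontier & F)

# f1 holds at positions 0-2 and f2 first holds at position 3: f1 Until f2.
assert finitely_accepts([{"f1"}, {"f1"}, {"f1"}, {"f2"}], 0)
assert not finitely_accepts([{"f1"}, {"f1"}], 0)   # f2 never holds
```

Tracking a frontier of reachable states rather than a single run is a standard subset-construction device; it is the same idea the paper exploits when translating formulas to automata.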
Example 3.2: Consider the automaton A = (Σ, S, ρ, S_0, F), where Σ = {a, b}, S = {s_0, s_1}, ρ(s_0, a) = {s_1}, ρ(s_0, b) = ∅, ρ(s_1, a) = ∅, ρ(s_1, b) = {s_0}, S_0 = {s_0} and F = ∅. Under looping acceptance, it accepts only one word: w = ababababab…. It thus defines an ETL_l connective such that A(f_1, f_2) is true of a sequence iff f_1 is true in every even state and f_2 is true in every odd state of that sequence. It is shown in [Wo83] that this property is not expressible in PTL.

3.2 Translations to Automata and Decision Procedure

As we have pointed out, the structures over which ETL is interpreted can be viewed as infinite words over the alphabet 2^Prop. It is not hard to show that our logics are translatable into S1S, and hence, by [Bu62] and [McN66], they define ω-regular sets of

words. That is, given a formula in one of the logics ETL_f, ETL_l, or ETL_r, one can build a Büchi automaton that accepts exactly the words satisfying the formula. Nevertheless, since negation in front of automata connectives causes an exponential blow-up, we might have expected the complexity of the translation to be nonelementary, as is the translation from S1S to Büchi automata [Bu62]. Not so; for each of the logics we have defined, there is an exponential translation to Büchi automata. This translation also yields a PSPACE decision procedure for the satisfiability problem. In this section, we will give the translation for ETL_f and ETL_l. The translation for ETL_r is given in [SVW87]. We start with the translation for ETL_f and then outline the differences that occur when dealing with ETL_l.

We first need to define the notion of the closure of an ETL_f formula g, denoted cl(g). It is similar in nature to the closure defined for PDL in [FL79]. From now on we identify a formula ¬¬g′ with g′. Given an automaton A = (Σ, S, ρ, S_0, F), for each s ∈ S we define A_s to be the automaton (Σ, S, ρ, {s}, F). The closure cl(g) of an ETL_f formula g is then defined as follows:

- g ∈ cl(g).
- g_1 ∧ g_2 ∈ cl(g) → g_1, g_2 ∈ cl(g).
- ¬g_1 ∈ cl(g) → g_1 ∈ cl(g).
- g_1 ∈ cl(g) → ¬g_1 ∈ cl(g).
- A(g_1, …, g_n) ∈ cl(g) → g_1, …, g_n ∈ cl(g).
- A(g_1, …, g_n) ∈ cl(g) → A_s(g_1, …, g_n) ∈ cl(g), for all s ∈ S.

Intuitively, the closure of g consists of all subformulas of g and their negations, assuming that A_s(g_1, …, g_n) is considered to be a subformula of A(g_1, …, g_n). Note that the cardinality of cl(g) can easily be seen to be at most 2l, where l is the length of g. A structure π : ω → 2^Prop for an ETL_f formula g can be extended to a sequence τ : ω → 2^{cl(g)} in a natural way: with each point i ∈ ω, we associate the formulas in cl(g) that are satisfied at that point. Sequences over 2^{cl(g)} corresponding to models of a formula satisfy some special properties.
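The closure can be computed by a straightforward fixpoint. The sketch below uses a toy tuple encoding of formulas (our own choice, not the paper's standard encoding): an automaton connective carries a name, its state tuple, a current start state, and its argument formulas, so the A_s variants can be generated as required.

```python
def closure(g):
    """cl(g) for a toy tuple encoding of formulas:
    ("prop", p), ("not", f), ("and", f1, f2), or
    ("aut", name, states, s, args) standing for A_s(args), where `states`
    is the state tuple of automaton `name`.  ¬¬h is identified with h,
    as in the text."""
    def neg(f):
        return f[1] if f[0] == "not" else ("not", f)
    seen, todo = set(), [g]
    while todo:
        f = todo.pop()
        if f in seen:
            continue
        seen.add(f)
        todo.append(neg(f))              # g1 ∈ cl(g) → ¬g1 ∈ cl(g)
        if f[0] == "not":
            todo.append(f[1])
        elif f[0] == "and":
            todo += [f[1], f[2]]
        elif f[0] == "aut":
            _, name, states, _, args = f
            todo += list(args)           # arguments are subformulas
            todo += [("aut", name, states, s, args) for s in states]
    return seen

# the Until automaton of Example 3.1 applied to atoms p and q
until = ("aut", "U", ("s0", "s1"), "s0", (("prop", "p"), ("prop", "q")))
c = closure(("not", until))
assert until in c and ("not", until) in c
assert ("aut", "U", ("s0", "s1"), "s1", (("prop", "p"), ("prop", "q"))) in c
assert ("prop", "p") in c and ("not", ("prop", "p")) in c
```

Since each formula contributes itself, its negation, its immediate subformulas, and its A_s variants, the fixpoint terminates and stays within the 2l bound noted above.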
A Hintikka sequence for an ETL_f formula g is a sequence τ : ω → 2^{cl(g)} that satisfies conditions 1-5 below. To state these conditions, we use a mapping that, for an automaton connective A, associates states of A with elements of 2^{cl(g)}. Precisely, given A(g_1, …, g_n) ∈ cl(g) with A = (Σ, S, ρ, S_0, F), we define a mapping ρ_A : 2^{cl(g)} → 2^S such that

ρ_A(a) = { s ∈ ρ(s_0, a_l) : s_0 ∈ S_0, a_l ∈ Σ, and g_l ∈ a }.

In the following conditions, whenever A(g_1, …, g_n) ∈ cl(g) is mentioned, we assume that A = (Σ, S, ρ, S_0, F). The conditions are then:

1. g ∈ τ_0, and for all i ∈ ω:

2. h ∈ τ_i iff ¬h ∉ τ_i,

3. h ∧ h′ ∈ τ_i iff h ∈ τ_i and h′ ∈ τ_i,

4. if A(g_1, …, g_n) ∈ τ_i, then either S_0 ∩ F ≠ ∅ or there exist some j > i and a mapping φ : [i, j] → 2^{cl(g)} such that:

   - φ(k) ⊆ τ_k for i ≤ k ≤ j,
   - A_{s_0}(g_1, …, g_n) ∈ φ(i) for some s_0 ∈ S_0,
   - φ(j) = ∅,
   - for all k, i ≤ k < j, if A_s(g_1, …, g_n) ∈ φ(k), then either s ∈ F, or there is some t ∈ ρ_{A_s}(τ_k) such that A_t(g_1, …, g_n) ∈ φ(k+1),

5. if A(g_1, …, g_n) ∉ τ_i, then S_0 ∩ F = ∅ and A_s(g_1, …, g_n) ∉ τ_{i+1} for all s ∈ ρ_A(τ_i).

Hintikka conditions 2 and 3 are intended to capture the semantics of the propositional connectives, while Hintikka conditions 4 and 5 are intended to capture the semantics of the automata connectives. Note that for each propositional connective we need one condition using a double implication ("iff"), while we need two conditions for automata connectives due to the use of a single implication ("if … then"). The reason for doing this is that Hintikka condition 5 is weaker than the converse of Hintikka condition 4. This makes it easier to construct an automaton that checks these conditions.

Proposition 3.3: An ETL_f formula g has a model iff it has a Hintikka sequence.

Proof: Only if: Let g be an ETL_f formula and let π be a model of g. We define a Hintikka sequence τ for g as follows: for all i ≥ 0, τ_i = {h ∈ cl(g) : π, i ⊨ h}. We now have to show that τ is indeed a Hintikka sequence. By the definition of a model, π, 0 ⊨ g; this implies Hintikka condition 1. That Hintikka conditions 2, 3, 4, and 5 hold follows immediately from the semantic definition of ETL_f.

If: Let g be an ETL_f formula and let τ be a Hintikka sequence for g. Consider the structure π such that π(i) = {p ∈ Prop : p ∈ τ_i}. We now show that π, 0 ⊨ g. For this, we show by induction that for all g′ ∈ cl(g) and i ∈ ω, we have that g′ ∈ τ_i iff π, i ⊨ g′. For the base case (g′ ∈ Prop), this is immediate by construction.
The inductive step for formulas of the form ¬h and h ∧ h′ follows directly from Hintikka conditions

2 and 3, respectively. It remains to prove the inductive step for formulas of the form A(g_1, …, g_n).

Suppose first that A(g_1, …, g_n) ∈ τ_i. By Hintikka condition 4 and the inductive hypothesis, there is a finitely accepting run of A(g_1, …, g_n) over π starting at i, so π, i ⊨ A(g_1, …, g_n). Suppose now that A(g_1, …, g_n) ∉ τ_i but π, i ⊨ A(g_1, …, g_n). Then there is a finitely accepting run of A(g_1, …, g_n) over π starting at i. That is, there are finite sequences s_0, …, s_k and j_0, …, j_k, k ≥ 0, such that s_0 ∈ S_0, s_k ∈ F, and if 0 ≤ l < k, then π, i+l ⊨ g_{j_l} and s_{l+1} ∈ ρ(s_l, a_{j_l}). By Hintikka condition 5 and by induction on k it follows that A_{s_l}(g_1, …, g_n) ∉ τ_{i+l} for 0 ≤ l ≤ k. In particular, A_{s_k}(g_1, …, g_n) ∉ τ_{i+k}. But s_k ∈ F, so Hintikka condition 5 is violated.

We now build a Büchi automaton over the alphabet 2^{cl(g)} that accepts precisely the Hintikka sequences for g. We do that by building two automata, A_L and A_E, such that L(A_L) ∩ L(A_E) is the set of Hintikka sequences for g. The first automaton A_L, called the local automaton, checks the sequence locally, i.e., it checks Hintikka conditions 1-3 and 5. This automaton is a Büchi automaton. The second automaton A_E, called the eventuality automaton, is a set-subword automaton that checks Hintikka condition 4. This automaton ensures that for all eventualities (i.e., formulas of the form A(g_1, …, g_n)), there is some finite word for which Hintikka condition 4 is satisfied. Finally, we convert the eventuality automaton to a Büchi automaton and combine it with the local automaton.

The Local Automaton

The local automaton is A_L = (2^{cl(g)}, 2^{cl(g)}, δ_L, N_g, 2^{cl(g)}). The state set is the collection of all sets of formulas in cl(g). For the transition function δ_L, we have that s′ ∈ δ_L(s, a) iff a = s and:

- h ∈ s iff ¬h ∉ s,
- h ∧ h′ ∈ s iff h ∈ s and h′ ∈ s,
- if ¬A(g_1, …, g_n) ∈ s, then S_0 ∩ F = ∅, and for all t ∈ ρ_A(a) we have that ¬A_t(g_1, …, g_n) ∈ s′.

The set of starting states N_g consists of all sets s such that g ∈ s. Clearly, A_L accepts precisely the sequences that satisfy Hintikka conditions 1-3 and 5.

The Eventuality Automaton

The eventuality automaton is a set-subword automaton A_E = (cl(g), δ_E). For the transition function δ_E, we have that s′ ∈ δ_E(s, a) iff:

- s ⊆ a, and
- if A(g_1, …, g_n) ∈ s, then either S_0 ∩ F ≠ ∅ or there is a state t ∈ ρ_A(a) such that A_t(g_1, …, g_n) ∈ s′.
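The eventuality transition δ_E can be sketched concretely. The Python fragment below specializes it to the single Until connective of Example 3.1; representing A_s(p, q) as the tuple ("aut", s) and the helper names are our own assumptions, not the paper's notation.

```python
# A sketch of the ETL_f eventuality transition δ_E, specialized to the
# Until automaton of Example 3.1 (letters a↔f1="p", b↔f2="q"); formulas
# are the strings "p", "q" and ("aut", s) standing for A_s(p, q).
RHO = {("s0", "a"): {"s0"}, ("s0", "b"): {"s1"},
       ("s1", "a"): set(), ("s1", "b"): set()}
ARGS = {"a": "p", "b": "q"}           # letter a_l ↔ argument formula g_l
F = {"s1"}

def rho_conn(start_states, letter_set):
    """ρ_A(a) = { s ∈ ρ(s0, a_l) : s0 ∈ start_states, g_l ∈ a }"""
    return {t for s0 in start_states for l, g in ARGS.items()
            if g in letter_set for t in RHO[(s0, l)]}

def in_delta_E(s, a, s_next):
    """s_next ∈ δ_E(s, a): s ⊆ a, and every automaton formula in s either
    is already accepting (its start set meets F) or passes an obligation
    on to s_next."""
    if not s <= a:
        return False
    for f in s:
        if isinstance(f, tuple) and f[0] == "aut":
            starts = {f[1]}
            if starts & F:
                continue              # S0 ∩ F ≠ ∅: nothing is owed
            if not any(("aut", t) in s_next for t in rho_conn(starts, a)):
                return False
    return True

# Verifying A(p, q) while p holds keeps the obligation at A_{s0}; once q
# holds it moves to the accepting A_{s1}, after which the state empties.
a0 = frozenset({("aut", "s0"), "p"})
assert in_delta_E(frozenset({("aut", "s0")}), a0, frozenset({("aut", "s0")}))
a1 = frozenset({("aut", "s0"), "q"})
assert in_delta_E(frozenset({("aut", "s0")}), a1, frozenset({("aut", "s1")}))
assert in_delta_E(frozenset({("aut", "s1")}), frozenset({("aut", "s1")}),
                  frozenset())
```

Because obligations are checked formula by formula, the additivity and monotonicity conditions of set-subword automata hold for this δ_E, as the text asserts next.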

It is immediate to check that conditions (1)-(4) of the definition of set-subword automata are satisfied by A_E. Furthermore, A_E accepts precisely the sequences that satisfy Hintikka condition 4. We now have:

Proposition 3.4: Let g be an ETL_f formula and let τ : ω → 2^{cl(g)} be a sequence; then τ is a Hintikka sequence for g iff τ ∈ L(A_L) ∩ L(A_E).

By Theorems 2.8 and 2.10, we can build a Büchi automaton that accepts L(A_E). This automaton will have 2^{2|cl(g)|} states. Then, using Theorem 2.6, we can build a Büchi automaton accepting L(A_L) ∩ L(A_E). This automaton will have 2^{3|cl(g)|+1} states.^7 The automaton we have constructed accepts words over 2^{cl(g)}. However, the models of g are defined by words over 2^Prop. So, the last step of our construction is to take the projection of our automaton on 2^Prop. This is done by mapping each element b ∈ 2^{cl(g)} to the element a ∈ 2^Prop such that b ∩ Prop = a. Equivalently, the automaton runs over a word over 2^Prop, guesses a corresponding word over 2^{cl(g)}, and verifies that the latter is a Hintikka sequence for g. To summarize, we have the following:

Theorem 3.5: Given an ETL_f formula g built from a set of atomic propositions Prop, one can construct a Büchi automaton A_g (over the alphabet 2^Prop), whose size is 2^{O(|g|)}, that accepts exactly the sequences satisfying g.

We can now give an algorithm and complexity bounds for the satisfiability problem for ETL_f. To test satisfiability of an ETL_f formula g, it suffices to test whether L(A_g) ≠ ∅. By Theorem 2.4, this can be done in nondeterministic logarithmic space in the size of A_g. Moreover, it is not hard to see that the construction of A_g satisfies the conditions of Lemma 2.5. Combining this with the fact that satisfiability for propositional temporal logic (a restriction of ETL_f) is PSPACE-hard [SC85], we have:

Theorem 3.6: The satisfiability problem for ETL_f is logspace complete for PSPACE.

Let us now consider ETL_l.
The construction proceeds similarly to the one for ETL_f. The first difference is that Hintikka conditions 4 and 5 are replaced by the following:

4′. if A(g_1, …, g_n) ∈ τ_i, then there is some s ∈ ρ_A(τ_i) such that A_s(g_1, …, g_n) ∈ τ_{i+1}.

5′. if ¬A(g_1, …, g_n) ∈ τ_i, then there exist some j ≥ i and a mapping φ : [i, j] → 2^{cl(g)} such that:

   - φ(k) ⊆ τ_k for i ≤ k ≤ j,

^7 This construction contains some redundancy, since the work done by the component that checks for the labeling condition (from the proof of Theorem 2.8) is subsumed by the work done by the local automaton. By eliminating this redundancy we can build an equivalent automaton with 2^{2|cl(g)|} states.

   - ¬A_{s_0}(g_1, …, g_n) ∈ φ(i) for all s_0 ∈ S_0,
   - φ(j) = ∅,
   - for all k, i ≤ k < j, if ¬A_s(g_1, …, g_n) ∈ φ(k), then ¬A_t(g_1, …, g_n) ∈ φ(k+1) for all t ∈ ρ_{A_s}(τ_k).

The analogue of Proposition 3.3 holds:

Theorem 3.7: An ETL_l formula g has a model iff it has a Hintikka sequence.

Proof: As in Proposition 3.3, one direction is immediate. Let g be an ETL_l formula and let τ be a Hintikka sequence for g. Consider the structure π such that π(i) = {p ∈ Prop : p ∈ τ_i}. We show by induction that for all g′ ∈ cl(g) and i ∈ ω, we have that g′ ∈ τ_i iff π, i ⊨ g′. We show the inductive step that corresponds to automata connectives.

Suppose first that A(g_1, …, g_n) ∈ τ_i. By Hintikka condition 4′, there are infinite sequences s_0, s_1, … and j_0, j_1, … such that s_0 ∈ S_0 and, if l ≥ 0, then s_{l+1} ∈ ρ(s_l, a_{j_l}) and g_{j_l} ∈ τ_{i+l}. By the induction hypothesis, π, i+l ⊨ g_{j_l} for all l ≥ 0, so π, i ⊨ A(g_1, …, g_n).

Suppose now that A(g_1, …, g_n) ∉ τ_i. Then, by Hintikka condition 2, we have that ¬A(g_1, …, g_n) ∈ τ_i. If π, i ⊨ A(g_1, …, g_n), there is an infinite run σ = s_0, s_1, … of A(g_1, …, g_n) over π starting at i. More explicitly, we have that s_0 ∈ S_0 and for all k ≥ 0 there is some a_j ∈ Σ such that π, i+k ⊨ g_j and s_{k+1} ∈ ρ(s_k, a_j). By the induction hypothesis, if π, i+k ⊨ g_j, then g_j ∈ τ_{i+k}. But then, Hintikka condition 5′ cannot hold.

We again construct an automaton to recognize Hintikka sequences in two parts: the local automaton and the eventuality automaton. This time the local automaton deals with Hintikka conditions 1-3 and 4′, and the eventuality automaton deals with condition 5′. The local and eventuality automata differ from those for ETL_f only in their transition functions. They are the following:

The Local Automaton

We have that s′ ∈ δ_L(s, a) iff a = s and:

- h ∈ s iff ¬h ∉ s,
- h ∧ h′ ∈ s iff h ∈ s and h′ ∈ s,
- if A(g_1, …, g_n) ∈ s, then for some t ∈ ρ_A(s) we have that A_t(g_1, …, g_n) ∈ s′.
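The third clause, which distinguishes the ETL_l local automaton from the ETL_f one (an existential obligation for positive automaton formulas rather than a universal one for negated formulas), can be sketched as follows, specialized to the looping automaton of Example 3.2; the tuple encoding and helper names are our own.

```python
# A sketch of the third clause of the ETL_l local transition δ_L,
# specialized to Example 3.2: ρ(s0,a)={s1}, ρ(s1,b)={s0}, letters
# a↔f1, b↔f2; ("aut", s) stands for A_s(f1, f2).
RHO = {("s0", "a"): {"s1"}, ("s0", "b"): set(),
       ("s1", "a"): set(), ("s1", "b"): {"s0"}}
ARGS = {"a": "f1", "b": "f2"}

def rho_conn(start, letter_set):
    """ρ_{A_start}(a) for a single start state and letter a = letter_set"""
    return {t for l, g in ARGS.items() if g in letter_set
            for t in RHO[(start, l)]}

def clause_4prime_ok(s, s_next):
    """For every automaton formula ("aut", st) in s there must be some
    t ∈ ρ_{A_st}(s) with ("aut", t) ∈ s_next (recall that a = s in δ_L)."""
    return all(any(("aut", t) in s_next for t in rho_conn(st, s))
               for (_, st) in (f for f in s if isinstance(f, tuple)))

# A_{s0} demands f1 now and A_{s1} next; A_{s1} demands f2 now and A_{s0}
# next, so the connective enforces the alternation of f1 and f2.
assert clause_4prime_ok({("aut", "s0"), "f1"}, {("aut", "s1"), "f2"})
assert not clause_4prime_ok({("aut", "s0"), "f2"}, {("aut", "s1"), "f1"})
```

Under looping acceptance no F-visit is ever required, so this single local clause already propagates the obligation forever; it is the negated formulas whose finite refutation intervals the eventuality automaton must check.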
The Eventuality Automaton

We have that s′ ∈ δ_E(s, a) iff:

- s ⊆ a, and

- if ¬A(g_1, …, g_n) ∈ s, then for all t ∈ ρ_A(a) we have that ¬A_t(g_1, …, g_n) ∈ s′.

We now have the analogues of Proposition 3.4 and Theorems 3.5 and 3.6:

Proposition 3.8: Let g be an ETL_l formula and let τ : ω → 2^{cl(g)} be a sequence; then τ is a Hintikka sequence for g iff τ ∈ L(A_L) ∩ L(A_E).

Theorem 3.9: Given an ETL_l formula g built from a set of atomic propositions Prop, one can construct a Büchi automaton A_g (over the alphabet 2^Prop), whose size is 2^{O(|g|)}, that accepts exactly the sequences satisfying g.

Theorem 3.10: The satisfiability problem for ETL_l is logspace complete for PSPACE.

The construction described above simultaneously takes care of running automata in parallel, when we have an automaton connective nested within another automaton connective, and of complementing automata, when we have a negated automaton connective. Thus, the construction can be viewed as combining the classical subset construction of [RS59] and Choueka's "flag construction" of [Ch74]. This should be contrasted with the treatment of ETL_r in [SVW87], where a special construction is needed to complement Büchi automata. As a result, the size of the automaton A_g constructed in [SVW87] for an ETL_r formula g is 2^{O(|g|^2)}. Using results by Safra [Sa88], this can be improved to 2^{O(|g| log |g|)}, which is provably optimal. Thus, while the construction given here for ETL_f and ETL_l, as well as the construction given in [SVW87] for ETL_r, are exponential, the exponent for ETL_f and ETL_l is linear, while it is nonlinear for ETL_r.

It is interesting to compare our technique here to Pratt's model construction technique [Pr79]. There one starts by building a maximal model, and then one eliminates states whose eventualities are not satisfied. Our local automata correspond to those maximal models. However, instead of eliminating states, we combine the local automata with the eventuality automata checking the satisfaction of eventualities.
This construction always yields automata whose size is exponential in the size of the formula. We could also construct our automata using the tableau technique of [Pr80]. This technique can sometimes be more efficient than the maximal model technique. A major feature of our framework here is the use of set-subword automata to check for eventualities. A direct construction of a Büchi automaton to check for eventualities would have to essentially use the constructions of Theorems 2.8 and 2.9. To appreciate the simplicity of our construction using set-subword automata, the reader is encouraged to try to directly construct a Büchi automaton that checks Hintikka condition 4 for ETL_f or Hintikka condition 5′ for ETL_l. There are, however, other possible approaches to the translation of formulas to automata. Streett [St90] suggested using so-called formula-checking automata. His approach eliminates the need for the distinction between the local automaton and the eventuality automaton. Another approach, using weak alternating automata, is described in [MSS88]. In that approach not only there