


# An Automata-Theoretic Approach to Linear Temporal Logic

Moshe Y. Vardi
Rice University, Department of Computer Science
P.O. Box 1892, Houston, TX 77251-1892, U.S.A.
Email: vardi@cs.rice.edu
URL: http://www.cs.rice.edu/~vardi

**Abstract.** The automata-theoretic approach to linear temporal logic uses the theory of automata as a unifying paradigm for program specification, verification, and synthesis. Both programs and specifications are in essence descriptions of computations. These computations can be viewed as words over some alphabet. Thus, programs and specifications can be viewed as descriptions of languages over some alphabet. The automata-theoretic perspective considers the relationships between programs and their specifications as relationships between languages. By translating programs and specifications to automata, questions about programs and their specifications can be reduced to questions about automata. More specifically, questions such as satisfiability of specifications and correctness of programs with respect to their specifications can be reduced to questions such as nonemptiness and containment of automata. Unlike classical automata theory, which focused on automata on finite words, the applications to program specification, verification, and synthesis use automata on infinite words, since the computations in which we are interested are typically infinite. This paper provides an introduction to the theory of automata on infinite words and demonstrates its applications to program specification, verification, and synthesis.

## 1 Introduction

While program verification was always a desirable, but never an easy, task, the advent of concurrent programming has made it significantly more necessary and difficult. Indeed, the conceptual complexity of concurrency increases the likelihood of the program containing errors.
To quote from [OL82]: "There is a rather large body of sad experience to indicate that a concurrent program can withstand very careful scrutiny without revealing its errors." The first step in program verification is to come up with a formal specification of the program. One of the more widely used specification languages for concurrent programs is temporal logic [Pnu77, MP92]. Temporal logic comes in two varieties: linear time and branching time ([EH86, Lam80]); we concentrate here on linear time. (Part of this work was done at the IBM Almaden Research Center.)

A linear temporal specification describes the computations of the program, so a program satisfies the specification (is correct) if all its computations satisfy the specification. Of course, a specification is of interest only if it is satisfiable; an unsatisfiable specification cannot be satisfied by any program. An often advocated approach to program development is to avoid the verification step altogether by using the specification to synthesize a program that is guaranteed to be correct.

Our approach to specification, verification, and synthesis is based on an intimate connection between linear temporal logic and automata theory, which was first discussed explicitly in [WVS83] (see also [LPZ85, Pei85, Sis83, SVW87, VW94]). This connection is based on the fact that a computation is essentially an infinite sequence of states. In the applications that we consider here, every state is described by a finite set of atomic propositions, so a computation can be viewed as an infinite word over the alphabet of truth assignments to the atomic propositions. The basic result in this area is the fact that temporal logic formulas can be viewed as finite-state acceptors. More precisely, given any propositional temporal formula, one can construct a finite automaton on infinite words that accepts precisely the computations that satisfy the formula [VW94]. We will describe the applications of this basic result to satisfiability testing, verification, and synthesis. (For an extensive treatment of the automata-theoretic approach to verification see [Kur94].)

Unlike classical automata theory, which focused on automata on finite words, the applications to specification, verification, and synthesis use automata on infinite words, since the computations in which we are interested are typically infinite. Before going into the applications, we give a basic introduction to the theory of automata on infinite words.
To help the readers build their intuition, we review the theory of automata on finite words and contrast it with the theory of automata on infinite words. For a more advanced introduction to the theory of automata on infinite objects, the readers are referred to [Tho90].

## 2 Automata Theory

We are given a finite nonempty alphabet Σ. A finite word is an element of Σ*, i.e., a finite sequence a0, …, a_{n−1} of symbols from Σ. An infinite word is an element of Σ^ω, i.e., an ω-sequence a0, a1, … of symbols from Σ. (Here ω denotes the first infinite ordinal.) Automata on finite words define (finitary) languages, i.e., sets of finite words, while automata on infinite words define infinitary languages, i.e., sets of infinite words.

### 2.1 Automata on Finite Words – Closure

A (nondeterministic finite) automaton A is a tuple (Σ, S, S⁰, ρ, F), where Σ is a finite nonempty alphabet, S is a finite nonempty set of states, S⁰ ⊆ S is a nonempty set of initial states, F ⊆ S is the set of accepting states, and ρ : S × Σ → 2^S is a transition function. Intuitively, ρ(s, a) is the set of states that A can move into when it is in state s and it reads the symbol a. Note that the automaton may be nondeterministic, since it may have many initial states and the transition function may specify many possible transitions for each state and symbol. The automaton A is deterministic if |S⁰| = 1 and |ρ(s, a)| ≤ 1 for all states s and symbols a. An automaton is essentially an edge-labeled directed graph: the states of the automaton are the nodes, the edges are labeled by symbols in Σ, a certain set of nodes is designated as initial, and a certain set of nodes is designated as accepting. Thus, t ∈ ρ(s, a) means that there is an edge from s to t labeled with a. When A is deterministic, the transition function ρ can be viewed as a partial mapping from S × Σ to S, and can then be extended to a partial mapping from S × Σ* to S as follows: ρ(s, ε) = s and ρ(s, xw) = ρ(ρ(s, x), w) for x ∈ Σ and w ∈ Σ*.

A run of A on a finite word w = a0, …, a_{n−1} is a sequence s0, …, sn of n + 1 states in S such that s0 ∈ S⁰ and s_{i+1} ∈ ρ(s_i, a_i) for 0 ≤ i < n. Note that a nondeterministic automaton can have many runs on a given input word. In contrast, a deterministic automaton can have at most one run on a given input word. The run is accepting if sn ∈ F. One could picture the automaton as having a green light that is switched on whenever the automaton is in an accepting state and switched off whenever the automaton is in a non-accepting state. Thus, the run is accepting if the green light is on at the end of the run. The word w is accepted by A if A has an accepting run on w. When A is deterministic, w ∈ L(A) if and only if ρ(s0, w) ∈ F, where S⁰ = {s0}. The (finitary) language of A, denoted L(A), is the set of finite words accepted by A.

An important property of automata is their closure under Boolean operations. We start by considering closure under union and intersection.

**Proposition 1.** [RS59] Let A1, A2 be automata. Then there is an automaton A such that L(A) = L(A1) ∪ L(A2).

Proof: Let A1 = (Σ, S1, S1⁰, ρ1, F1) and A2 = (Σ, S2, S2⁰, ρ2, F2). Without loss of generality, we assume that S1 and S2 are disjoint. Intuitively, the automaton A nondeterministically chooses A1 or A2 and runs it on the input word. Let A = (Σ, S, S⁰, ρ, F), where S = S1 ∪ S2, S⁰ = S1⁰ ∪ S2⁰, F = F1 ∪ F2, and ρ(s, a) = ρ1(s, a) if s ∈ S1 and ρ(s, a) = ρ2(s, a) if s ∈ S2. It is easy to see that L(A) = L(A1) ∪ L(A2).

We call the automaton A in the proof above the union of A1 and A2, denoted A1 ∪ A2.

**Proposition 2.** [RS59] Let A1, A2 be automata.
Then there is an automaton A such that L(A) = L(A1) ∩ L(A2).

Proof: Let A1 = (Σ, S1, S1⁰, ρ1, F1) and A2 = (Σ, S2, S2⁰, ρ2, F2). Intuitively, the automaton A runs both A1 and A2 on the input word. Let A = (Σ, S, S⁰, ρ, F), where S = S1 × S2, S⁰ = S1⁰ × S2⁰, F = F1 × F2, and ρ((s, t), a) = ρ1(s, a) × ρ2(t, a). It is easy to see that L(A) = L(A1) ∩ L(A2).

We call the automaton A in the proof above the product of A1 and A2, denoted A1 × A2. Note that both the union and the product constructions are effective and polynomial in the size of the constituent automata.
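As an illustration (not from the paper), the union and product constructions can be sketched in Python. The encoding is our own assumption: an automaton is a tuple (Σ, S, S⁰, ρ, F), with ρ a dict mapping (state, symbol) pairs to sets of successor states.

```python
from itertools import product as cart

def nfa_union(A1, A2):
    """Union construction of Proposition 1 (state sets assumed disjoint)."""
    sigma1, S1, S01, rho1, F1 = A1
    sigma2, S2, S02, rho2, F2 = A2
    assert sigma1 == sigma2 and not (S1 & S2)
    # A nondeterministically chooses A1 or A2 via the initial-state set
    rho = {**rho1, **rho2}
    return (sigma1, S1 | S2, S01 | S02, rho, F1 | F2)

def nfa_product(A1, A2):
    """Product construction of Proposition 2 (language intersection)."""
    sigma, S1, S01, rho1, F1 = A1
    _, S2, S02, rho2, F2 = A2
    rho = {((s, t), a): {(s2, t2)
                         for s2 in rho1.get((s, a), set())
                         for t2 in rho2.get((t, a), set())}
           for s in S1 for t in S2 for a in sigma}
    return (sigma, set(cart(S1, S2)), set(cart(S01, S02)), rho,
            set(cart(F1, F2)))

def accepts(A, word):
    """NFA acceptance: track all states reachable on the prefix read so far."""
    sigma, S, S0, rho, F = A
    current = set(S0)
    for a in word:
        current = {t for s in current for t in rho.get((s, a), set())}
    return bool(current & F)
```

Both constructions are, as the text notes, polynomial in the sizes of the constituent automata.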


Let us consider now the issue of complementation. Consider first deterministic automata.

**Proposition 3.** [RS59] Let A = (Σ, S, S⁰, ρ, F) be a deterministic automaton, and let Ā = (Σ, S, S⁰, ρ, S − F). Then L(Ā) = Σ* − L(A).

That is, it is easy to complement deterministic automata; we just have to complement the acceptance condition. This will not work for nondeterministic automata, since a nondeterministic automaton can have many runs on a given input word; it is not enough that some of these runs reject (i.e., do not accept) the input word, all runs should reject the input word. Thus, it seems that to complement a nondeterministic automaton we first have to determinize it.

**Proposition 4.** [RS59] Let A be a nondeterministic automaton. Then there is a deterministic automaton A_d such that L(A_d) = L(A).

Proof: Let A = (Σ, S, S⁰, ρ, F). Then A_d = (Σ, 2^S, {S⁰}, ρ_d, F_d). The state set of A_d consists of all sets of states in S, and it has a single initial state. The set F_d = {T : T ∩ F ≠ ∅} is the collection of sets of states that intersect F nontrivially. Finally, ρ_d(T, a) = {t : t ∈ ρ(s, a) for some s ∈ T}.

Intuitively, A_d collapses all possible runs of A on a given input word into one run over a larger state set. This construction is called the subset construction. By combining Propositions 4 and 3 we can complement a nondeterministic automaton. The construction is effective, but it involves an exponential blow-up, since determinization involves an exponential blow-up (i.e., if A has n states, then A_d has 2^n states). As shown in [MF71], this exponential blow-up for determinization and complementation is unavoidable.

For example, fix some n ≥ 1. The set of all finite words over the alphabet Σ = {a, b} that have an a at the nth position from the right is accepted by the automaton A_n = (Σ, {0, 1, …, n}, {0}, ρ, {n}), where ρ(0, a) = {0, 1}, ρ(0, b) = {0}, and ρ(i, a) = ρ(i, b) = {i + 1} for 0 < i < n. Intuitively, A_n guesses a position in the input word, checks that it contains a, and then checks that it is at distance n from the right end of the input. Suppose that we have a deterministic automaton A_d = (Σ, S, {s0}, ρ_d, F) with fewer than 2^n states that accepts this same language.
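The subset construction of Proposition 4 can be sketched as follows (our own Python encoding of automata as (Σ, S, S⁰, ρ, F) tuples with a dict-based ρ; only reachable subsets are generated, which does not affect the accepted language):

```python
def determinize(A):
    """Subset construction: states of the result are sets of states of A."""
    sigma, S, S0, rho, F = A
    start = frozenset(S0)
    states, rho_d, stack = {start}, {}, [start]
    while stack:
        T = stack.pop()
        for a in sigma:
            # union of the successors of every state in T
            U = frozenset(t for s in T for t in rho.get((s, a), set()))
            rho_d[(T, a)] = {U}          # singleton: deterministic
            if U not in states:
                states.add(U)
                stack.append(U)
    accepting = {T for T in states if T & F}   # subsets meeting F
    return (sigma, states, {start}, rho_d, accepting)

def accepts(A, word):
    """NFA acceptance (works for the determinized automaton too)."""
    sigma, S, S0, rho, F = A
    current = set(S0)
    for a in word:
        current = {t for s in current for t in rho.get((s, a), set())}
    return bool(current & F)
```

Applied to the automaton A_n above, the construction indeed produces 2^n reachable subset-states, matching the lower bound discussed in the text.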
Recall that ρ_d can be viewed as a partial mapping from S × Σ* to S. Since |S| < 2^n, there must be two words u·a·v1 and u·b·v2 of length n for which ρ_d(s0, uav1) = ρ_d(s0, ubv2). But then we would have that ρ_d(s0, uav1u) = ρ_d(s0, ubv2u); that is, either both uav1u and ubv2u are members of L(A_d) or neither are, contradicting the assumption that L(A_d) consists of exactly the words with an a at the nth position from the right, since |av1u| = |bv2u| = n.

### 2.2 Automata on Infinite Words – Closure

Suppose now that an automaton A = (Σ, S, S⁰, ρ, F) is given as input an infinite word w = a0, a1, … over Σ. A run r of A on w is a sequence s0, s1, …, where s0 ∈ S⁰ and s_{i+1} ∈ ρ(s_i, a_i) for all i ≥ 0. Since the run is infinite, we cannot define acceptance by the type of the final state of the run. Instead we have to consider the limit behavior of the run. We define lim(r) to be the set {s : s = s_i for infinitely many i's}, i.e., the set of states that occur in r infinitely often. Since S is finite, lim(r) is necessarily nonempty. The run r is accepting if there is some accepting state that repeats in r infinitely often, i.e., lim(r) ∩ F ≠ ∅. If we picture the automaton as having a green light that is switched on precisely when the automaton is in an accepting state, then the run is accepting if the green light is switched on infinitely many times. The infinite word w is accepted by A if there is an accepting run of A on w. The infinitary language of A, denoted L_ω(A), is the set of infinite words accepted by A.

Thus, A can be viewed both as an automaton on finite words and as an automaton on infinite words. When viewed as an automaton on infinite words it is called a Büchi automaton [Büc62].

Do automata on infinite words have closure properties similar to those of automata on finite words? In most cases the answer is positive, but the proofs may be more involved. We start by considering closure under union. Here the union construction does the right thing.

**Proposition 5.** [Cho74] Let A1, A2 be Büchi automata. Then L_ω(A1 ∪ A2) = L_ω(A1) ∪ L_ω(A2).

One might be tempted to think that similarly we have that L_ω(A1 × A2) = L_ω(A1) ∩ L_ω(A2), but this is not the case. The accepting set of A1 × A2 is the product of the accepting sets of A1 and A2. Thus, A1 × A2 accepts an infinite word w if there are accepting runs r1 and r2 of A1 and A2, respectively, on w, where both runs go infinitely often and simultaneously through accepting states. This requirement is too strong. As a result, L_ω(A1 × A2) could be a strict subset of L_ω(A1) ∩ L_ω(A2). For example, define the two Büchi automata A1 = ({a}, {s, t}, {s}, ρ, {s}) and A2 = ({a}, {s, t}, {s}, ρ, {t}) with ρ(s, a) = {t} and ρ(t, a) = {s}. Clearly we have that L_ω(A1) = L_ω(A2) = {a^ω}, but L_ω(A1 × A2) = ∅.

Nevertheless, closure under intersection does hold.

**Proposition 6.** [Cho74] Let A1, A2 be Büchi automata. Then there is a Büchi automaton A such that L_ω(A) = L_ω(A1) ∩ L_ω(A2).

Proof: Let A1 = (Σ, S1, S1⁰, ρ1, F1) and A2 = (Σ, S2, S2⁰, ρ2, F2).
Let A = (Σ, S, S⁰, ρ, F), where S = S1 × S2 × {1, 2}, S⁰ = S1⁰ × S2⁰ × {1}, F = F1 × S2 × {1}, and (s', t', j) ∈ ρ((s, t, i), a) if s' ∈ ρ1(s, a), t' ∈ ρ2(t, a), and j = i, unless i = 1 and s ∈ F1, in which case j = 2, or i = 2 and t ∈ F2, in which case j = 1.

Intuitively, the automaton A runs both A1 and A2 on the input word. Thus, the automaton can be viewed as having two "tracks", one for each of A1 and A2. In addition to remembering the state of each track, A also has a pointer that points to one of the tracks (1 or 2). Whenever a track goes through an accepting state, the pointer moves to the other track. The acceptance condition guarantees that both tracks visit accepting states infinitely often, since a run accepts iff it goes infinitely often through F1 × S2 × {1}. This means that the first track visits infinitely often an accepting state with the pointer pointing to the first track. Whenever, however, the first track visits an accepting state with the pointer pointing to the first track, the pointer is changed to point to the second track. The pointer returns to point to the first track only if the second track visits an accepting state. Thus, the second track must also visit an accepting state infinitely often.

Thus, Büchi automata are closed under both union and intersection, though the construction for intersection is somewhat more involved than a simple product. The situation is considerably more involved with respect to closure under complementation. First, as we shall shortly see, Büchi automata are not closed under determinization, i.e., nondeterministic Büchi automata are more expressive than deterministic Büchi automata. Second, it is not even obvious how to complement deterministic Büchi automata. Consider the deterministic Büchi automaton A = (Σ, S, S⁰, ρ, F). One may think that it suffices to complement the acceptance condition, i.e., to replace F by S − F and define Ā = (Σ, S, S⁰, ρ, S − F). Not going infinitely often through F, however, is not the same as going infinitely often through S − F: a run might go through both F and S − F infinitely often. Thus, L_ω(Ā) may be a strict superset of Σ^ω − L_ω(A). For example, consider the Büchi automaton A = ({a}, {s, t}, {s}, ρ, {s}) with ρ(s, a) = {t} and ρ(t, a) = {s}. We have that L_ω(A) = L_ω(Ā) = {a^ω}.

Nevertheless, Büchi automata (deterministic as well as nondeterministic) are closed under complementation.

**Proposition 7.** [Büc62] Let A be a Büchi automaton over an alphabet Σ. Then there is a (possibly nondeterministic) Büchi automaton Ā such that L_ω(Ā) = Σ^ω − L_ω(A).

The construction in [Büc62] is doubly exponential. This is improved in [SVW87] to a singly exponential construction with a quadratic exponent (i.e., if A has n states then Ā has c^{n²} states, for some constant c > 1). In contrast, the exponent in the construction of Proposition 4 is linear. We will come back later to the complexity of complementation.

Let us return to the issue of determinization. We now show that nondeterministic Büchi automata are more expressive than deterministic Büchi automata. Consider the infinitary language L = (0 + 1)*1^ω, i.e., L consists of all infinite words in which 0 occurs only finitely many times.
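The two-track construction in the proof of Proposition 6 can be sketched as follows (our own Python encoding: automata as (Σ, S, S⁰, ρ, F) tuples with ρ a dict from (state, symbol) to sets of successors):

```python
def buchi_intersection(A1, A2):
    """Two-track intersection of Proposition 6 for Buchi automata."""
    sigma, S1, S01, rho1, F1 = A1
    _, S2, S02, rho2, F2 = A2
    S = {(s, t, i) for s in S1 for t in S2 for i in (1, 2)}
    S0 = {(s, t, 1) for s in S01 for t in S02}
    F = {(s, t, 1) for s in F1 for t in S2}   # F1 x S2 x {1}
    rho = {}
    for (s, t, i) in S:
        for a in sigma:
            # the pointer flips when the track it points to is accepting
            j = i
            if i == 1 and s in F1:
                j = 2
            elif i == 2 and t in F2:
                j = 1
            rho[((s, t, i), a)] = {(s2, t2, j)
                                   for s2 in rho1.get((s, a), set())
                                   for t2 in rho2.get((t, a), set())}
    return (sigma, S, S0, rho, F)
```

On the counterexample automata above (ρ(s, a) = {t}, ρ(t, a) = {s}, with F1 = {s} and F2 = {t}), the only reachable loop is (s, s, 1) → (t, t, 2) → (s, s, 1), which passes through the accepting state (s, s, 1); so the intersection correctly accepts a^ω even though the naive product does not.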
It is easy to see that L can be defined by a nondeterministic Büchi automaton. Let A = ({0, 1}, {s, t}, {s}, ρ, {t}), where ρ(s, 0) = {s}, ρ(s, 1) = {s, t}, ρ(t, 1) = {t}, and ρ(t, 0) = ∅. That is, the states are s and t, with s the initial state and t the accepting state. As long as it is in the state s, the automaton A can read both inputs 0 and 1. At some point, however, A makes a nondeterministic transition to the state t, and from that point on it can read only the input 1. It is easy to see that L = L_ω(A). In contrast, L cannot be defined by any deterministic Büchi automaton.

**Proposition 8.** Let L = (0 + 1)*1^ω. Then there is no deterministic Büchi automaton A such that L = L_ω(A).

Proof: Assume by way of contradiction that L = L_ω(A), where A = (Σ, S, {s0}, ρ, F) for Σ = {0, 1}, and A is deterministic. Recall that ρ can be viewed as a partial mapping from S × Σ* to S. Consider the infinite word w0 = 1^ω. Clearly, w0 is accepted by A, so A has an accepting run on w0. Thus, w0 has a finite prefix u0 such that ρ(s0, u0) ∈ F. Consider now the infinite word w1 = u0·0·1^ω. Clearly, w1 is also accepted by A, so A has an accepting run on w1. Thus, w1 has a finite prefix u0·0·u1 such that ρ(s0, u0·0·u1) ∈ F. In a similar fashion we can continue to find finite words u_i such that ρ(s0, u0·0·u1·0·…·0·u_i) ∈ F for every i. Since S is finite, there are i, j, where 0 ≤ i < j, such that ρ(s0, u0·0·…·0·u_i) = ρ(s0, u0·0·…·0·u_i·0·…·0·u_j). It follows that A has an accepting run on u0·0·…·0·u_i·(0·u_{i+1}·0·…·0·u_j)^ω. But the latter word has infinitely many occurrences of 0, so it is not in L.

Note that the complementary language Σ^ω − L (the set of infinite words in which 0 occurs infinitely often) is acceptable by the deterministic Büchi automaton A = ({0, 1}, {s, t}, {s}, ρ, {s}), where ρ(s, 0) = ρ(t, 0) = {s} and ρ(s, 1) = ρ(t, 1) = {t}. That is, the automaton starts at the state s and then it simply remembers the last symbol it read (s corresponds to 0 and t corresponds to 1). Thus, the use of nondeterminism in Proposition 7 is essential.

To understand why the subset construction does not work for Büchi automata, consider the following two automata over a singleton alphabet: A1 = ({a}, {s, t}, {s}, ρ1, {t}) and A2 = ({a}, {s, t}, {s}, ρ2, {t}), where ρ1(s, a) = {s, t}, ρ1(t, a) = ∅, ρ2(s, a) = {s, t}, and ρ2(t, a) = {t}. It is easy to see that A1 does not accept any infinite word, since no infinite run of A1 can visit the state t. In contrast, A2 accepts the infinite word a^ω, since the run s·t^ω is accepting. If we apply the subset construction to both automata, then in both cases the initial state is {s}, ρ({s}, a) = {s, t}, and ρ({s, t}, a) = {s, t}. Thus, the subset construction cannot distinguish between A1 and A2.

To be able to determinize automata on infinite words, we have to consider a more general acceptance condition. Let S be a finite nonempty set of states. A Rabin condition is a subset G of 2^S × 2^S, i.e., it is a collection of pairs of sets of states, written [(L1, U1), …, (Lk, Uk)] (we drop the external brackets when the condition consists of a single pair).
A Rabin automaton A is an automaton on infinite words where the acceptance condition is specified by a Rabin condition, i.e., it is of the form (Σ, S, S⁰, ρ, G). A run r of A is accepting if for some pair (Li, Ui) in G we have that lim(r) ∩ Li ≠ ∅ and lim(r) ∩ Ui = ∅; that is, there is a pair in G where the left set is visited infinitely often by r, while the right set is visited only finitely often by r.

Rabin automata are not more expressive than Büchi automata.

**Proposition 9.** [Cho74] Let A be a Rabin automaton. Then there is a Büchi automaton A_b such that L_ω(A_b) = L_ω(A).

Proof: Let A = (Σ, S, S⁰, ρ, G), where G = [(L1, U1), …, (Lk, Uk)]. It is easy to see that L_ω(A) = ⋃_{i=1}^{k} L_ω(A_i), where A_i = (Σ, S, S⁰, ρ, (Li, Ui)). Since Büchi automata are closed under union, by Proposition 5, it suffices to prove the claim for Rabin conditions that consist of a single pair, say (L, U).

The idea of the construction is to take two copies of A, say A1 and A2. The Büchi automaton A_b starts in A1 and stays there as long as it "wants". At some point it nondeterministically makes a transition into A2, and it stays there avoiding U and visiting L infinitely often. Formally, A_b = (Σ, S_b, S⁰, ρ_b, L × {1}), where S_b = S ∪ ((S − U) × {1}), ρ_b(s, a) = ρ(s, a) ∪ {⟨t, 1⟩ : t ∈ ρ(s, a) − U} for s ∈ S, and ρ_b(⟨s, 1⟩, a) = {⟨t, 1⟩ : t ∈ ρ(s, a) − U}.

Note that the construction in the proposition above is effective and polynomial in the size of the given automaton.
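For a single Rabin pair (L, U), the construction in the proof of Proposition 9 can be sketched as follows (our own encoding; copy 0 stands for the original automaton, copy 1 for the part that avoids U):

```python
def rabin_pair_to_buchi(A):
    """Translation of Proposition 9 for a single Rabin pair (L, U)."""
    sigma, S, S0, rho, (L, U) = A
    Sp = S - U                                    # second-copy states avoid U
    states = {(s, 0) for s in S} | {(s, 1) for s in Sp}
    rho_b = {}
    for s in S:
        for a in sigma:
            succ = rho.get((s, a), set())
            # stay in copy 0, or jump nondeterministically into copy 1
            rho_b[((s, 0), a)] = ({(t, 0) for t in succ} |
                                  {(t, 1) for t in succ & Sp})
    for s in Sp:
        for a in sigma:
            succ = rho.get((s, a), set()) & Sp    # never re-enter U
            rho_b[((s, 1), a)] = {(t, 1) for t in succ}
    # Buchi acceptance: visit L inside copy 1 infinitely often
    return (sigma, states, {(s, 0) for s in S0}, rho_b,
            {(s, 1) for s in L & Sp})
```

As the text notes, the translation is polynomial: the result has at most twice as many states as the Rabin automaton for each pair.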


If we restrict attention, however, to deterministic automata, then Rabin automata are more expressive than Büchi automata. Recall the infinitary language L = (0 + 1)*1^ω. We showed earlier that it is not definable by a deterministic Büchi automaton. It is easily definable, however, by a deterministic Rabin automaton. Let A = ({0, 1}, {s, t}, {s}, ρ, ({t}, {s})), where ρ(s, 0) = ρ(t, 0) = {s} and ρ(s, 1) = ρ(t, 1) = {t}. That is, the automaton starts at the state s and then it simply remembers the last symbol it read (s corresponds to 0 and t corresponds to 1). It is easy to see that L = L_ω(A).

The additional expressive power of Rabin automata is sufficient to provide closure under determinization.

**Proposition 10.** [McN66] Let A be a Büchi automaton. There is a deterministic Rabin automaton A_d such that L_ω(A_d) = L_ω(A).

Proposition 10 was first proven in [McN66], where a doubly exponential construction was provided. This was improved in [Saf88], where a singly exponential construction, with an almost linear exponent, was provided (if A has n states, then A_d has 2^{O(n log n)} states and O(n) pairs). Furthermore, it was shown in [Saf88, EJ89] how the determinization construction can be modified to yield a co-determinization construction, i.e., a construction of a deterministic Rabin automaton A'_d such that L_ω(A'_d) = Σ^ω − L_ω(A), where Σ is the underlying alphabet. The co-determinization construction is also singly exponential with an almost linear exponent (again, if A has n states, then A'_d has 2^{O(n log n)} states and O(n) pairs). Thus, combining the co-determinization construction with the polynomial translation of Rabin automata to Büchi automata (Proposition 9), we get a complementation construction whose complexity is singly exponential with an almost linear exponent. This improves the previously mentioned bound on complementation (singly exponential with a quadratic exponent) and is essentially optimal [Mic88]. In contrast, complementation for automata on finite words involves an exponential blow-up with a linear exponent (Section 2.1).
Thus, complementation for automata on infinite words is provably harder than complementation for automata on finite words. Both constructions are exponential, but in the finite case the exponent is linear, while in the infinite case the exponent is nonlinear.

### 2.3 Automata on Finite Words – Algorithms

An automaton is "interesting" if it defines an "interesting" language, i.e., a language that is neither empty nor contains all possible words. An automaton A is nonempty if L(A) ≠ ∅; it is nonuniversal if L(A) ≠ Σ*. One of the most fundamental algorithmic issues in automata theory is testing whether a given automaton is "interesting", i.e., nonempty and nonuniversal. The nonemptiness problem for automata is to decide, given an automaton A, whether A is nonempty. The nonuniversality problem for automata is to decide, given an automaton A, whether A is nonuniversal. It turns out that testing nonemptiness is easy, while testing nonuniversality is hard.

**Proposition 11.** [RS59, Jon75]
1. The nonemptiness problem for automata is decidable in linear time.
2. The nonemptiness problem for automata is NLOGSPACE-complete.


Proof: Let A = (Σ, S, S⁰, ρ, F) be the given automaton. Let s, t be states of A. We say that t is directly connected to s if there is a symbol a ∈ Σ such that t ∈ ρ(s, a). We say that t is connected to s if there is a sequence s1, …, sm, m ≥ 1, of states such that s1 = s, sm = t, and s_{i+1} is directly connected to s_i for 1 ≤ i < m. Essentially, t is connected to s if there is a path in A from s to t, where A is viewed as an edge-labeled directed graph. Note that the edge labels are ignored in this definition. It is easy to see that L(A) is nonempty iff there are states s ∈ S⁰ and t ∈ F such that t is connected to s. Thus, automata nonemptiness is equivalent to graph reachability. The claims now follow from the following observations:

1. A breadth-first-search algorithm can construct in linear time the set of all states connected to a state in S⁰ [CLR90]. A is nonempty iff this set intersects F nontrivially.

2. Graph reachability can be tested in nondeterministic logarithmic space. The algorithm simply guesses a state s0 ∈ S⁰, then guesses a state s1 that is directly connected to s0, then guesses a state s2 that is directly connected to s1, etc., until it reaches a state t ∈ F. (Recall that a nondeterministic algorithm accepts if there is a sequence of guesses that leads to acceptance. We do not care here about sequences of guesses that do not lead to acceptance [GJ79].) At each step the algorithm needs to remember only the current state and the next state; thus, if there are n states, the algorithm needs to keep in memory O(log n) bits, since log n bits suffice to describe one state. On the other hand, graph reachability is also NLOGSPACE-hard [Jon75].

**Proposition 12.** [MS72]
1. The nonuniversality problem for automata is decidable in exponential time.
2. The nonuniversality problem for automata is PSPACE-complete.

Proof: Note that L(A) ≠ Σ* iff Σ* − L(A) ≠ ∅ iff L(Ā) ≠ ∅, where Ā is the complementary automaton of A (see Section 2.1). Thus, to test A for nonuniversality, it suffices to test Ā for nonemptiness.
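The linear-time test in part 1 of Proposition 11 is plain graph reachability from the initial states; a minimal sketch (our own encoding of automata as (Σ, S, S⁰, ρ, F) tuples with a dict-based ρ):

```python
from collections import deque

def nonempty(A):
    """Proposition 11, part 1: is some accepting state reachable
    from an initial state? Breadth-first search, edge labels ignored."""
    sigma, S, S0, rho, F = A
    seen, queue = set(S0), deque(S0)
    while queue:
        s = queue.popleft()
        if s in F:
            return True
        for a in sigma:
            for t in rho.get((s, a), set()):
                if t not in seen:
                    seen.add(t)
                    queue.append(t)
    return False
```

Each state and transition is processed at most once, so the running time is linear in the size of the automaton.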
Recall that Ā is exponentially bigger than A. Since nonemptiness can be tested in linear time, it follows that nonuniversality can be tested in exponential time. Also, since nonemptiness can be tested in nondeterministic logarithmic space, nonuniversality can be tested in polynomial space.

The latter argument requires some care. We cannot simply construct Ā and then test it for nonemptiness, since Ā is exponentially big. Instead, we construct Ā "on-the-fly"; whenever the nonemptiness algorithm wants to move from a state t1 of Ā to a state t2, the algorithm guesses t2 and checks that it is directly connected to t1. Once this has been verified, the algorithm can discard t1. Thus, at each step the algorithm needs to keep in memory at most two states of Ā, and there is no need to generate all of Ā at any single step of the algorithm.

This yields a nondeterministic polynomial-space algorithm. To eliminate nondeterminism, we appeal to a well-known theorem of Savitch [Sav70], which states that NSPACE(f(n)) ⊆ DSPACE(f(n)²) for f(n) ≥ log n; that is, any nondeterministic algorithm that uses at least logarithmic space can be simulated by a deterministic algorithm that uses at most a quadratically larger amount of space. In particular, any nondeterministic polynomial-space algorithm can be simulated by a deterministic polynomial-space algorithm.

To prove PSPACE-hardness, it can be shown that any PSPACE-hard problem can be reduced to the nonuniversality problem. That is, there is a logarithmic-space algorithm that, given a polynomial-space-bounded Turing machine M and a word w, outputs an automaton A_{M,w} such that M accepts w iff A_{M,w} is non-universal [MS72, HU79].

### 2.4 Automata on Infinite Words – Algorithms

The results for Büchi automata are analogous to the results in Section 2.3.

**Proposition 13.**
1. [EL85b, EL85a] The nonemptiness problem for Büchi automata is decidable in linear time.
2. [VW94] The nonemptiness problem for Büchi automata is NLOGSPACE-complete.

Proof: Let A = (Σ, S, S⁰, ρ, F) be the given automaton. We claim that L_ω(A) is nonempty iff there are states s ∈ S⁰ and t ∈ F such that t is connected to s and t is connected to itself. Suppose first that L_ω(A) is nonempty. Then there is an accepting run r = s0, s1, … of A on some input word. Clearly, s_{i+1} is directly connected to s_i for all i ≥ 0. Thus, s_j is connected to s_i whenever i < j. Since r is accepting, some t ∈ F occurs in r infinitely often; in particular, there are i, j, where 0 ≤ i < j, such that t = s_i = s_j. Thus, t is connected to s0 ∈ S⁰, and t is also connected to itself. Conversely, suppose that there are states s ∈ S⁰ and t ∈ F such that t is connected to s and t is connected to itself. Since t is connected to s, there are a sequence of states s1, …, sk and a sequence of symbols a1, …, a_{k−1} such that s1 = s, sk = t, and s_{i+1} ∈ ρ(s_i, a_i) for 1 ≤ i < k. Similarly, since t is connected to itself, there are a sequence of states t1, …, tl and a sequence of symbols b1, …, b_{l−1} such that t1 = tl = t and t_{i+1} ∈ ρ(t_i, b_i) for 1 ≤ i < l. Thus, (s1, …, s_{k−1})(t1, …, t_{l−1})^ω is an accepting run of A on (a1, …, a_{k−1})(b1, …, b_{l−1})^ω, so L_ω(A) is nonempty.
Thus, Büchi automata nonemptiness is also reducible to graph reachability.

1. A depth-first-search algorithm can construct a decomposition of the graph into strongly connected components [CLR90]. A is nonempty iff from a component that intersects S⁰ nontrivially it is possible to reach a nontrivial component that intersects F nontrivially. (A strongly connected component is nontrivial if it contains an edge, which means, since it is strongly connected, that it contains a cycle.)

2. The algorithm simply guesses a state s0 ∈ S⁰, then guesses a state s1 that is directly connected to s0, then guesses a state s2 that is directly connected to s1, etc., until it reaches a state t ∈ F. At that point the algorithm remembers t and continues to move nondeterministically from a state s to a state s' that is directly connected to s, until it reaches t again. Clearly, the algorithm needs only logarithmic memory, since it needs to remember at most a description of three states at each step.
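The "lasso" characterization of Proposition 13 can be sketched directly (our own encoding; this is the straightforward quadratic version of the test, not the linear-time SCC-based one):

```python
def reachable(rho, sigma, sources):
    """All states reachable (in zero or more steps) from `sources`."""
    seen, stack = set(sources), list(sources)
    while stack:
        s = stack.pop()
        for a in sigma:
            for t in rho.get((s, a), set()):
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
    return seen

def buchi_nonempty(A):
    """Proposition 13: some accepting state is reachable from an
    initial state and lies on a cycle."""
    sigma, S, S0, rho, F = A
    for t in reachable(rho, sigma, S0) & F:
        # t lies on a cycle iff t is reachable from its own successors
        succ = {u for a in sigma for u in rho.get((t, a), set())}
        if t in reachable(rho, sigma, succ):
            return True
    return False
```

The accepting run found this way is the lasso (s1, …, s_{k−1})(t1, …, t_{l−1})^ω from the proof above.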


NLOGSPACE-hardness follows from NLOGSPACE-hardness of nonemptiness for automata on finite words.

**Proposition 14.** [SVW87]
1. The nonuniversality problem for Büchi automata is decidable in exponential time.
2. The nonuniversality problem for Büchi automata is PSPACE-complete.

Proof: Again, L_ω(A) ≠ Σ^ω iff Σ^ω − L_ω(A) ≠ ∅ iff L_ω(Ā) ≠ ∅, where Ā is the complementary automaton of A (see Section 2.2). Thus, to test A for nonuniversality, it suffices to test Ā for nonemptiness. Since Ā is exponentially bigger than A and nonemptiness can be tested in linear time, it follows that nonuniversality can be tested in exponential time. Also, since nonemptiness can be tested in nondeterministic logarithmic space, nonuniversality can be tested in polynomial space. Again, the polynomial-space algorithm constructs Ā "on-the-fly". PSPACE-hardness follows easily from the PSPACE-hardness of the universality problem for automata on finite words [Wol82].

### 2.5 Automata on Finite Words – Alternation

Nondeterminism gives a computing device the power of existential choice. Its dual gives a computing device the power of universal choice. (Compare this to the complexity classes NP and co-NP [GJ79].) It is therefore natural to consider computing devices that have the power of both existential choice and universal choice. Such devices are called alternating. Alternation was studied in [CKS81] in the context of Turing machines and in [BL80, CKS81] for finite automata. The alternation formalisms in [BL80] and [CKS81] are different, though equivalent. We follow here the formalism of [BL80].

For a given set X, let B⁺(X) be the set of positive Boolean formulas over X (i.e., Boolean formulas built from elements in X using ∧ and ∨), where we also allow the formulas true and false. Let Y ⊆ X. We say that Y satisfies a formula θ ∈ B⁺(X) if the truth assignment that assigns true to the members of Y and assigns false to the members of X − Y satisfies θ. For example, the sets {s1, s3} and {s1, s4} both satisfy the formula (s1 ∨ s2) ∧ (s3 ∨ s4), while the set {s1, s2} does not satisfy this formula.
Consider a nondeterministic automaton A = (Σ, S, S⁰, ρ, F). The transition function ρ maps a state s ∈ S and an input symbol a ∈ Σ to a set of states. Each element in this set is a possible nondeterministic choice for the automaton's next state. We can represent ρ using B⁺(S); for example, ρ(s, a) = {s1, s2, s3} can be written as ρ(s, a) = s1 ∨ s2 ∨ s3. In alternating automata, ρ(s, a) can be an arbitrary formula from B⁺(S). We can have, for instance, a transition ρ(s, a) = (s1 ∧ s2) ∨ (s3 ∧ s4), meaning that the automaton accepts the word aw, where a is a symbol and w is a word, when it is in the state s, if it accepts the word w from both s1 and s2 or from both s3 and s4. Thus, such a transition combines the features of existential choice (the disjunction in the formula) and universal choice (the conjunctions in the formula).

Formally, an alternating automaton is a tuple A = (Σ, S, s0, ρ, F), where Σ is a finite nonempty alphabet, S is a finite nonempty set of states, s0 ∈ S is the initial state (notice that we have a unique initial state), F ⊆ S is a set of accepting states, and ρ : S × Σ → B⁺(S) is a transition function.

Because of the universal choice in alternating transitions, a run of an alternating automaton is a tree rather than a sequence. A tree is a (finite or infinite) connected directed graph, with one node designated as the root, and in which every non-root node has a unique parent (s is the parent of t, and t is a child of s, if there is an edge from s to t) and the root has no parent. The level of a node x, denoted |x|, is its distance from the root; in particular, the root has level 0. A branch β = x0, x1, … of a tree is a maximal sequence of nodes such that x0 is the root and x_i is the parent of x_{i+1} for all i ≥ 0. Note that β can be finite or infinite. A Σ-labeled tree, for a finite alphabet Σ, is a pair (τ, T), where τ is a tree and T is a mapping from the nodes of τ to Σ that assigns to every node of τ a label in Σ. We often refer to T as the labeled tree. A branch β = x0, x1, … of T defines a word T(β) = T(x0), T(x1), … consisting of the sequence of labels along the branch.

Formally, a run of A on a finite word w = a0, a1, …, a_{n−1} is a finite S-labeled tree r such that r(root) = s0 and the following holds: if |x| = i < n, r(x) = s, and ρ(s, a_i) = θ, then x has children x1, …, xk, for some k ≤ |S|, and {r(x1), …, r(xk)} satisfies θ. For example, if ρ(s0, a0) is (s1 ∨ s2) ∧ (s3 ∨ s4), then the nodes of the run tree at level 1 include the label s1 or the label s2 and also include the label s3 or the label s4. Note that the depth of r (i.e., the maximal level of a node in r) is at most n, but not all branches need to reach such depth, since if ρ(r(x), a_i) = true, then x does not need to have any children. On the other hand, if |x| = i < n and r(x) = s, then we cannot have ρ(s, a_i) = false, since false is not satisfiable.
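The satisfaction relation for B⁺(S) can be sketched directly (our own encoding: formulas as nested ('and'/'or', f, g) tuples, atoms as state names, plus the constants True and False):

```python
# Positive Boolean formulas over a state set, modeled as nested tuples:
# ('and', f, g), ('or', f, g), a state name (atom), or True/False.

def satisfies(T, theta):
    """Does the set of states T satisfy the positive formula theta?"""
    if theta is True or theta is False:
        return theta
    if isinstance(theta, tuple):
        op, f, g = theta
        if op == 'and':
            return satisfies(T, f) and satisfies(T, g)
        return satisfies(T, f) or satisfies(T, g)
    return theta in T      # an atom: true iff the state is in T
```

Since negation never occurs, satisfaction is monotone: enlarging T can never falsify a formula it satisfies.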
The run tree $r$ is accepting if all nodes at depth $n$ are labeled by states in $F$. Thus, a branch in an accepting run has to hit the $\mathit{true}$ transition or hit an accepting state after reading the entire input word.

What is the relationship between alternating automata and nondeterministic automata? It turns out that just as nondeterministic automata have the same expressive power as deterministic automata but are exponentially more succinct, alternating automata have the same expressive power as nondeterministic automata but are exponentially more succinct.

We first show that alternating automata are at least as expressive and as succinct as nondeterministic automata.

Proposition 15. [BL80, CKS81, Lei81] Let $A$ be a nondeterministic automaton. Then there is an alternating automaton $A'$ such that $L(A') = L(A)$.

Proof: Let $A = (\Sigma, S, S^0, \rho, F)$. Then $A' = (\Sigma, S \cup \{s^0\}, s^0, \rho', F)$, where $s^0$ is a new state, and $\rho'$ is defined as follows, for $b \in \Sigma$ and $s \in S$:

$\rho'(s^0, b) = \bigvee_{t \in S^0} \bigvee_{t' \in \rho(t, b)} t'$


$\rho'(s, b) = \bigvee_{t \in \rho(s, b)} t$

(We take an empty disjunction in the definition of $\rho'$ to be equivalent to $\mathit{false}$.) Essentially, the transitions of $A$ are viewed as disjunctions in $A'$. A special treatment is needed for the initial state, since we allow a set of initial states in nondeterministic automata, but only a single initial state in alternating automata. Note that $A'$ has essentially the same size as $A$; that is, the descriptions of $A$ and $A'$ have the same length.

We now show that alternating automata are not more expressive than nondeterministic automata.

Proposition 16. [BL80, CKS81, Lei81] Let $A$ be an alternating automaton. Then there is a nondeterministic automaton $A'$ such that $L(A') = L(A)$.

Proof: Let $A = (\Sigma, S, s^0, \rho, F)$. Then $A' = (\Sigma, 2^S, \{\{s^0\}\}, \rho', 2^F)$, where

$\rho'(T, a) = \{T' \mid T'$ satisfies $\bigwedge_{t \in T} \rho(t, a)\}$

(We take an empty conjunction in the definition of $\rho'$ to be equivalent to $\mathit{true}$; thus, $\emptyset \in \rho'(\emptyset, a)$.) Intuitively, $A'$ guesses a run tree of $A$. At a given point of a run of $A'$, it keeps in its memory a whole level of the run tree of $A$. As it reads the next input symbol, it guesses the next level of the run tree of $A$.

The translation from alternating automata to nondeterministic automata involves an exponential blow-up. As shown in [BL80, CKS81, Lei81], this blow-up is unavoidable. For example, fix some $n \geq 1$, and let $\Sigma = \{a, b\}$. Let $L_n$ be the set of all words that have two different symbols at distance $n$ from each other. That is,

$L_n = \{u a v b w \mid u, w \in \Sigma^*$ and $|v| = n - 1\} \cup \{u b v a w \mid u, w \in \Sigma^*$ and $|v| = n - 1\}$.

It is easy to see that $L_n$ is accepted by the nondeterministic automaton $A = (\Sigma, \{p, q\} \cup (\{a, b\} \times \{1, \ldots, n\}), \{p\}, \rho, \{q\})$, where $\rho(p, a) = \{p, \langle a, 1\rangle\}$, $\rho(p, b) = \{p, \langle b, 1\rangle\}$, $\rho(\langle a, i\rangle, x) = \{\langle a, i + 1\rangle\}$ and $\rho(\langle b, i\rangle, x) = \{\langle b, i + 1\rangle\}$ for $x \in \Sigma$ and $0 < i < n$, $\rho(\langle a, n\rangle, a) = \rho(\langle b, n\rangle, b) = \emptyset$, $\rho(\langle a, n\rangle, b) = \rho(\langle b, n\rangle, a) = \{q\}$, and $\rho(q, x) = \{q\}$ for $x \in \Sigma$. Intuitively, $A$ guesses a position in the input word, reads the input symbol at that position, moves $n$ positions to the right, and checks that it contains a different symbol. Note that $A$ has $2n + 2$ states.
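The subset construction in the proof above can be sketched as follows. The encoding of $B^+(S)$ formulas in disjunctive normal form, as a list of sets of states with one set per disjunct, is my own assumption, chosen so that the satisfaction test is trivial; it is not the paper's notation.

```python
from itertools import combinations

# DNF encoding (mine, not the paper's): a formula is a list of
# frozensets of states; a set T' satisfies it iff T' contains some
# disjunct.  [] encodes false, [frozenset()] encodes true.
def satisfies(Tp, dnf):
    return any(d <= Tp for d in dnf)

def nondet_transition(T, a, rho, states):
    """rho'(T, a) of the subset construction: all sets T' that satisfy
    the conjunction of rho(t, a) over t in T (empty conjunction = true)."""
    subsets = [frozenset(c) for r in range(len(states) + 1)
               for c in combinations(sorted(states), r)]
    return [Tp for Tp in subsets
            if all(satisfies(Tp, rho(t, a)) for t in T)]

# rho(s0, 'a') = (s1 and s2) or s3
table = {('s0', 'a'): [frozenset({'s1', 's2'}), frozenset({'s3'})]}
succ = nondet_transition(frozenset({'s0'}), 'a',
                         lambda t, a: table[(t, a)], {'s1', 's2', 's3'})
assert frozenset({'s3'}) in succ and frozenset({'s1', 's2'}) in succ
assert frozenset({'s1'}) not in succ
```

Enumerating all subsets makes the exponential blow-up of the construction explicit; a practical implementation would generate only the reachable subsets on the fly.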
By Propositions 15 and 17 (below), there is an alternating automaton with $2n + 3$ states that accepts the complementary language $\overline{L_n} = \Sigma^* - L_n$. Suppose that we have a nondeterministic automaton $A_{nd} = (\Sigma, S, S^0, \rho_{nd}, F)$ with fewer than $2^n$ states that accepts $\overline{L_n}$. Thus, $A_{nd}$ accepts all words $ww$, where $w \in \Sigma^n$. Since $|S| < 2^n$, there are two distinct words $u, v \in \Sigma^n$ with accepting runs $s_0, \ldots, s_{2n}$ of $A_{nd}$ on $uu$ and $t_0, \ldots, t_{2n}$ of $A_{nd}$ on $vv$ such that $s_n = t_n$. Thus, $s_0, \ldots, s_n, t_{n+1}, \ldots, t_{2n}$ is an accepting run of $A_{nd}$ on $uv$, but $uv \notin \overline{L_n}$, since $u \neq v$ implies that $uv$ has two different symbols at distance $n$ from each other.

One advantage of alternating automata is that it is easy to complement them. We first need to define the dual operation on formulas in $B^+(S)$. Intuitively, the dual $\overline{\theta}$ of a


formula $\theta$ is obtained from $\theta$ by switching $\vee$ and $\wedge$, and by switching $\mathit{true}$ and $\mathit{false}$. For example, $\overline{x \vee (y \wedge z)} = x \wedge (y \vee z)$. (Note that we are considering formulas in $B^+(S)$, so we cannot simply apply negation to these formulas.) Formally, we define the dual operation as follows: $\overline{s} = s$ for $s \in S$, $\overline{\mathit{true}} = \mathit{false}$, $\overline{\mathit{false}} = \mathit{true}$, $\overline{\alpha \wedge \beta} = (\overline{\alpha} \vee \overline{\beta})$, and $\overline{\alpha \vee \beta} = (\overline{\alpha} \wedge \overline{\beta})$.

Suppose now that we are given an alternating automaton $A = (\Sigma, S, s^0, \rho, F)$. Define $\overline{A} = (\Sigma, S, s^0, \overline{\rho}, S - F)$, where $\overline{\rho}(s, a) = \overline{\rho(s, a)}$ for all $s \in S$ and $a \in \Sigma$. That is, $\overline{\rho}$ is the dualized transition function.

Proposition 17. [BL80, CKS81, Lei81] Let $A$ be an alternating automaton. Then $L(\overline{A}) = \Sigma^* - L(A)$.

By combining Propositions 11 and 16, we can obtain a nonemptiness test for alternating automata.

Proposition 18. [CKS81]
1. The nonemptiness problem for alternating automata is decidable in exponential time.
2. The nonemptiness problem for alternating automata is PSPACE-complete.

Proof: All that remains to be shown is the PSPACE-hardness of nonemptiness. Recall that PSPACE-hardness of nonuniversality was shown in Proposition 12 by a generic reduction. That is, there is a logarithmic-space algorithm that, given a polynomial-space-bounded Turing machine $M$ and a word $w$, outputs an automaton $A_{M,w}$ such that $M$ accepts $w$ iff $A_{M,w}$ is nonuniversal. By Proposition 15, there is an alternating automaton $A'$ such that $L(A') = L(A_{M,w})$ and $A'$ has essentially the same size as $A_{M,w}$. By Proposition 17, $L(\overline{A'}) = \Sigma^* - L(A')$. Thus, $A_{M,w}$ is nonuniversal iff $\overline{A'}$ is nonempty.

2.6 Automata on Infinite Words - Alternation

We saw earlier that a nondeterministic automaton can be viewed both as an automaton on finite words and as an automaton on infinite words. Similarly, an alternating automaton can also be viewed as an automaton on infinite words, in which case it is called an alternating Büchi automaton [MS87]. Let $A = (\Sigma, S, s^0, \rho, F)$ be an alternating Büchi automaton. A run of $A$ on an infinite word $w = a_0, a_1, \ldots$ is a (possibly infinite) $S$-labeled tree $r$ such that $r(\varepsilon) = s^0$ and the following holds: if $|x| = i$, $r(x) = s$, and $\rho(s, a_i) = \theta$, then $x$ has children $x_1, \ldots, x_k$, for some $k \leq |S|$, and $\{r(x_1), \ldots, r(x_k)\}$ satisfies $\theta$.
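Under the same hypothetical tuple encoding used earlier (mine, not the paper's), the dual operation is a one-screen recursion:

```python
# Encoding (mine): "true", "false", a state name (string), or
# ("and", f, g) / ("or", f, g).

def dual(formula):
    """Switch and/or and true/false; states are left unchanged."""
    if formula == "true":
        return "false"
    if formula == "false":
        return "true"
    if isinstance(formula, str):          # a state: unchanged
        return formula
    op, left, right = formula
    return ("or" if op == "and" else "and", dual(left), dual(right))

# The dual of x or (y and z) is x and (y or z).
theta = ("or", "x", ("and", "y", "z"))
assert dual(theta) == ("and", "x", ("or", "y", "z"))
assert dual(dual(theta)) == theta         # dualization is an involution
```

Dualizing every transition formula and complementing the accepting set is all that Proposition 17 requires, which is why complementation of alternating automata is linear.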


The run $r$ is accepting if every infinite branch in $r$ includes infinitely many labels in $F$. Note that the run can also have finite branches; if $|x| = i$, $r(x) = s$, and $\rho(s, a_i) = \mathit{true}$, then $x$ does not need to have any children.

As with alternating automata on finite words, alternating Büchi automata are as expressive as nondeterministic Büchi automata. We first show that alternating Büchi automata are at least as expressive and as succinct as nondeterministic Büchi automata. The proof of the following proposition is identical to the proof of Proposition 15.

Proposition 19. [MS87] Let $A$ be a nondeterministic Büchi automaton. Then there is an alternating Büchi automaton $A'$ such that $L(A') = L(A)$.

As the reader may expect by now, alternating Büchi automata are not more expressive than nondeterministic Büchi automata. The proof of this fact, however, is more involved than the proof in the finite-word case.

Proposition 20. [MH84] Let $A$ be an alternating Büchi automaton. Then there is a nondeterministic Büchi automaton $A'$ such that $L(A') = L(A)$.

Proof: As in the finite-word case, $A'$ guesses a run of $A$. At a given point of a run of $A'$, it keeps in its memory a whole level of the run of $A$ (which is a tree). As it reads the next input symbol, it guesses the next level of the run tree of $A$. The nondeterministic automaton, however, also has to keep information about occurrences of accepting states in order to make sure that every infinite branch hits accepting states infinitely often. To that end, $A'$ partitions every level of the run of $A$ into two sets, to distinguish between branches that hit $F$ recently and branches that did not hit $F$ recently. Let $A = (\Sigma, S, s^0, \rho, F)$.
Then $A' = (\Sigma, 2^S \times 2^S, \langle\{s^0\}, \emptyset\rangle, \rho', \{\emptyset\} \times 2^S)$; that is, each state is a pair of sets of states of $A$, and the single initial state is the pair consisting of the singleton set $\{s^0\}$ and the empty set. For $U \neq \emptyset$ we have

$\rho'(\langle U, V\rangle, a) = \{\langle U', V'\rangle \mid$ there exist $X, Y \subseteq S$ such that $X$ satisfies $\bigwedge_{t \in U} \rho(t, a)$, $Y$ satisfies $\bigwedge_{t \in V} \rho(t, a)$, $U' = X - F$, and $V' = Y \cup (X \cap F)\}$,

and

$\rho'(\langle \emptyset, V\rangle, a) = \{\langle U', V'\rangle \mid$ there exists $Y \subseteq S$ such that $Y$ satisfies $\bigwedge_{t \in V} \rho(t, a)$, $U' = Y - F$, and $V' = Y \cap F\}$.

The proof that this construction is correct requires a careful analysis of the accepting runs of $A$. An important feature of this construction is that the blowup is exponential.

While complementation of alternating automata is easy (Proposition 17), this is not the case for alternating Büchi automata. Here we run into the same difficulty that we ran into in Section 2.2: not going infinitely often through accepting states is not the same as going infinitely often through non-accepting states. From Propositions 7, 19 and 20


it follows that alternating Büchi automata are closed under complementation, but the precise complexity of complementation in this case is not known.

Finally, by combining Propositions 13 and 20, we can obtain a nonemptiness test for alternating Büchi automata.

Proposition 21.
1. The nonemptiness problem for alternating Büchi automata is decidable in exponential time.
2. The nonemptiness problem for alternating Büchi automata is PSPACE-complete.

Proof: All that remains to be shown is the PSPACE-hardness of nonemptiness. We show that the nonemptiness problem for alternating automata is reducible to the nonemptiness problem for alternating Büchi automata. Let $A = (\Sigma, S, s^0, \rho, F)$ be an alternating automaton. Consider the alternating Büchi automaton $A' = (\Sigma, S, s^0, \rho', \emptyset)$, where $\rho'(s, a) = \rho(s, a)$ for $s \in S - F$ and $a \in \Sigma$, and $\rho'(s, a) = \mathit{true}$ for $s \in F$ and $a \in \Sigma$. We claim that $L(A) \neq \emptyset$ iff $L(A') \neq \emptyset$. Suppose first that $u \in L(A)$ for some $u \in \Sigma^*$. Then there is an accepting run $r$ of $A$ on $u$. But then $r$ is also an accepting run of $A'$ on $uw$ for all $w \in \Sigma^\omega$, because $\rho'(s, a) = \mathit{true}$ for $s \in F$ and $a \in \Sigma$, so $uw \in L(A')$. Suppose, on the other hand, that $w \in L(A')$ for some $w \in \Sigma^\omega$. Then there is an accepting run $r$ of $A'$ on $w$. Since $A'$ has no accepting state, $r$ cannot have infinite branches, so by König's Lemma it must be finite. Thus, there is a finite prefix $u$ of $w$ such that $r$ is an accepting run of $A$ on $u$, so $L(A) \neq \emptyset$.

3 Linear Temporal Logic and Automata on Infinite Words

Formulas of linear-time propositional temporal logic (LTL) are built from a set $\mathit{Prop}$ of atomic propositions and are closed under the application of Boolean connectives, the unary temporal connective $X$ (next), and the binary temporal connective $U$ (until) [Pnu77, GPSS80]. LTL is interpreted over computations. A computation is a function $\pi: \omega \to 2^{\mathit{Prop}}$, which assigns truth values to the elements of $\mathit{Prop}$ at each time instant (natural number).
For a computation $\pi$ and a point $i \in \omega$, we have that:

- $\pi, i \models p$ for $p \in \mathit{Prop}$ iff $p \in \pi(i)$.
- $\pi, i \models \xi \wedge \psi$ iff $\pi, i \models \xi$ and $\pi, i \models \psi$.
- $\pi, i \models \neg\psi$ iff not $\pi, i \models \psi$.
- $\pi, i \models X\psi$ iff $\pi, i + 1 \models \psi$.
- $\pi, i \models \xi \mathbin{U} \psi$ iff for some $j \geq i$, we have $\pi, j \models \psi$ and for all $k$, $i \leq k < j$, we have $\pi, k \models \xi$.

Thus, the formula $\mathit{true} \mathbin{U} \varphi$, abbreviated as $F\varphi$, says that $\varphi$ holds eventually, and the formula $\neg F \neg\varphi$, abbreviated $G\varphi$, says that $\varphi$ holds henceforth. For example, the formula $G(\neg\mathit{request} \vee (\mathit{request} \mathbin{U} \mathit{grant}))$ says that whenever a request is made it holds continuously until it is eventually granted. We will say that $\pi$ satisfies a formula $\varphi$, denoted $\pi \models \varphi$, iff $\pi, 0 \models \varphi$.

Computations can also be viewed as infinite words over the alphabet $2^{\mathit{Prop}}$. We shall see that the set of computations satisfying a given formula are exactly those accepted


by some finite automaton on infinite words. This fact was proven first in [SPH84]. The proof there is by induction on the structure of formulas. Unfortunately, certain inductive steps involve an exponential blow-up (e.g., negation corresponds to complementation, which we have seen to be exponential). As a result, the complexity of that translation is nonelementary, i.e., it may involve an unbounded stack of exponentials (that is, the complexity bound is of the form $2^{2^{\cdot^{\cdot^{|\varphi|}}}}$, where the height of the stack is $O(|\varphi|)$).

The following theorem establishes a very simple translation between LTL and alternating Büchi automata.

Theorem 22. [MSS88, Var94] Given an LTL formula $\varphi$, one can build an alternating Büchi automaton $A_\varphi = (\Sigma, S, s^0, \rho, F)$, where $\Sigma = 2^{\mathit{Prop}}$ and $|S|$ is in $O(|\varphi|)$, such that $L(A_\varphi)$ is exactly the set of computations satisfying the formula $\varphi$.

Proof: The set $S$ of states consists of all subformulas of $\varphi$ and their negations (we identify the formula $\neg\neg\psi$ with $\psi$). The initial state $s^0$ is $\varphi$ itself. The set $F$ of accepting states consists of all formulas in $S$ of the form $\neg(\xi \mathbin{U} \psi)$. It remains to define the transition function $\rho$.

In this construction, we use a variation of the notion of dual that we used in Section 2.5. Here, the dual $\overline{\theta}$ of a formula $\theta$ is obtained from $\theta$ by switching $\vee$ and $\wedge$, by switching $\mathit{true}$ and $\mathit{false}$, and, in addition, by negating subformulas in $S$; e.g., $\overline{q \wedge Xq}$ is $\neg q \vee \neg Xq$. More formally, $\overline{\psi} = \neg\psi$ for $\psi \in S$, $\overline{\mathit{true}} = \mathit{false}$, $\overline{\mathit{false}} = \mathit{true}$, $\overline{\alpha \wedge \beta} = (\overline{\alpha} \vee \overline{\beta})$, and $\overline{\alpha \vee \beta} = (\overline{\alpha} \wedge \overline{\beta})$.

We can now define $\rho$:

- $\rho(p, a) = \mathit{true}$ if $p \in a$; $\rho(p, a) = \mathit{false}$ if $p \notin a$.
- $\rho(\xi \wedge \psi, a) = \rho(\xi, a) \wedge \rho(\psi, a)$.
- $\rho(\neg\psi, a) = \overline{\rho(\psi, a)}$.
- $\rho(X\psi, a) = \psi$.
- $\rho(\xi \mathbin{U} \psi, a) = \rho(\psi, a) \vee (\rho(\xi, a) \wedge \xi \mathbin{U} \psi)$.

Note that $\rho(\psi, a)$ is defined by induction on the structure of $\psi$.

Consider now a run $r$ of $A_\varphi$. It is easy to see that $r$ can have two types of infinite branches. Each infinite branch is labeled from some point on by a formula of the form $\xi \mathbin{U} \psi$ or by a formula of the form $\neg(\xi \mathbin{U} \psi)$. Since $\rho(\neg(\xi \mathbin{U} \psi), a) = \overline{\rho(\psi, a)} \wedge (\overline{\rho(\xi, a)} \vee \neg(\xi \mathbin{U} \psi))$, an infinite branch labeled from some point on by $\neg(\xi \mathbin{U} \psi)$ ensures that $\xi \mathbin{U} \psi$ indeed fails at that point, since $\psi$ fails from that point on.
On the other hand, an infinite branch labeled from some point on by $\xi \mathbin{U} \psi$ does not ensure that $\xi \mathbin{U} \psi$ holds at that point, since it does not ensure that $\psi$ eventually holds. Thus, while we should allow infinite


branches labeled by $\neg(\xi \mathbin{U} \psi)$, we should not allow infinite branches labeled by $\xi \mathbin{U} \psi$. This is why we defined $F$ to consist of all formulas in $S$ of the form $\neg(\xi \mathbin{U} \psi)$.

Example 1. Consider the formula $\varphi = (X\neg p) \mathbin{U} q$. The alternating Büchi automaton associated with $\varphi$ is $A_\varphi = (2^{\{p,q\}}, \{\varphi, \neg\varphi, X\neg p, \neg X\neg p, \neg p, p, q, \neg q\}, \varphi, \rho, \{\neg\varphi\})$, where $\rho$ is described in the following table.

$s$ | $\rho(s, \{p,q\})$ | $\rho(s, \{p\})$ | $\rho(s, \{q\})$ | $\rho(s, \emptyset)$
$\varphi$ | $\mathit{true}$ | $\neg p \wedge \varphi$ | $\mathit{true}$ | $\neg p \wedge \varphi$
$\neg\varphi$ | $\mathit{false}$ | $p \vee \neg\varphi$ | $\mathit{false}$ | $p \vee \neg\varphi$
$X\neg p$ | $\neg p$ | $\neg p$ | $\neg p$ | $\neg p$
$\neg X\neg p$ | $p$ | $p$ | $p$ | $p$
$p$ | $\mathit{true}$ | $\mathit{true}$ | $\mathit{false}$ | $\mathit{false}$
$\neg p$ | $\mathit{false}$ | $\mathit{false}$ | $\mathit{true}$ | $\mathit{true}$
$q$ | $\mathit{true}$ | $\mathit{false}$ | $\mathit{true}$ | $\mathit{false}$
$\neg q$ | $\mathit{false}$ | $\mathit{true}$ | $\mathit{false}$ | $\mathit{true}$

In the state $\varphi$, if $q$ does not hold in the present state, then $A_\varphi$ requires both $X\neg p$ to be satisfied in the present state (that is, $\neg p$ has to be satisfied in the next state), and $\varphi$ to be satisfied in the next state. As $\neg\varphi \in F$, $A_\varphi$ should eventually reach a state that satisfies $q$. Note that many of the states, e.g., the subformulas $q$ and $X\neg p$, are not reachable; i.e., they do not appear in any run of $A_\varphi$.

By applying Proposition 20, we now get:

Corollary 23. [VW94] Given an LTL formula $\varphi$, one can build a Büchi automaton $A_\varphi = (\Sigma, S, S^0, \rho, F)$, where $\Sigma = 2^{\mathit{Prop}}$ and $|S|$ is in $2^{O(|\varphi|)}$, such that $L(A_\varphi)$ is exactly the set of computations satisfying the formula $\varphi$.

The proof of Corollary 23 in [VW94] is direct and does not go through alternating Büchi automata. The advantage of the proof here is that it separates the logic from the combinatorics. Theorem 22 handles the logic, while Proposition 20 handles the combinatorics.

Example 2. Consider the formula $\varphi = FGp$, which requires $p$ to hold from some point on. The Büchi automaton associated with $\varphi$ is $A_\varphi = (2^{\{p\}}, \{0, 1\}, \{0\}, \rho, \{1\})$, where $\rho$ is described in the following table.

$s$ | $\rho(s, \{p\})$ | $\rho(s, \emptyset)$
0 | $\{0, 1\}$ | $\{0\}$
1 | $\{1\}$ | $\emptyset$

The automaton $A_\varphi$ can stay forever in the state 0. Upon reading $p$, however, $A_\varphi$ can choose to go to the state 1. Once $A_\varphi$ has made that transition, it has to keep reading $p$; otherwise it rejects. Note that $A_\varphi$ has to make the transition to the state 1 at some point, since the state 0 is not accepting. Thus, $A_\varphi$ accepts precisely when $p$ holds from some point on.
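The transition function $\rho$ of Theorem 22 can be sketched directly from its defining clauses. The tuple encoding of formulas and the representation of letters as sets of propositions are my own conventions, not the paper's.

```python
# Encoding (mine): a proposition is a string; other formulas are tuples
# ("not", f), ("and", f, g), ("or", f, g), ("X", f), ("U", f, g).
# A letter is a set of propositions.  "true"/"false" are constants.

def neg(f):
    # identify not(not(f)) with f, as in the proof of Theorem 22
    return f[1] if isinstance(f, tuple) and f[0] == "not" else ("not", f)

def dual(f):
    """Switch and/or and true/false; negate state subformulas."""
    if f == "true":
        return "false"
    if f == "false":
        return "true"
    if isinstance(f, tuple) and f[0] in ("and", "or"):
        op = "or" if f[0] == "and" else "and"
        return (op, dual(f[1]), dual(f[2]))
    return neg(f)                 # a state, i.e., a subformula

def rho(f, a):
    if isinstance(f, str):        # atomic proposition
        return "true" if f in a else "false"
    op = f[0]
    if op == "and":
        return ("and", rho(f[1], a), rho(f[2], a))
    if op == "not":
        return dual(rho(f[1], a))
    if op == "X":
        return f[1]
    if op == "U":                 # rho(psi) or (rho(xi) and (xi U psi))
        return ("or", rho(f[2], a), ("and", rho(f[1], a), f))
    raise ValueError(op)

# phi = (X not p) U q, as in Example 1:
phi = ("U", ("X", ("not", "p")), "q")
assert rho(phi, {"q"}) == ("or", "true", ("and", ("not", "p"), phi))
assert rho(phi, set()) == ("or", "false", ("and", ("not", "p"), phi))
```

Simplifying the resulting formulas (e.g., collapsing "or true ..." to "true") reproduces the entries of the table in Example 1.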


4 Applications

4.1 Satisfiability

An LTL formula $\varphi$ is satisfiable if there is some computation $\pi$ such that $\pi \models \varphi$. An unsatisfiable formula is uninteresting as a specification, so unsatisfiability most likely indicates an erroneous specification. The satisfiability problem for LTL is to decide, given an LTL formula $\varphi$, whether $\varphi$ is satisfiable.

Theorem 24. [SC85] The satisfiability problem for LTL is PSPACE-complete.

Proof: By Corollary 23, given an LTL formula $\varphi$, we can construct a Büchi automaton $A_\varphi$, whose size is exponential in the length of $\varphi$, that accepts precisely the computations that satisfy $\varphi$. Thus, $\varphi$ is satisfiable iff $A_\varphi$ is nonempty. This reduces the satisfiability problem to the nonemptiness problem. Since nonemptiness of Büchi automata can be tested in nondeterministic logarithmic space (Proposition 13) and since $A_\varphi$ is of exponential size, we get a polynomial-space algorithm (again, the algorithm constructs $A_\varphi$ "on-the-fly").

To prove PSPACE-hardness, it can be shown that any PSPACE-hard problem can be reduced to the satisfiability problem. That is, there is a logarithmic-space algorithm that, given a polynomial-space-bounded Turing machine $M$ and a word $w$, outputs an LTL formula $\varphi_{M,w}$ such that $M$ accepts $w$ iff $\varphi_{M,w}$ is satisfiable.

An LTL formula $\varphi$ is valid if for every computation $\pi$ we have that $\pi \models \varphi$. A valid formula is also uninteresting as a specification. The validity problem for LTL is to decide, given an LTL formula $\varphi$, whether $\varphi$ is valid. It is easy to see that $\varphi$ is valid iff $\neg\varphi$ is not satisfiable. Thus, the validity problem for LTL is also PSPACE-complete.

4.2 Verification

We focus here on finite-state programs, i.e., programs in which the variables range over finite domains. The significance of this class follows from the fact that a significant number of the communication and synchronization protocols studied in the literature are in essence finite-state programs [Liu89, Rud87]. Since each state is characterized by a finite amount of information, this information can be described by certain atomic propositions.
This means that a finite-state program can be specified using propositional temporal logic. Thus, we assume that we are given a finite-state program and an LTL formula that specifies the legal computations of the program. The problem is to check whether all computations of the program are legal. Before going further, let us define these notions more precisely.

A finite-state program over a set $\mathit{Prop}$ of atomic propositions is a structure of the form $P = (W, w_0, R, V)$, where $W$ is a finite set of states, $w_0 \in W$ is the initial state, $R \subseteq W \times W$ is a total accessibility relation, and $V: W \to 2^{\mathit{Prop}}$ assigns truth values to propositions in $\mathit{Prop}$ for each state in $W$. The intuition is that $W$ describes all the states that the program could be in (where a state includes the content of the memory, registers, buffers, location counter, etc.), $R$ describes all the possible transitions between states (allowing for nondeterminism), and $V$ relates the states to the propositions (e.g., it tells us in what states the proposition request is true). The assumption that $R$ is total


(i.e., that every state has a child) is for technical convenience. We can view a terminated execution as repeating forever its last state.

Let $u$ be an infinite sequence $u_0, u_1, \ldots$ of states in $W$ such that $u_0 = w_0$, and $u_i R u_{i+1}$ for all $i \geq 0$. Then the sequence $V(u_0), V(u_1), \ldots$ is a computation of $P$. We say that $P$ satisfies an LTL formula $\varphi$ if all computations of $P$ satisfy $\varphi$. The verification problem is to check whether $P$ satisfies $\varphi$.

The complexity of the verification problem can be measured in three different ways. First, one can fix the specification $\varphi$ and measure the complexity with respect to the size of the program. We call this measure the program-complexity measure. More precisely, the program complexity of the verification problem is the complexity of the sets $\{P \mid P$ satisfies $\varphi\}$ for a fixed $\varphi$. Secondly, one can fix the program $P$ and measure the complexity with respect to the size of the specification. We call this measure the specification-complexity measure. More precisely, the specification complexity of the verification problem is the complexity of the sets $\{\varphi \mid P$ satisfies $\varphi\}$ for a fixed $P$. Finally, the complexity in the combined size of the program and the specification is the combined complexity.

Let $C$ be a complexity class. We say that the program complexity of the verification problem is in $C$ if $\{P \mid P$ satisfies $\varphi\} \in C$ for any formula $\varphi$. We say that the program complexity of the verification problem is hard for $C$ if $\{P \mid P$ satisfies $\varphi\}$ is hard for $C$ for some formula $\varphi$. We say that the program complexity of the verification problem is complete for $C$ if it is in $C$ and is hard for $C$. Similarly, we say that the specification complexity of the verification problem is in $C$ if $\{\varphi \mid P$ satisfies $\varphi\} \in C$ for any program $P$; we say that the specification complexity of the verification problem is hard for $C$ if $\{\varphi \mid P$ satisfies $\varphi\}$ is hard for $C$ for some program $P$; and we say that the specification complexity of the verification problem is complete for $C$ if it is in $C$ and is hard for $C$.

We now describe the automata-theoretic approach to the verification problem.
A finite-state program $P = (W, w_0, R, V)$ can be viewed as a Büchi automaton $A_P = (\Sigma, W, \{w_0\}, \rho, W)$, where $\Sigma = 2^{\mathit{Prop}}$ and $t \in \rho(s, a)$ iff $(s, t) \in R$ and $a = V(s)$. As this automaton has a set of accepting states equal to the whole set of states, any infinite run of the automaton is accepting. Thus, $L(A_P)$ is the set of computations of $P$.

Hence, for a finite-state program $P$ and an LTL formula $\varphi$, the verification problem is to verify that all infinite words accepted by the automaton $A_P$ satisfy the formula $\varphi$. By Corollary 23, we know that we can build a Büchi automaton $A_\varphi$ that accepts exactly the computations satisfying the formula $\varphi$. The verification problem thus reduces to the automata-theoretic problem of checking that all computations accepted by the automaton $A_P$ are also accepted by the automaton $A_\varphi$; that is, $L(A_P) \subseteq L(A_\varphi)$. Equivalently, we need to check that the automaton that accepts $L(A_P) \cap \overline{L(A_\varphi)}$ is empty, where $\overline{L(A_\varphi)} = \Sigma^\omega - L(A_\varphi)$.

First, note that, by Corollary 23, $\overline{L(A_\varphi)} = L(A_{\neg\varphi})$ and the automaton $A_{\neg\varphi}$ has $2^{O(|\varphi|)}$ states. (A straightforward approach, starting with the automaton $A_\varphi$ and then using Proposition 7 to complement it, would result in a doubly exponential blow-up.) To get the intersection of the two automata, we use Proposition 6. Consequently, we can build an automaton for $L(A_P) \cap L(A_{\neg\varphi})$ having $|W| \cdot 2^{O(|\varphi|)}$ states. We need to check this automaton for emptiness. Using Proposition 13, we get the following results.
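The reduction just described can be sketched end-to-end on a toy instance. All names below are mine, and the Büchi automaton for the negated specification is supplied by hand rather than built from a formula; nonemptiness is checked by looking for a reachable accepting state that lies on a cycle.

```python
# Sketch (my own encoding): the program is a labeled graph; spec_next
# is the transition relation of a Buchi automaton for NOT phi, read
# over the labels V(s).  The program violates phi iff the product has
# a reachable accepting lasso.

def product_nonempty(init, prog_next, label, spec_init, spec_next, accepting):
    start = [(init, q) for q in spec_init]
    seen, stack, edges = set(start), list(start), {}
    while stack:                      # build the reachable product graph
        s, q = stack.pop()
        succs = [(s2, q2) for s2 in prog_next[s]
                 for q2 in spec_next.get((q, label[s]), [])]
        edges[(s, q)] = succs
        for t in succs:
            if t not in seen:
                seen.add(t)
                stack.append(t)

    def reaches(src, dst):            # plain DFS reachability
        visited, st = set(), [src]
        while st:
            for v in edges.get(st.pop(), []):
                if v == dst:
                    return True
                if v not in visited:
                    visited.add(v)
                    st.append(v)
        return False

    # an accepting product state on a reachable cycle => counterexample
    return any(reaches(x, x) for x in seen if x[1] in accepting)

# Program: one state 0 with a self-loop.  Spec automaton: F(not p),
# the negation of G p, with accepting sink "q1".
spec = {("q0", frozenset()): ["q0", "q1"], ("q0", frozenset({"p"})): ["q0"],
        ("q1", frozenset()): ["q1"], ("q1", frozenset({"p"})): ["q1"]}
assert product_nonempty(0, {0: [0]}, {0: frozenset()},          # p false
                        ["q0"], spec, {"q1"})                   # violates G p
assert not product_nonempty(0, {0: [0]}, {0: frozenset({"p"})}, # p true
                            ["q0"], spec, {"q1"})               # satisfies G p
```

A production checker would use nested depth-first search to find the lasso in linear time and would build the product on the fly; this quadratic sketch only illustrates the structure of the reduction.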


Theorem 25. [LP85, SC85, VW86]
1. The program complexity of the verification problem is complete for NLOGSPACE.
2. The specification complexity of the verification problem is complete for PSPACE.
3. Checking whether a finite-state program $P$ satisfies an LTL formula $\varphi$ can be done in time $O(|W| \cdot 2^{O(|\varphi|)})$ or in space $O((|\varphi| + \log|W|)^2)$.

We note that a time upper bound that is polynomial in the size of the program and exponential in the size of the specification is considered here to be reasonable, since the specification is usually rather short [LP85]. For a practical verification algorithm that is based on the automata-theoretic approach, see [CVWY92].

4.3 Synthesis

In the previous section we dealt with verification: we are given a finite-state program and an LTL specification, and we have to verify that the program meets the specification. A frequent criticism against this approach, however, is that verification is done after significant resources have already been invested in the development of the program. Since programs invariably contain errors, verification simply becomes part of the debugging process. The critics argue that the desired goal is to use the specification in the program development process in order to guarantee the design of correct programs. This is called program synthesis. It turns out that to solve the program-synthesis problem we need to use automata on infinite trees.

Rabin Tree Automata. Rabin tree automata run on infinite labeled trees with a uniform branching degree (recall the definition of labeled trees in Section 2.5). The (infinite) $k$-ary tree $\tau_k$ is the set $\{1, \ldots, k\}^*$, i.e., the set of all finite sequences over $\{1, \ldots, k\}$. The elements of $\tau_k$ are the nodes of the tree. If $x$ and $xi$ are nodes of $\tau_k$, then there is an edge from $x$ to $xi$; i.e., $x$ is the parent of $xi$ and $xi$ is a child of $x$. The empty sequence $\varepsilon$ is the root of $\tau_k$. A branch $\beta = x_0, x_1, \ldots$ of $\tau_k$ is an infinite sequence of nodes such that $x_0 = \varepsilon$, and $x_i$ is the parent of $x_{i+1}$ for all $i \geq 0$.
A $\Sigma$-labeled $k$-ary tree, for a finite alphabet $\Sigma$, is a mapping $\tau: \tau_k \to \Sigma$ that assigns to every node a label. We often refer to labeled trees as trees; the intention will be clear from the context. A branch $\beta = x_0, x_1, \ldots$ of $\tau$ defines an infinite word $\tau(\beta) = \tau(x_0), \tau(x_1), \ldots$ consisting of the sequence of labels along the branch.

A $k$-ary Rabin tree automaton $A$ is a tuple $(\Sigma, S, S^0, \rho, G)$, where $\Sigma$ is a finite alphabet, $S$ is a finite set of states, $S^0 \subseteq S$ is a set of initial states, $G = \{(L_1, U_1), \ldots, (L_m, U_m)\}$ with $L_i, U_i \subseteq S$ is a Rabin condition, and $\rho: S \times \Sigma \to 2^{S^k}$ is a transition function. The automaton takes as input $\Sigma$-labeled $k$-ary trees. Note that $\rho(s, a)$ is a set of $k$-tuples for each state $s$ and symbol $a$. Intuitively, when the automaton is in state $s$ and it is reading a node $x$, it nondeterministically chooses a $k$-tuple $\langle s_1, \ldots, s_k\rangle$ in $\rho(s, \tau(x))$ and then makes $k$ copies of itself and moves to the node $xi$ in the state $s_i$ for $i = 1, \ldots, k$. A run $r: \tau_k \to S$ of $A$ on a $\Sigma$-labeled $k$-ary tree $\tau$ is an $S$-labeled $k$-ary tree such that the root is labeled by an initial state and the transitions obey the transition function $\rho$; that is, $r(\varepsilon) \in S^0$, and for each node $x$ we have $\langle r(x1), \ldots, r(xk)\rangle \in \rho(r(x), \tau(x))$. The run $r$ is accepting if $r(\beta)$ satisfies $G$ for every branch $\beta = x_0, x_1, \ldots$ of $\tau_k$. That is,


for every branch $\beta = x_0, x_1, \ldots$, there is some pair $(L, U)$ in $G$ such that $r(x_i) \in L$ for infinitely many $i$'s, but $r(x_i) \in U$ for only finitely many $i$'s. Note that different branches might be satisfied by different pairs in $G$. The language of $A$, denoted $L(A)$, is the set of trees accepted by $A$. It is easy to see that Rabin automata on infinite words are essentially 1-ary Rabin tree automata.

The nonemptiness problem for Rabin tree automata is to decide, given a Rabin tree automaton $A$, whether $L(A)$ is nonempty. Unlike the nonemptiness problem for automata on finite and infinite words, the nonemptiness problem for tree automata is highly nontrivial. It was shown to be decidable in [Rab69], but the algorithm there had nonelementary time complexity; i.e., its time complexity could not be bounded by any fixed stack of exponential functions. Later on, elementary algorithms were described in [HR72, Rab72]. The algorithm in [HR72] runs in doubly exponential time and the algorithm in [Rab72] runs in exponential time. Several years later, in [Eme85, VS85], it was shown that the nonemptiness problem for Rabin tree automata is in NP. Finally, in [EJ88], it was shown that the problem is NP-complete.

There are two relevant size parameters for Rabin tree automata. The first is the transition size, which is the size of the transition function (i.e., the sum of the sizes of the sets $\rho(s, a)$ for $s \in S$ and $a \in \Sigma$); the transition size clearly takes into account the number of states in $S$. The second is the number of pairs in the acceptance condition $G$. For our application here we need a complexity analysis of the nonemptiness problem that takes into account the two parameters separately.

Proposition 26. [EJ88, PR89] For Rabin tree automata with transition size $m$ and $n$ pairs, the nonemptiness problem can be solved in time $(mn)^{O(n)}$.

In other words, the nonemptiness problem for Rabin tree automata can be solved in time that is exponential in the number of pairs but polynomial in the transition size. As we will see, this distinction is quite significant.
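Along a single ultimately periodic branch, checking the Rabin condition takes only a few lines, since the states visited infinitely often are exactly the states on the loop. The list-based encoding is a sketch of mine, not the paper's notation.

```python
# A run along one branch is given as prefix + loop^omega (lists of
# states); pairs is the Rabin condition as a list of (L, U) pairs.

def rabin_accepts_branch(prefix, loop, pairs):
    """Accept iff some pair (L, U) has a loop state in L and no loop
    state in U; the loop states are exactly those seen infinitely often."""
    inf = set(loop)
    return any(inf & set(L) and not (inf & set(U)) for L, U in pairs)

# One pair ({1}, {2}): state 1 must recur, state 2 must not.
pairs = [({1}, {2})]
assert rabin_accepts_branch([0], [1, 3], pairs)      # 1 recurs, 2 absent
assert not rabin_accepts_branch([0], [1, 2], pairs)  # 2 recurs too
assert not rabin_accepts_branch([1], [3], pairs)     # 1 only in the prefix
```

An accepting run of a tree automaton must pass this test on every branch, and, as noted above, different branches may use different pairs.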
Realizability. The classical approach to program synthesis is to extract a program from a proof that the specification is satisfiable. In [EC82, MW84], it is shown how to extract programs from (finite representations of) models of the specification. In the late 1980s, several researchers realized that the classical approach is well suited to closed systems, but not to open systems [Dil89, PR89, ALW89]. In open systems the program interacts with the environment; such programs are called reactive programs [HP85]. A correct reactive program should be able to handle arbitrary actions of the environment. If one applies the techniques of [EC82, MW84] to reactive programs, one obtains programs that can handle only certain actions of the environment. In [PR89, ALW89, Dil89], it is argued that the right way to approach synthesis of reactive programs is to consider the situation as an infinite game between the environment and the program.

We are given a finite set $W$ of states and a valuation $V: W \to 2^{\mathit{Prop}}$. The intuition is that $W$ describes all the observable states that the system can be in. (We will see later why the emphasis here is on observability.) A behavior over $W$ is an infinite word over the alphabet $W$. The intended meaning is that the behavior $w_0, w_1, \ldots$ describes a sequence of states that the system goes through, where the transition from $w_{i-1}$ to $w_i$ was caused by the environment when $i$ is odd and by the program when $i$ is even. That is,


the program makes the first move (into the first state), the environment responds with the second move, the program counters with the third move, and so on. We associate with a behavior $w = w_0, w_1, \ldots$ the computation $V(w) = V(w_0), V(w_1), \ldots$, and say that $w$ satisfies an LTL formula $\varphi$ if $V(w)$ satisfies $\varphi$. The goal of the program is to satisfy the specification $\varphi$ in the face of every possible move by the environment. The program has no control over the environment moves; it only controls its own moves. Thus, the situation can be viewed as an infinite game between the environment and the program, where the goal of the program is to satisfy the specification $\varphi$. Infinite games were introduced in [GS53] and they are of fundamental importance in descriptive set theory [Mos80].

Histories are finite words in $W^*$. The history of a run $w = w_0, w_1, \ldots$ at the even point $i \geq 0$, denoted $\mathit{hist}(w, i)$, is the finite word $w_1, w_3, \ldots, w_{i-1}$ consisting of all states moved to by the environment; the history is the empty sequence for $i = 0$. A program is a function $f: W^* \to W$ from histories to states. The idea is that if the program is scheduled at a point at which the history is $h$, then the program will cause a change into the state $f(h)$. This captures the intuition that the program acts in reaction to the environment's actions. A behavior $w$ over $W$ is a run of the program $f$ if $w_i = f(\mathit{hist}(w, i))$ for all even $i$. That is, all the state transitions caused by the program are consistent with the program $f$. A program $f$ satisfies the specification $\varphi$ if every run of $f$ over $W$ satisfies $\varphi$. Thus, a correct program can then be viewed as a winning strategy in the game against the environment. We say that $\varphi$ is realizable with respect to $W$ and $V$ if there is a program $f$ that satisfies $\varphi$, in which case we say that $f$ realizes $\varphi$. (In the sequel, we often omit explicit mention of $W$ and $V$ when they are clear from the context.) It turns out that satisfiability of $\varphi$ is not sufficient to guarantee realizability of $\varphi$.

Example 3. Consider the case where $\mathit{Prop} = \{p\}$, $W = \{0, 1\}$, $V(0) = \emptyset$, and $V(1) = \{p\}$. Consider the formula $Gp$.
This formula requires that $p$ always be true, and it is clearly satisfiable. There is no way, however, for the program to enforce this requirement, since the environment can always move to the state 0, making $p$ false. Thus, $Gp$ is not realizable. On the other hand, the formula $GFp$, which requires $p$ to hold infinitely often, is realizable; in fact, it is realized by the simple program that maps every history to the state 1. This shows that realizability is a stronger requirement than satisfiability.

Consider now the specification $\varphi$. By Corollary 23, we can build a Büchi automaton $A_\varphi = (\Sigma, S, S^0, \rho, F)$, where $\Sigma = 2^{\mathit{Prop}}$ and $|S|$ is in $2^{O(|\varphi|)}$, such that $L(A_\varphi)$ is exactly the set of computations satisfying the formula $\varphi$. Thus, given a state set $W$ and a valuation $V: W \to 2^{\mathit{Prop}}$, we can also construct a Büchi automaton $A'_\varphi = (W, S, S^0, \rho', F)$ such that $L(A'_\varphi)$ is exactly the set of behaviors satisfying the formula $\varphi$, by simply taking $\rho'(s, w) = \rho(s, V(w))$. It follows that we can assume without loss of generality that the winning condition for the game between the environment and the program is expressed by a Büchi automaton $A$: the program $f$ wins the game if every run of $f$ is accepted by $A$. We thus say that the program $f$ realizes a Büchi automaton $A$ if all its runs are accepted by $A$. We also say then that $A$ is realizable.

It turns out that the realizability problem for Büchi automata is essentially the solvability problem described in [Chu63]. (The winning condition in [Chu63] is expressed


in S1S, the monadic second-order theory of one successor function, but it is known [Büc62] that S1S sentences can be translated to Büchi automata.) The solvability problem was studied in [BL69, Rab72]. It is shown in [Rab72] that this problem can be solved by using Rabin tree automata.

Consider a program $f: W^* \to W$. Suppose without loss of generality that $W = \{1, \ldots, k\}$, for some $k > 0$. The program $f$ can be represented by a $W$-labeled $k$-ary tree $\tau_f$. Consider a node $x = i_1 i_2 \cdots i_m$, where $1 \leq i_j \leq k$ for $j = 1, \ldots, m$. We note that $x$ is a history in $W^*$, and define $\tau_f(x) = f(x)$. Conversely, a $W$-labeled $k$-ary tree $\tau$ defines a program $f_\tau$. Consider a history $h = i_1 i_2 \cdots i_m$, where $1 \leq i_j \leq k$ for $j = 1, \ldots, m$. We note that $h$ is a node of $\tau_k$, and define $f_\tau(h) = \tau(h)$. Thus, $W$-labeled $k$-ary trees can be viewed as programs.

It is not hard to see that the runs of $f$ correspond to the branches of $\tau_f$. Let $\beta = x_0, x_1, \ldots$ be a branch, where $x_0 = \varepsilon$ and $x_j = x_{j-1} i_j$ for $j > 0$. Then $\tau_f(x_0), i_1, \tau_f(x_1), i_2, \ldots$ is a run of $f$, denoted $w(\beta)$. Conversely, if $w = w_0, w_1, \ldots$ is a run of $f$, then $\tau_f$ contains a branch $\beta = x_0, x_1, \ldots$, where $x_0 = \varepsilon$, $x_j = x_{j-1} w_{2j-1}$, and $\tau_f(x_j) = w_{2j}$ for $j > 0$. One way to visualize this is to think of the edge from the parent $x$ to its child $xi$ as labeled by $i$. Thus, the run $w(\beta)$ is the sequence of edge and node labels along $\beta$. We thus refer to the behaviors $w(\beta)$ for branches $\beta$ of a $W$-labeled $k$-ary tree $\tau$ as the runs of $\tau$, and we say that $\tau$ realizes a Büchi automaton $A$ if all the runs of $\tau$ are accepted by $A$. We have thus obtained the following:

Proposition 27. A program $f$ realizes a Büchi automaton $A$ iff the tree $\tau_f$ realizes $A$.

We have thus reduced the realizability problem for LTL specifications to an automata-theoretic problem: given a Büchi automaton $A$, decide if there is a tree $\tau$ that realizes $A$. Our next step is to reduce this problem to the nonemptiness problem for Rabin tree automata. We will construct a Rabin automaton $B_A$ that accepts precisely the trees that realize $A$. Thus, $L(B_A) \neq \emptyset$ iff there is a tree that realizes $A$.

Theorem 28.
Given a Büchi automaton A with n states over an alphabet W = {1, ..., k}, we can construct a k-ary Rabin tree automaton B_A with transition size k·2^O(n log n) and O(n) pairs such that T_ω(B_A) is precisely the set of trees that realize A.

Proof: Consider an input tree τ. The Rabin tree automaton B_A needs to verify that for every branch β of τ we have ρ_β ∈ L_ω(A). Thus, B_A needs to "run A in parallel" on all branches of τ. We first need to deal with the fact that the labels in τ contain information only about the actions of P (while the information on the actions of the environment is implicit in the edges). Suppose that A = (W, S, S^0, ρ, F). We first define a Büchi automaton A' that emulates A by reading pairs of input symbols at a time. Let A' = (W × W, S × {0,1}, S^0 × {0}, ρ', S × {1}), where

ρ'(⟨s, c⟩, (a, b)) = {⟨t, 1⟩ | t ∈ ρ(u, b) for some u ∈ ρ(s, a) ∩ F}
                   ∪ {⟨t, 1⟩ | t ∈ ρ(u, b) ∩ F for some u ∈ ρ(s, a)}
                   ∪ {⟨t, 0⟩ | t ∈ ρ(u, b) − F for some u ∈ ρ(s, a) − F}
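For concreteness, the doubling construction just defined can be sketched in Python. The encoding of automata as dictionaries of transition sets, and all names below, are illustrative assumptions rather than anything fixed by the paper.

```python
# A minimal sketch of the pair-reading construction above, assuming a
# dictionary-based encoding of Buchi automata.  The function name and the
# encoding are illustrative and not taken from the paper.

def pair_automaton(states, letters, init, rho, acc):
    """Build A' = (W x W, S x {0,1}, S0 x {0}, rho', S x {1}) from
    A = (W, S, S0, rho, F), where rho maps (state, letter) to a set of
    successors (a missing key denotes the empty set)."""
    states2 = {(s, c) for s in states for c in (0, 1)}   # S x {0,1}
    init2 = {(s, 0) for s in init}                       # S0 x {0}
    acc2 = {(s, 1) for s in states}                      # S x {1}
    rho2 = {}
    for (s, c) in states2:
        for a in letters:
            for b in letters:
                succ = set()
                for u in rho.get((s, a), set()):         # first step, on a
                    for t in rho.get((u, b), set()):     # second step, on b
                        # the extra bit records whether either step visited F
                        succ.add((t, 1 if (u in acc or t in acc) else 0))
                rho2[((s, c), (a, b))] = succ
    return states2, init2, rho2, acc2

# The automaton for Gp over W = {0, 1}: a single accepting state s with a
# self-loop on the letter 1 and no other transitions.
W = [0, 1]
S2, I2, R2, F2 = pair_automaton({'s'}, W, {'s'}, {('s', 1): {'s'}}, {'s'})

# Only the pair (1, 1) keeps a run alive; any pair involving the letter 0 is
# a dead end, mirroring the fact that the environment can falsify p by
# moving to state 0.
assert R2[(('s', 0), (1, 1))] == {('s', 1)}
assert all(not R2[(q, (a, b))] for q in S2 for a in W for b in W if 0 in (a, b))
```

With the accepting set S × {1}, a run of A' visits an accepting state exactly when one of the two emulated steps of A visits F, which is why A' accepts the paired word iff A accepts the original one.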


Intuitively, A' applies two transitions of A while remembering whether either transition visited F. Note that this construction doubles the number of states. It is easy to prove the following claim:

Claim: A' accepts the infinite word (w_0, w_1), (w_2, w_3), ... over the alphabet W × W iff A accepts the infinite word w_0, w_1, w_2, w_3, ... over W.

In order to be able to run A' in parallel on all branches, we apply Proposition 10 to A' and obtain a deterministic Rabin automaton A'' such that L_ω(A'') = L_ω(A'). As commented in Section 2.2, A'' has 2^O(n log n) states and O(n) pairs. Let A'' = (W × W, Q, {q^0}, δ, G). We can now construct a Rabin tree automaton B_A that "runs A'' in parallel" on all branches of τ. Let B_A = (W, Q, {q^0}, δ', G), where δ' is defined as follows:

δ'(q, a) = ⟨δ(q, (a, 1)), ..., δ(q, (a, k))⟩

Intuitively, B_A emulates A'' by feeding it pairs consisting of a node label and an edge label. Note that if δ(q, (a, i)) = ∅ for some 1 ≤ i ≤ k, then δ'(q, a) = ∅.

Claim: T_ω(B_A) is precisely the set of trees that realize A.

It remains to analyze the size of B_A. It is clear that it has 2^O(n log n) states and O(n) pairs. Since it is deterministic, its transition size is k·2^O(n log n).

We can now apply Proposition 26 to solve the realizability problem.

Theorem 29. [ALW89, PR89] The realizability problem for Büchi automata can be solved in exponential time.

Proof: By Theorem 28, given a Büchi automaton A with n states over an alphabet W = {1, ..., k}, we can construct a k-ary Rabin tree automaton B_A with transition size k·2^O(n log n) and O(n) pairs such that T_ω(B_A) is precisely the set of trees that realize A. By Proposition 26, we can test the nonemptiness of B_A in time k^O(n)·2^O(n² log n).

Corollary 30. [PR89] The realizability problem for LTL can be solved in doubly exponential time.

Proof: By Corollary 23, given an LTL formula φ, one can build a Büchi automaton A_φ with 2^O(|φ|) states such that L_ω(A_φ) is exactly the set of computations satisfying the formula φ. By combining this with the bound of Theorem 29, we get a time bound that is doubly exponential in |φ|.

In [PR89], it is shown that the doubly exponential time bound of Corollary 30 is essentially optimal. Thus, while the realizability problem for LTL is decidable, in the worst case it can be highly intractable.

Example 4. Consider again the situation where Prop = {p}, V(0) = ∅ and V(1) = {p}. Let ψ be the formula Gp. We have A_ψ^W = ({0,1}, {s}, {s}, ρ, {s}), where ρ(s, 1) = {s}, and all other transitions are empty (e.g., ρ(s, 0) = ∅). Note that A_ψ^W is deterministic. We can emulate A_ψ^W by an automaton A' that reads pairs of symbols: A' = ({0,1} × {0,1}, {s} × {0,1}, {⟨s, 0⟩}, ρ', {s} × {1}), where ρ'(⟨s, c⟩, (1, 1)) = {⟨s, 1⟩} and all other transitions are empty. Finally, we construct the Rabin tree automaton


B = ({0,1}, {s} × {0,1}, {⟨s, 0⟩}, δ, {(L, U)}), where δ(q, a) is empty for all states q and symbols a, since ρ'(q, (a, 0)) = ∅ for every state q and symbol a. Clearly, T_ω(B) = ∅, which implies that Gp is not realizable.

We note that Corollary 30 only tells us how to decide whether an LTL formula is realizable or not. It is shown in [PR89], however, that the algorithm of Proposition 26 can provide more than just a "yes/no" answer. When the Rabin automaton B_A is nonempty, the algorithm returns a finite representation of an infinite tree accepted by B_A. It turns out that this representation can be converted into a program P that realizes the specification. It even turns out that this program is a finite-state program. This means that there are a finite set N, a function μ : W* → N, a function g : N → W, and a function η : N × W → N such that for all h ∈ W* and w ∈ W we have:

P(h) = g(μ(h))
μ(hw) = η(μ(h), w)

Thus, instead of remembering the history h (which requires an unbounded memory), the program needs only to remember μ(h). It performs its action g(μ(h)) and, when it sees the environment's action w, it updates its memory to η(μ(h), w). Note that this "memory" is internal to the program and is not pertinent to the specification. This is in contrast to the observable states in W, which are pertinent to the specification.

Acknowledgements

I am grateful to Øystein Haugen, Orna Kupferman, and Faron Moller for their many comments on earlier drafts of this paper.

References

[ALW89] M. Abadi, L. Lamport, and P. Wolper. Realizable and unrealizable concurrent program specifications. In Proc. 16th Int. Colloquium on Automata, Languages and Programming, volume 372 of Lecture Notes in Computer Science, pages 1–17. Springer-Verlag, July 1989.
[BL69] J.R. Büchi and L.H. Landweber. Solving sequential conditions by finite-state strategies. Trans. AMS, 138:295–311, 1969.
[BL80] J.A. Brzozowski and E. Leiss. Finite automata, and sequential networks. Theoretical Computer Science, 10:19–35, 1980.
[Büc62] J.R. Büchi. On a decision method in restricted second order arithmetic. In Proc. Internat. Congr. Logic, Method and Philos. Sci. 1960, pages 1–12, Stanford, 1962.
Stanford University Press.
[Cho74] Y. Choueka. Theories of automata on ω-tapes: A simplified approach. J. Computer and System Sciences, 8:117–141, 1974.
[Chu63] A. Church. Logic, arithmetics, and automata. In Proc. International Congress of Mathematicians, 1962, pages 23–35. Institut Mittag-Leffler, 1963.
[CKS81] A.K. Chandra, D.C. Kozen, and L.J. Stockmeyer. Alternation. Journal of the Association for Computing Machinery, 28(1):114–133, 1981.
[CLR90] T.H. Cormen, C.E. Leiserson, and R.L. Rivest. Introduction to Algorithms. MIT Press, 1990.


[CVWY92] C. Courcoubetis, M.Y. Vardi, P. Wolper, and M. Yannakakis. Memory-efficient algorithms for the verification of temporal properties. Formal Methods in System Design, 1:275–288, 1992.
[Dil89] D.L. Dill. Trace Theory for Automatic Hierarchical Verification of Speed-Independent Circuits. MIT Press, 1989.
[EC82] E.A. Emerson and E.M. Clarke. Using branching time logic to synthesize synchronization skeletons. Science of Computer Programming, 2:241–266, 1982.
[EH86] E.A. Emerson and J.Y. Halpern. Sometimes and not never revisited: On branching versus linear time. Journal of the ACM, 33(1):151–178, 1986.
[EJ88] E.A. Emerson and C. Jutla. The complexity of tree automata and logics of programs. In Proceedings of the 29th IEEE Symposium on Foundations of Computer Science, pages 328–337, White Plains, October 1988.
[EJ89] E.A. Emerson and C. Jutla. On simultaneously determinizing and complementing automata. In Proceedings of the 4th IEEE Symposium on Logic in Computer Science, pages 333–342, 1989.
[EL85a] E.A. Emerson and C.-L. Lei. Modalities for model checking: Branching time logic strikes back. In Proceedings of the Twelfth ACM Symposium on Principles of Programming Languages, pages 84–96, New Orleans, January 1985.
[EL85b] E.A. Emerson and C.-L. Lei. Temporal model checking under generalized fairness constraints. In Proc. 18th Hawaii International Conference on System Sciences, pages 277–288, Hawaii, 1985.
[Eme85] E.A. Emerson. Automata, tableaux, and temporal logics. In Logic of Programs, volume 193 of Lecture Notes in Computer Science, pages 79–87. Springer-Verlag, Berlin, 1985.
[GJ79] M. Garey and D.S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. Freeman and Co., San Francisco, 1979.
[GPSS80] D. Gabbay, A. Pnueli, S. Shelah, and J. Stavi. On the temporal analysis of fairness. In Proceedings of the 7th ACM Symposium on Principles of Programming Languages, pages 163–173, January 1980.
[GS53] D. Gale and F.M.
Stewart. Infinite games of perfect information. Ann. Math. Studies, 28:245–266, 1953.
[HP85] D. Harel and A. Pnueli. On the development of reactive systems. In K. Apt, editor, Logics and Models of Concurrent Systems, volume F-13 of NATO Advanced Summer Institutes, pages 477–498. Springer-Verlag, 1985.
[HR72] R. Hossley and C.W. Rackoff. The emptiness problem for automata on infinite trees. In Proc. 13th IEEE Symp. on Switching and Automata Theory, pages 121–124, 1972.
[HU79] J.E. Hopcroft and J.D. Ullman. Introduction to Automata Theory, Languages and Computation. Addison-Wesley, New York, 1979.
[Jon75] N.D. Jones. Space-bounded reducibility among combinatorial problems. Journal of Computer and System Sciences, 11:68–75, 1975.
[Kur94] R.P. Kurshan. Computer-Aided Verification of Coordinating Processes: The Automata-Theoretic Approach. Princeton University Press, Princeton, New Jersey, 1994.
[Lam80] L. Lamport. Sometimes is sometimes "not never" – on the temporal logic of programs. In Proceedings of the 7th ACM Symposium on Principles of Programming Languages, pages 174–185, January 1980.
[Lei81] E. Leiss. Succinct representation of regular languages by boolean automata. Theoretical Computer Science, 13:323–330, 1981.
[Liu89] M.T. Liu. Protocol engineering. Advances in Computing, 29:79–195, 1989.


[LP85] O. Lichtenstein and A. Pnueli. Checking that finite state concurrent programs satisfy their linear specification. In Proceedings of the Twelfth ACM Symposium on Principles of Programming Languages, pages 97–107, New Orleans, January 1985.
[LPZ85] O. Lichtenstein, A. Pnueli, and L. Zuck. The glory of the past. In Logics of Programs, volume 193 of Lecture Notes in Computer Science, pages 196–218, Brooklyn, 1985. Springer-Verlag, Berlin.
[McN66] R. McNaughton. Testing and generating infinite sequences by a finite automaton. Information and Control, 9:521–530, 1966.
[MF71] A.R. Meyer and M.J. Fischer. Economy of description by automata, grammars, and formal systems. In Proc. 12th IEEE Symp. on Switching and Automata Theory, pages 188–191, 1971.
[MH84] S. Miyano and T. Hayashi. Alternating finite automata on ω-words. Theoretical Computer Science, 32:321–330, 1984.
[Mic88] M. Michel. Complementation is more difficult with automata on infinite words. CNET, Paris, 1988.
[Mos80] Y.N. Moschovakis. Descriptive Set Theory. North-Holland, 1980.
[MP92] Z. Manna and A. Pnueli. The Temporal Logic of Reactive and Concurrent Systems: Specification. Springer-Verlag, Berlin, 1992.
[MS72] A.R. Meyer and L.J. Stockmeyer. The equivalence problem for regular expressions with squaring requires exponential time. In Proc. 13th IEEE Symp. on Switching and Automata Theory, pages 125–129, 1972.
[MS87] D.E. Muller and P.E. Schupp. Alternating automata on infinite trees. Theoretical Computer Science, 54:267–276, 1987.
[MSS88] D.E. Muller, A. Saoudi, and P.E. Schupp. Weak alternating automata give a simple explanation of why most temporal and dynamic logics are decidable in exponential time. In Proceedings of the 3rd IEEE Symposium on Logic in Computer Science, pages 422–427, Edinburgh, July 1988.
[MW84] Z. Manna and P. Wolper. Synthesis of communicating processes from temporal logic specifications. ACM Transactions on Programming Languages and Systems, 6(1):68–93, January 1984.
[OL82] S. Owicki and L. Lamport. Proving liveness properties of concurrent programs. ACM Transactions on Programming Languages and Systems, 4(3):455–495, July 1982.
[Pei85] R. Peikert. ω-regular languages and propositional temporal logic. Technical Report 85-01, ETH, 1985.
[Pnu77] A. Pnueli. The temporal logic of programs. In Proc. 18th IEEE Symposium on Foundations of Computer Science, pages 46–57, 1977.
[PR89] A. Pnueli and R. Rosner. On the synthesis of a reactive module. In Proceedings of the Sixteenth ACM Symposium on Principles of Programming Languages, Austin, January 1989.
[Rab69] M.O. Rabin. Decidability of second order theories and automata on infinite trees. Transactions of the AMS, 141:1–35, 1969.
[Rab72] M.O. Rabin. Automata on infinite objects and Church's problem. In Regional Conf. Ser. Math., 13, Providence, Rhode Island, 1972. AMS.
[RS59] M.O. Rabin and D. Scott. Finite automata and their decision problems. IBM J. of Research and Development, 3:115–125, 1959.
[Rud87] H. Rudin. Network protocols and tools to help produce them. Annual Review of Computer Science, 2:291–316, 1987.
[Saf88] S. Safra. On the complexity of ω-automata. In Proceedings of the 29th IEEE Symposium on Foundations of Computer Science, pages 319–327, White Plains, October 1988.


[Sav70] W.J. Savitch. Relationship between nondeterministic and deterministic tape complexities. J. on Computer and System Sciences, 4:177–192, 1970.
[SC85] A.P. Sistla and E.M. Clarke. The complexity of propositional linear temporal logic. Journal of the Association for Computing Machinery, 32:733–749, 1985.
[Sis83] A.P. Sistla. Theoretical Issues in the Design and Analysis of Distributed Systems. PhD thesis, Harvard University, 1983.
[SPH84] R. Sherman, A. Pnueli, and D. Harel. Is the interesting part of process logic uninteresting: a translation from PL to PDL. SIAM J. on Computing, 13(4):825–839, 1984.
[SVW87] A.P. Sistla, M.Y. Vardi, and P. Wolper. The complementation problem for Büchi automata with applications to temporal logic. Theoretical Computer Science, 49:217–237, 1987.
[Tho90] W. Thomas. Automata on infinite objects. In Handbook of Theoretical Computer Science, pages 165–191, 1990.
[Var94] M.Y. Vardi. Nontraditional applications of automata theory. In Theoretical Aspects of Computer Software, Proc. Int. Symposium (TACS'94), volume 789 of Lecture Notes in Computer Science, pages 575–597. Springer-Verlag, Berlin, 1994.
[VS85] M.Y. Vardi and L. Stockmeyer. Improved upper and lower bounds for modal logics of programs. In Proc. 17th ACM Symp. on Theory of Computing, pages 240–251, 1985.
[VW86] M.Y. Vardi and P. Wolper. An automata-theoretic approach to automatic program verification. In Proceedings of the First Symposium on Logic in Computer Science, pages 322–331, Cambridge, June 1986.
[VW94] M.Y. Vardi and P. Wolper. Reasoning about infinite computations. Information and Computation, 115(1):1–37, 1994.
[Wol82] P. Wolper. Synthesis of Communicating Processes from Temporal Logic Specifications. PhD thesis, Stanford University, 1982.
[WVS83] P. Wolper, M.Y. Vardi, and A.P. Sistla. Reasoning about infinite computation paths. In Proc. 24th IEEE Symposium on Foundations of Computer Science, pages 185–194, Tucson, 1983.