Interpreting a successful testing process: risk and actual coverage
Mariëlle Stoelinga and Mark Timmer
Faculty of Computer Science, University of Twente, The Netherlands
{marielle, timmer}@cs.utwente.nl

Abstract. Testing is inherently incomplete; no test suite will ever be able to test all possible usage scenarios of a system. It is therefore vital to assess the implication of a system passing a test suite. This paper quantifies that implication by means of two distinct, but related, measures: the risk quantifies the confidence in a system after it passes a test suite, i.e. the number of faults still expected to be present (weighted by their severity); the actual coverage quantifies the extent to which faults have been shown absent, i.e. the fraction of possible faults that has been covered. We provide algorithms that calculate these metrics for a given test suite, as well as optimisation algorithms that yield the best test suite for a given optimisation criterion.

1 Introduction

Software becomes more and more complex, making thorough testing an indispensable part of the development process. The U.S. National Institute of Standards and Technology has assessed that software errors cost the American economy almost sixty billion dollars annually. More than a third of these costs could be eliminated if testing occurred earlier in the development process [14]. An important fact about testing is that it is inherently incomplete, since testing everything would require infinitely many input scenarios. On the other hand, passing a well-designed test suite does increase the confidence in the correctness of the tested product. Therefore, it is important to assess the quality of a test suite.
Two fundamental concepts have been put forward to evaluate test suite quality: (1) coverage metrics determine which portion of the requirements and/or implementation under test has been exercised by the test suite; (2) risk-based metrics assess the risk of putting the tested product into operation. Although existing coverage measures, such as code coverage in white-box testing [3, 12] and state and/or transition coverage in black-box testing [8, 13, 19], give an indication of the quality of a test suite, higher coverage does not necessarily imply that more, or more severe, errors are detected. This is because these metrics do not take into account where in the system errors are most likely to occur. Risk-based testing methods do aim at reducing the expected number of errors, or their severity. However, these are often informal [15], based on heuristics [2], or indicate which components should be tested most thoroughly [1], but seldom quantify the risk after a successful testing process in a precise way.
This paper presents a framework in which test coverage and risk can be defined, computed and optimised. Key properties are a rigorous mathematical treatment based on solid probabilistic models, and the result that higher coverage (resp. lower risk) implies a lower expected number of faults.

The starting point of our theory is a weighted fault specification (WFS), which consists of a system specification describing the desired system behaviour, a weight function describing the severity of the errors, and an error function describing the probability that a certain error has been made. From the WFS, we derive its underlying probability model, i.e. a random variable that exactly describes the distribution of erroneous implementations. This allows us to define risk and actual coverage in a simple and precise way. Given a WFS, we define the risk of a test suite as the expected fault weight that remains after this test suite passes. We also introduce actual coverage for a test suite, which quantifies the risk reduction achieved by passing the test suite. Whereas risk is based on errors contained within the entire system, actual coverage only relates to the part of the system we tested. This coincides with the traditional interpretation of coverage. In this way our methods refine [5], which introduces a concept we would call potential coverage, since it considers which faults can be detected during testing. Our measures, on the other hand, take into account the faults that are actually covered during a test execution.

We build up our framework in three steps of increasing complexity. First, we define our theory for systems modelled as finite functions. While being an intermediate step, this model can be used for testing individual functions or modules. Second, the notions are generalised to nondeterministic functions. This adds a factor of uncertainty, since different executions of the same test case may yield different results.
Thus, observing a correct response once does not imply correctness. We therefore add a failure function to the WFS, describing the probability that incorrectly implemented behaviour yields a failure. Finally, we generalise our notions to input-output labelled transition systems (IOLTSs) [18], which are a canonical model for embedded systems testing. This adds a second factor of complexity, since their behaviour is state-dependent, which makes it necessary to consider system traces rather than simple input/output pairs. For optimisation, an estimate of the output probabilities has to be provided, describing the expected probability with which the system chooses between specified outputs.

Finally, while error probabilities, failure probabilities and output probabilities are important ingredients in our framework, techniques for obtaining these probabilities fall outside the scope of this paper. However, there is extensive literature on factors that determine them. For instance, as described in [6], estimates of the error probabilities can be based on the software change history. They can also be based on McCabe's cyclomatic complexity number [10], Halstead's set of Software Science metrics [7], and requirements volatility [9]. The failure probabilities can be obtained by applying one of the many analysis techniques described in [11] and [20], and the output probabilities might be obtained from the developers or derived by measurements (e.g. during earlier test executions).
Organisation of the paper. The measures for risk and actual coverage are discussed for deterministic functions in Section 2, for nondeterministic functions in Section 3, and for IOLTSs in Section 4. Finally, Section 5 provides conclusions and future work. This paper is based on the Master's thesis of Timmer [17].

2 Finite functions

Given a set $L$, the set of all sequences (traces) over $L$ is denoted by $L^*$, and the set of non-empty traces by $L^+$. If $\sigma, \rho \in L^*$, then $\sigma$ is a prefix of $\rho$ (denoted by $\sigma \sqsubseteq \rho$) if there is a $\tau \in L^*$ such that $\sigma\tau = \rho$. If additionally $\tau \in L^+$, then $\sigma$ is a proper prefix of $\rho$ ($\sigma \sqsubset \rho$). We use $\mathcal{P}(L)$ to denote the power set of $L$, and $\mathrm{Distr}(L)$ for the set of all probability distributions over $L$, i.e. all functions $f : L \to [0,1]$ such that $\sum_{s \in L} f(s) = 1$. Given a sequence $X = \langle x_1, \dots, x_n \rangle$, we write $x_i$ for the $i$-th element of $X$ and $y \in X$ if $y = x_i$ for some $i$. We call $|X| = n$ the size of $X$.

Definition 1 (Finite function). A function $f : L_I \to L_O$ is finite if both its domain $L_I$ and its co-domain $L_O$ are finite.

We often write $f_{spec}$ for a function representing a specification. A (potentially incorrect) implementation of $f_{spec}$ is a finite function $f_{impl} : L_I \to L_O$ with the same domain and co-domain as $f_{spec}$.

Example 1. A standard technique in testing is equivalence partitioning, which partitions the inputs of a function into equivalence classes such that one input is representative for all others in its class [12]. Using this technique, functions like sin, max, and abs can be modelled as finite functions. Consider the function $\mathrm{abs} : \mathbb{R} \to \mathbb{R}$, mapping each input to its absolute value. We identify the three equivalence classes $0$, $a$, and $-a$, with $a > 0$. Using these classes, abs can be modelled by $f_{spec} : \{0, 1, -1\} \to \{0, 1\}$ with $f_{spec}(0) = 0$, $f_{spec}(-1) = 1$, and $f_{spec}(1) = 1$. This technique could also be used for modules (sets of functions), by viewing a module as a Cartesian product of functions.
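Example 1 can be sketched in Python as follows; this is an illustrative sketch (not part of the paper), with the dictionary `f_spec` playing the role of the finite specification over the three class representatives.

```python
# Finite-function model of abs via equivalence partitioning (Example 1).
# One representative input per equivalence class: 0, a negative, a positive.
f_spec = {0: 0, -1: 1, 1: 1}

def f_impl(a):
    """Implementation under test; here simply Python's built-in abs."""
    return abs(a)

# The implementation is correct on an equivalence class iff it agrees with
# f_spec on the class representative.
correct = all(f_impl(a) == b for a, b in f_spec.items())
```

Since the built-in `abs` matches the specification on every representative, `correct` is `True`.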
Since it is uncertain which faults are introduced, developing an implementation can be described by a random experiment. For each input, we specify the probability that it is implemented incorrectly: its error probability. These probabilities are assumed to be independent. Additionally, a fault weight is specified for each input, denoting the severity of an incorrect implementation with respect to this input. The higher the fault weight, the higher the severity. A specification together with fault weights and error probabilities constitutes a weighted fault specification.

Definition 2 (Weighted fault specification). A weighted fault specification (WFS) is a tuple $W = \langle f_{spec}, w, p_{err} \rangle$, with
- $f_{spec} : L_I \to L_O$ a finite specification,
- $w : L_I \to \mathbb{R}^{\geq 0}$ a weight function assigning a fault weight to each input, such that $0 < \sum_{a \in L_I} w(a) < \infty$. This constraint allows us to define a coverage notion that is relative to the total weight $\sum_{a \in L_I} w(a)$,
- $p_{err} : L_I \to [0, 1]$ an error function assigning to each input $a \in L_I$ the probability $p_{err}(a)$ that it is implemented incorrectly.

An implementation of $W$ is an implementation of $f_{spec}$. The fault weight of an implementation $f_{impl}$ is the accumulated fault weight of all incorrectly implemented inputs.

Definition 3 (Fault weight). Let $W = \langle f_{spec}, w, p_{err} \rangle$ be a WFS and $f_{impl}$ an implementation of $W$. Then, the fault weight of $f_{impl}$ is defined by $w(f_{impl}) = \sum_{a \in L_I,\, f_{impl}(a) \neq f_{spec}(a)} w(a)$.

A test case $t$ for a specification consists of an input and its expected output. An execution of $t$ consists of an input and the actually observed output. Obviously, an execution is correct if the obtained output is the expected output. Similar notions exist for test suites, i.e., sequences of test cases.

Definition 4 (Test case, suite, execution, and verdict).
(i) A test case for a finite specification $f_{spec} : L_I \to L_O$ is a pair $t = (a, f_{spec}(a))$ with $a \in L_I$. A test suite $T \in (L_I \times L_O)^*$ for $f_{spec}$ is a finite sequence of test cases for $f_{spec}$. We often write $a \in T$ as a shorthand for $(a, f_{spec}(a)) \in T$.
(ii) An execution of a test case $t = (a, b)$ against an implementation $f_{impl}$ is a pair $e = (a, f_{impl}(a))$. An execution $E = \langle e_1, \dots, e_n \rangle$ of a test suite against an implementation is the sequence of its test case executions.
(iii) The verdict functions $v_t : (L_I \times L_O) \to \{\mathrm{pass}, \mathrm{fail}\}$ and $v_T : (L_I \times L_O)^* \to \{\mathrm{pass}, \mathrm{fail}\}$ are defined by
$v_t(e) = \mathrm{pass}$ if $b = f_{spec}(a)$, and $\mathrm{fail}$ otherwise;
$v_T(E) = \mathrm{pass}$ if $\forall i : v_{t_i}(e_i) = \mathrm{pass}$, and $\mathrm{fail}$ otherwise.
For a test suite $T = \langle (a_1, f_{spec}(a_1)), \dots, (a_n, f_{spec}(a_n)) \rangle$, the verdict of executing $T$ against $f_{impl}$ is given by $v_T(f_{impl}) = v_T(\langle (a_1, f_{impl}(a_1)), \dots, (a_n, f_{impl}(a_n)) \rangle)$.

2.1 Underlying probability model

Since we are only interested in which inputs are handled incorrectly, and not in what their incorrect response is, we partition all implementations into classes of implementations that respond correctly to exactly the same inputs.
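The fault weight and verdict notions above can be sketched in Python; all names, weights, and the sample implementation below are illustrative, not taken from the paper.

```python
# Sketch of Definitions 3 and 4: fault weight of an implementation and the
# verdict of executing a test suite against it. Values are illustrative.
f_spec = {0: 0, -1: 1, 1: 1}          # finite specification
w      = {0: 1.0, -1: 2.0, 1: 2.0}    # fault weight per input

def fault_weight(f_impl):
    """Accumulated weight of all incorrectly implemented inputs."""
    return sum(w[a] for a in f_spec if f_impl[a] != f_spec[a])

def verdict(f_impl, suite):
    """pass iff every test case in the suite gets the expected output."""
    return 'pass' if all(f_impl[a] == f_spec[a] for a in suite) else 'fail'

buggy = {0: 0, -1: 0, 1: 1}           # mishandles input -1
```

Here `fault_weight(buggy)` is `2.0`, and a suite that never exercises `-1` passes against `buggy`, illustrating why a passing verdict alone says little without risk or coverage information.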
Definition 5 (Classification relation). Let $f_{spec} : L_I \to L_O$ be a finite specification, and $f$ and $g$ two implementations of $f_{spec}$; then the relation $\equiv_{f_{spec}}$ is defined by $f \equiv_{f_{spec}} g$ iff $\forall a \in L_I : f(a) = f_{spec}(a) \Leftrightarrow g(a) = f_{spec}(a)$. We use $[f]_{f_{spec}}$ to denote the equivalence class of $f$ with respect to $\equiv_{f_{spec}}$, and leave out the subscript $f_{spec}$ whenever no confusion arises.
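As an illustrative sketch (names hypothetical), the classification relation can be realised by mapping each implementation to its pattern of correct and incorrect inputs; two implementations are equivalent iff their patterns coincide.

```python
# Sketch of Definition 5: implementations are equivalent iff they are
# correct on exactly the same inputs, regardless of HOW they are wrong.
f_spec = {0: 0, -1: 1, 1: 1}

def signature(f_impl):
    """Encodes the equivalence class of f_impl as a correctness pattern."""
    return tuple(f_impl[a] == f_spec[a] for a in sorted(f_spec))

g = {0: 0, -1: 0, 1: 1}   # wrong on input -1
h = {0: 0, -1: 5, 1: 1}   # also wrong on -1, but with a different output
```

`g` and `h` produce different wrong outputs for `-1`, yet `signature(g) == signature(h)`: they belong to the same class.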
We can lift the fault weight and the verdict function to classes of implementations by setting $w([f_{impl}]) = w(f_{impl})$ and $v_T([f_{impl}]) = v_T(f_{impl})$. The error function $p_{err}$ induces a random variable $F_W$ over the equivalence classes of $\equiv_{f_{spec}}$. Because of the independence of error probabilities, its probability distribution is straightforward.

Definition 6 ($F_W$). Let $W = \langle f_{spec}, w, p_{err} \rangle$ be a WFS; then we define $F_W : \Omega \to (L_I \to L_O)/\equiv$ to be the random variable describing to which equivalence class (according to $\equiv$) an arbitrary implementation of $W$ belongs. Let $f_{impl} : L_I \to L_O$ be an implementation of $W$; then
$P[F_W = [f_{impl}]] = \prod_{a \in L_I,\, f_{impl}(a) \neq f_{spec}(a)} p_{err}(a) \cdot \prod_{a \in L_I,\, f_{impl}(a) = f_{spec}(a)} (1 - p_{err}(a)).$

2.2 Risk

Having defined a formal framework, we can now define the measures we are actually interested in. First of all, when a test suite passes, we want to estimate how many faults have remained undetected. To also incorporate the severity of these faults, we define the risk of an implementation as the expected fault weight after a test suite passes.

Definition 7 (Risk). Let $W = \langle f_{spec}, w, p_{err} \rangle$ be a WFS and $T$ a test suite for $f_{spec}$; then the risk of an arbitrary implementation of $W$ after it passes $T$ is defined by $\mathrm{risk}_W(T) = E[w(F_W) \mid v_T(F_W) = \mathrm{pass}]$.

For this conditional expectation to be defined, $P[v_T(F_W) = \mathrm{pass}]$ needs to be nonzero, i.e. $\forall a \in T : p_{err}(a) < 1$. Since we only calculate the risk after testing was successful (so it is possible to pass $T$), this imposes no extra restriction. The risk can be computed easily using the formula below. Intuitively, we are certain that the inputs that were tested by $T$ are implemented correctly because of determinism. For the inputs that were not tested, the probability of their presence is still their error probability. Theorem 1 shows that our measure is indeed a form of risk-based testing, since risk is commonly defined as probability times impact.

Theorem 1.
Let $W = \langle f_{spec}, w, p_{err} \rangle$ be a WFS and $T$ a test suite for $f_{spec}$. Then
$\mathrm{risk}_W(T) = \sum_{a \in L_I,\, a \notin T} w(a) \cdot p_{err}(a).$

Proof.
$\mathrm{risk}_W(T) = E[w(F_W) \mid v_T(F_W) = \mathrm{pass}]$
$= \sum_{[f_{impl}] \in (L_I \to L_O)/\equiv} w([f_{impl}]) \cdot P[F_W = [f_{impl}] \mid v_T(F_W) = \mathrm{pass}]$
$= \sum_{[f_{impl}] \in (L_I \to L_O)/\equiv} \Big( \sum_{a \in L_I,\, f_{impl}(a) \neq f_{spec}(a)} w(a) \Big) \cdot P[F_W = [f_{impl}] \mid v_T(F_W) = \mathrm{pass}]$
$= \sum_{a \in L_I,\, a \notin T} w(a) \sum_{[f_{impl}],\, f_{impl}(a) \neq f_{spec}(a)} P[F_W = [f_{impl}] \mid v_T(F_W) = \mathrm{pass}]$
(swapping the summations; classes that mishandle a tested input have conditional probability $0$, so only untested inputs contribute)
$= \sum_{a \in L_I,\, a \notin T} w(a) \cdot P[F_W(a) \neq f_{spec}(a)]$
(by the independence of the error probabilities, passing $T$ does not change the error probability of an untested input)
$= \sum_{a \in L_I,\, a \notin T} w(a) \cdot p_{err}(a)$. □

Note that the initial risk is $\mathrm{risk}_W(\langle\rangle) = \sum_{a \in L_I} w(a) \cdot p_{err}(a)$.

Test optimisation with respect to risk. Given a WFS $W = \langle f_{spec}, w, p_{err} \rangle$ we can derive optimal test suites with respect to two (related) criteria. First, we can compute a test suite $T_{opt\text{-}rfw}$ of size $k$ with minimal risk, i.e. we obtain a minimal expected fault weight after passing $T_{opt\text{-}rfw}$. That is,
$T_{opt\text{-}rfw} = \mathrm{argmin}_{T = \langle t_1, \dots, t_k \rangle}\, \mathrm{risk}_W(T),$
where $\mathrm{argmin}_x f(x)$ denotes the value of $x$ such that $f(x)$ is minimal. $T_{opt\text{-}rfw}$ can be computed by first ordering the inputs $a \in L_I$ in descending order of the product $w(a) \cdot p_{err}(a)$, thus obtaining a sequence $a_1, \dots, a_n$ such that $w(a_i) \cdot p_{err}(a_i) \geq w(a_{i+1}) \cdot p_{err}(a_{i+1})$ for all $1 \leq i < n$. Then, it immediately follows from Theorem 1 that $T_{opt\text{-}rfw} = \langle (a_1, f_{spec}(a_1)), \dots, (a_k, f_{spec}(a_k)) \rangle$. Hence, $T_{opt\text{-}rfw}$ is obtained by selecting the first $k$ test cases in the sequence. Second, given a threshold $x \in \mathbb{R}^{\geq 0}$, we can compute the smallest test suite $T'_{opt\text{-}rfw}$ such that its risk is below $x$, i.e.
$T'_{opt\text{-}rfw} = \mathrm{argmin}_{T \in (L_I \times L_O)^*,\, \mathrm{risk}_W(T) \leq x}\, |T|.$
Clearly, $T'_{opt\text{-}rfw}$ can be computed by a procedure similar to the one for $T_{opt\text{-}rfw}$. To be precise, we order the inputs as above and, choosing the smallest $l$ such that $\mathrm{risk}_W(T'_{opt\text{-}rfw}) \leq x$, we take $T'_{opt\text{-}rfw} = \langle (a_1, f_{spec}(a_1)), \dots, (a_l, f_{spec}(a_l)) \rangle$. Note that since both methods are based on sorting the $n$ inputs, their time complexity is in $O(n \log n)$.

2.3 Actual coverage

Whereas risk gives us information about the entire system, coverage usually only relates to the part of a system we tested. The absolute actual coverage (abscov) of a test suite $T$ is defined as the accumulated fault weight of the inputs that are known to be implemented correctly after $T$ passes. Due to determinism these are just the inputs $a \in T$. To assess the quality of a test suite, we calculate its absolute actual coverage relative to the total amount of fault weight that could potentially be present in the system. This measure is called the relative actual coverage (relcov).

Definition 8 (Coverage measures). Let $W = \langle f_{spec}, w, p_{err} \rangle$ be a WFS, and $T$ a test suite for $f_{spec}$; then
$\mathrm{abscov}_W(T) = \sum_{a \in T} w(a)$, $\quad \mathrm{totcov}_W = \sum_{a \in L_I} w(a)$, $\quad \mathrm{relcov}_W(T) = \mathrm{abscov}_W(T) / \mathrm{totcov}_W$.

Note that since weight functions never sum to zero or infinity, relative actual coverage is well-defined.

Test optimisation with respect to relcov. Again, we can derive optimal test suites for a given WFS $W$ based on a fixed test suite size or a threshold for the measure. Note, however, that for actual coverage we have to maximise instead of minimise. The computation procedures are similar to the ones for optimisation with respect to the risk (given in Section 2.2). To be precise, we can compute a test suite $T_{opt\text{-}cov}$ of size $k$ with maximal relcov, i.e.
$T_{opt\text{-}cov} = \mathrm{argmax}_{T = \langle t_1, \dots, t_k \rangle}\, \mathrm{relcov}_W(T).$
As before, $T_{opt\text{-}cov}$ can be computed by ordering the inputs $a \in L_I$ as a sequence $a_1, \dots, a_n$, but now such that $w(a_i) \geq w(a_{i+1})$. Then, $T_{opt\text{-}cov} = \langle (a_1, f_{spec}(a_1)), \dots, (a_k, f_{spec}(a_k)) \rangle$.
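The computations of Sections 2.2 and 2.3 can be sketched in Python as follows. The weights and error probabilities are illustrative; the functions mirror Theorem 1, Definition 8, and the sorting-based selection of a size-$k$ suite.

```python
# Risk after a passing suite (Theorem 1), relative actual coverage
# (Definition 8), and sorting-based test selection. Values are illustrative.
w     = {'a': 3.0, 'b': 2.0, 'c': 1.0}   # fault weights
p_err = {'a': 0.2, 'b': 0.5, 'c': 0.9}   # error probabilities

def risk(suite):
    """Expected remaining fault weight: sum over the untested inputs."""
    return sum(w[a] * p_err[a] for a in w if a not in suite)

def relcov(suite):
    """Covered fault weight relative to the total fault weight."""
    return sum(w[a] for a in set(suite)) / sum(w.values())

def opt_risk_suite(k):
    """Size-k suite with minimal risk: sort by w(a)*p_err(a), descending."""
    return sorted(w, key=lambda a: w[a] * p_err[a], reverse=True)[:k]
```

With these numbers, `opt_risk_suite(1)` picks `'b'` (its product $w \cdot p_{err} = 1.0$ is the largest), reducing the risk from $2.5$ to $1.5$.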
Second, given a threshold $x \in \mathbb{R}^{\geq 0}$, we can compute the smallest test suite $T'_{opt\text{-}cov}$ such that its relative actual coverage is at least $x$, i.e.
$T'_{opt\text{-}cov} = \mathrm{argmin}_{T \in (L_I \times L_O)^*,\, \mathrm{relcov}_W(T) \geq x}\, |T|.$
Again, $T'_{opt\text{-}cov}$ can be computed by ordering the inputs as above and taking $T'_{opt\text{-}cov} = \langle (a_1, f_{spec}(a_1)), \dots, (a_l, f_{spec}(a_l)) \rangle$, with $l$ the smallest integer such that $\mathrm{relcov}_W(T'_{opt\text{-}cov}) \geq x$. The time complexity of these two methods is also in $O(n \log n)$.

3 Nondeterministic finite functions

Specifications now map each input to a set of correct responses, whereas implementations map each input to a probability distribution over all outputs.

Definition 9 (Nondeterministic finite function). A nondeterministic finite function is a function $f : L_I \to \mathcal{P}(L_O)$ with $L_I$ and $L_O$ finite.

We write $f_{spec}$ for a nondeterministic finite function representing a specification. An implementation of $f_{spec}$ is a finite function $f_{impl} : L_I \to \mathrm{Distr}(L_O)$. We use $f_{impl}[a]$ to denote the set of possible responses of $f_{impl}$ to the input $a$, i.e. $f_{impl}[a] = \{ b \in L_O \mid (f_{impl}(a))(b) > 0 \}$.

We add a failure function to the weighted fault specification, stating the probability that an implementation that incorrectly implements an input $a$ responds incorrectly to it.

Definition 10 (Nondeterministic weighted fault specification). A nondeterministic weighted fault specification (NWFS) is a tuple $\langle f_{spec}, w, p_{err}, p_{fail} \rangle$, with $w$ and $p_{err}$ as before, and
- $f_{spec} : L_I \to \mathcal{P}(L_O)$ a nondeterministic finite specification,
- $p_{fail} : L_I \to [0, 1]$ a failure function assigning to each input $a \in L_I$ the probability $p_{fail}(a)$ that a random implementation $f_{impl}$ with $f_{impl}[a] \not\subseteq f_{spec}(a)$ responds incorrectly to $a$.

The fault weight of an implementation, test executions, and the verdict function are defined as before, except that we now consider implementations correct if they produce a subset of the correct responses to an input [18]. Test cases and test suites remain unchanged.

Definition 11 (Fault weight). Let $W = \langle f_{spec}, w, p_{err}, p_{fail} \rangle$ be an NWFS and $f_{impl}$ an implementation of $W$; then the fault weight of $f_{impl}$ is defined by $w(f_{impl}) = \sum_{a \in L_I,\, f_{impl}[a] \not\subseteq f_{spec}(a)} w(a)$.

Definition 12 (Test execution and verdict). An execution of a test case $t = (a, f_{spec}(a))$ is a pair $e = (a, b)$ with $b \in f_{impl}[a]$. The verdict function $v_t : (L_I \times L_O) \to \{\mathrm{pass}, \mathrm{fail}\}$ is defined by $v_t(e) = \mathrm{pass}$ if $b \in f_{spec}(a)$, and $\mathrm{fail}$ otherwise.

We use $\mathrm{nr}(a, T)$ for the number of times $(a, f_{spec}(a))$ occurs in $T$.
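A minimal Python sketch of the nondeterministic setting (all names and actions are illustrative, in the style of a vending machine): the specification maps each input to a set of allowed outputs, and an execution passes iff the observed output lies in that set.

```python
# Sketch of Definitions 9 and 12: a nondeterministic specification maps each
# input to a SET of allowed outputs; an execution passes iff the observed
# output is allowed. Names are illustrative.
f_spec = {'coin?': {'coffee!', 'tea!'}, 'button?': {'beep!'}}

def verdict(execution):
    """Verdict of a single test execution (input, observed output)."""
    a, b = execution
    return 'pass' if b in f_spec[a] else 'fail'

def nr(a, suite):
    """nr(a, T): how often input a occurs in the test suite."""
    return suite.count(a)
```

Note that passing once is weaker evidence here than in the deterministic case: a faulty implementation may happen to pick an allowed output, which is exactly why the failure function $p_{fail}$ is needed.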
3.1 Underlying probability model

The classification relation also has to incorporate the fact that implementations are now considered correct if they produce a subset of the correct responses.

Definition 13 (Classification relation). Let $f_{spec} : L_I \to \mathcal{P}(L_O)$ be a nondeterministic finite specification, and $f$ and $g$ two implementations of $f_{spec}$. Then $f \equiv_{f_{spec}} g$ iff $\forall a \in L_I : f[a] \subseteq f_{spec}(a) \Leftrightarrow g[a] \subseteq f_{spec}(a)$.

The error function still induces the random variable $F_W$ (introduced in Definition 6), although its range is now $(L_I \to \mathrm{Distr}(L_O))/\equiv_{f_{spec}}$. We use $F_W[a] \subseteq f_{spec}(a)$ as a shorthand for the event that $F_W = [f_{impl}]$ for some $f_{impl}$ with $f_{impl}[a] \subseteq f_{spec}(a)$. For each test suite $T$, the failure function and error function also induce a random variable $V_{W,T}$: the verdict of $T$ against a random implementation of $W$.

Definition 14 ($V_{W,T}$). Let $W = \langle f_{spec}, w, p_{err}, p_{fail} \rangle$ be an NWFS; then we define $V_{W,T} : \Omega \to \{\mathrm{pass}, \mathrm{fail}\}$ to be the random variable denoting the result of executing $T$ against an arbitrary implementation of $W$. For all test suites $T = \langle t_1, \dots, t_n \rangle$ and inputs $a \in L_I$, we use $V^a_{W,T}$ as a shorthand for $V_{W,T'}$ with $T' = \langle t_i \in T \mid t_i = (a, f_{spec}(a)) \rangle$, performing only the test cases for $a$.

Lemma 1. Let $W = \langle f_{spec}, w, p_{err}, p_{fail} \rangle$ be an NWFS and $a \in L_I$; then
$P[V^a_{W,T} = \mathrm{pass}] = (1 - p_{fail}(a))^{\mathrm{nr}(a,T)} \cdot p_{err}(a) + 1 - p_{err}(a).$

Proof.
$P[V^a_{W,T} = \mathrm{pass}]$
$= P[V^a_{W,T} = \mathrm{pass} \mid F_W[a] \subseteq f_{spec}(a)] \cdot P[F_W[a] \subseteq f_{spec}(a)] + P[V^a_{W,T} = \mathrm{pass} \mid F_W[a] \not\subseteq f_{spec}(a)] \cdot P[F_W[a] \not\subseteq f_{spec}(a)]$
$= 1 \cdot (1 - p_{err}(a)) + (1 - p_{fail}(a))^{\mathrm{nr}(a,T)} \cdot p_{err}(a)$. □

By the assumed independence of fault presence between the different inputs, the next proposition immediately follows.

Proposition 1. Let $W = \langle f_{spec}, w, p_{err}, p_{fail} \rangle$ be an NWFS and $T$ a test suite for $f_{spec}$; then
$P[V_{W,T} = \mathrm{pass}] = \prod_{a \in L_I,\, a \in T} \big( (1 - p_{fail}(a))^{\mathrm{nr}(a,T)} \cdot p_{err}(a) + 1 - p_{err}(a) \big).$

3.2 Risk

We will now introduce the concept of risk for NWFSs. It is still the expected fault weight after a test suite passes.
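Lemma 1 and Proposition 1 translate directly into code. The probabilities below are illustrative; the two functions compute the per-input and overall pass probabilities exactly as the formulas state.

```python
# Sketch of Lemma 1 / Proposition 1: probability that a random implementation
# passes a test suite, given error and failure probabilities (illustrative).
p_err  = {'coin?': 0.2, 'button?': 0.4}
p_fail = {'coin?': 0.5, 'button?': 0.9}

def pass_prob_input(a, n):
    """P[V^a = pass] after testing input a n times (Lemma 1)."""
    return (1 - p_fail[a]) ** n * p_err[a] + 1 - p_err[a]

def pass_prob(suite):
    """P[V = pass]: product over the distinct tested inputs (Prop. 1)."""
    prob = 1.0
    for a in set(suite):
        prob *= pass_prob_input(a, suite.count(a))
    return prob
```

As a sanity check, `pass_prob_input('coin?', 0)` is `1.0` (an untested input cannot fail the suite), and each repetition of a test case lowers the pass probability of a faulty implementation.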
Definition 15 (Risk). Let $W = \langle f_{spec}, w, p_{err}, p_{fail} \rangle$ be an NWFS and $T$ a test suite for $f_{spec}$; then the risk of an arbitrary implementation of $W$ after passing $T$ is defined by $\mathrm{risk}_W(T) = E[w(F_W) \mid V_{W,T} = \mathrm{pass}]$.

Before discussing how to calculate the risk, we first look at the probability that an input is implemented incorrectly even though a test suite passes: its posterior error probability (PEP).

Definition 16 (PEP). Let $W = \langle f_{spec}, w, p_{err}, p_{fail} \rangle$ be an NWFS, $T$ a test suite for $f_{spec}$, and $a \in L_I$. Then, the posterior error probability (PEP) of $a$ after $T$ passes is defined by $\mathrm{PEP}_W(a, T) = P[F_W[a] \not\subseteq f_{spec}(a) \mid V_{W,T} = \mathrm{pass}]$.

Using Bayes' formula, the next proposition can be obtained.

Proposition 2. Let $W = \langle f_{spec}, w, p_{err}, p_{fail} \rangle$ be an NWFS, $T$ a test suite for $f_{spec}$, and $a \in L_I$. Then
$\mathrm{PEP}_W(a, T) = \dfrac{(1 - p_{fail}(a))^{\mathrm{nr}(a,T)} \cdot p_{err}(a)}{(1 - p_{fail}(a))^{\mathrm{nr}(a,T)} \cdot p_{err}(a) + 1 - p_{err}(a)}.$

Proof.
$\mathrm{PEP}_W(a, T) = P[F_W[a] \not\subseteq f_{spec}(a) \mid V_{W,T} = \mathrm{pass}]$
$= P[F_W[a] \not\subseteq f_{spec}(a) \mid V^a_{W,T} = \mathrm{pass}]$
$= \dfrac{P[V^a_{W,T} = \mathrm{pass} \mid F_W[a] \not\subseteq f_{spec}(a)] \cdot P[F_W[a] \not\subseteq f_{spec}(a)]}{P[V^a_{W,T} = \mathrm{pass}]}$
$= \dfrac{P[V^a_{W,T} = \mathrm{pass} \mid F_W[a] \not\subseteq f_{spec}(a)] \cdot p_{err}(a)}{P[V^a_{W,T} = \mathrm{pass}]}$
$= \dfrac{(1 - p_{fail}(a))^{\mathrm{nr}(a,T)} \cdot p_{err}(a)}{(1 - p_{fail}(a))^{\mathrm{nr}(a,T)} \cdot p_{err}(a) + 1 - p_{err}(a)}$. □

Since $\mathrm{PEP}_W(a, T)$ only depends on $W$, $a$ and $\mathrm{nr}(a, T)$, we use $\mathrm{PEP}_W(a, n)$ to denote $\mathrm{PEP}_W(a, T)$ with $\mathrm{nr}(a, T) = n$. Because $\mathrm{PEP}_W(a, T)$ is exactly the probability that $w(a)$ should be included in $\mathrm{risk}_W(T)$, the following theorem holds.

Theorem 2. Let $W = \langle f_{spec}, w, p_{err}, p_{fail} \rangle$ be an NWFS and $T$ a test suite for $f_{spec}$. Then, the risk of an implementation of $W$ after $T$ passes is given by $\mathrm{risk}_W(T) = \sum_{a \in L_I} w(a) \cdot \mathrm{PEP}_W(a, T)$.
Proof.
$\mathrm{risk}_W(T) = E[w(F_W) \mid V_{W,T} = \mathrm{pass}]$
$= \sum_{[f_{impl}] \in (L_I \to \mathrm{Distr}(L_O))/\equiv} w([f_{impl}]) \cdot P[F_W = [f_{impl}] \mid V_{W,T} = \mathrm{pass}]$
$= \sum_{[f_{impl}] \in (L_I \to \mathrm{Distr}(L_O))/\equiv} \Big( \sum_{a \in L_I,\, f_{impl}[a] \not\subseteq f_{spec}(a)} w(a) \Big) \cdot P[F_W = [f_{impl}] \mid V_{W,T} = \mathrm{pass}]$
$= \sum_{a \in L_I} w(a) \sum_{[f_{impl}],\, f_{impl}[a] \not\subseteq f_{spec}(a)} P[F_W = [f_{impl}] \mid V_{W,T} = \mathrm{pass}]$
$= \sum_{a \in L_I} w(a) \cdot P[F_W[a] \not\subseteq f_{spec}(a) \mid V_{W,T} = \mathrm{pass}]$
$= \sum_{a \in L_I} w(a) \cdot \mathrm{PEP}_W(a, T)$. □

Note that in the case of determinism $p_{fail}(a) = 1$ for all $a \in L_I$. Substituting this, $\mathrm{risk}_W(T)$ reduces to the formula of Theorem 1, as expected.

Test optimisation with respect to risk. Given an NWFS $W = \langle f_{spec}, w, p_{err}, p_{fail} \rangle$ we can again compute $T_{opt\text{-}rfw} = \mathrm{argmin}_{T = \langle t_1, \dots, t_k \rangle} \mathrm{risk}_W(T)$, i.e. a test suite of size $k$ with minimal risk. Theorem 2 implies that, if an input $a$ is included $n$ times in $T$, it contributes $w(a) \cdot \mathrm{PEP}_W(a, n)$ to the risk. Therefore, including it once more reduces the risk by $w(a) \cdot (\mathrm{PEP}_W(a, n) - \mathrm{PEP}_W(a, n+1))$. We let $r(a, n)$ denote this value. We can now construct $T_{opt\text{-}rfw}$ recursively. Initially, $T_{opt\text{-}rfw} = \langle\rangle$. Then, we include the input $a \in L_I$ such that $r(a, \mathrm{nr}(a, T))$ is maximal. After all, this means that including $a$ in $T$ decreases the risk as much as possible. We continue adding inputs $a_i$ with maximal $r(a_i, \mathrm{nr}(a_i, T))$ until $|T_{opt\text{-}rfw}| = k$. Each time an input is added to $T_{opt\text{-}rfw}$, only one calculation has to be redone, yielding a time complexity of $O(n \log n)$. To obtain $T'_{opt\text{-}rfw} = \mathrm{argmin}_{T \in (L_I \times L_O)^*,\, \mathrm{risk}_W(T) \leq x} |T|$ we perform the same procedure, but now continue adding inputs until $\mathrm{risk}_W(T'_{opt\text{-}rfw}) \leq x$. Obviously, this procedure also has time complexity $O(n \log n)$. Note that this procedure is a generalisation of the procedure for deterministic systems, since in that case $r(a, 0) = w(a) \cdot p_{err}(a)$ and $r(a, n) = 0$ for $n \geq 1$.

3.3 Actual coverage

For WFSs, actual coverage denoted the amount of fault weight that is known to be absent after $T$ passes. However, for NWFSs we often only reduce the
probability of the presence of faults, because of nondeterminism. We introduce the relative error probability reduction (REPR) to denote the extent to which the error probability of a fault decreases as a result of passing a test suite.

Definition 17 (Coverage measures). Let $W = \langle f_{spec}, w, p_{err}, p_{fail} \rangle$ be an NWFS and $T$ a test suite for $f_{spec}$. Then we define
$\mathrm{abscov}_W(T) = \sum_{a \in T} w(a) \cdot \mathrm{REPR}_W(a, T)$, with $\mathrm{REPR}_W(a, T) = \dfrac{p_{err}(a) - \mathrm{PEP}_W(a, T)}{p_{err}(a)}$.
Total coverage and relative coverage remain unchanged.

We use $\mathrm{REPR}_W(a, n)$ to denote $\mathrm{REPR}_W(a, T)$ with $\mathrm{nr}(a, T) = n$. Note that for deterministic systems $\mathrm{PEP}_W(a, T) = 0$ for all $a \in T$, and obviously $\mathrm{PEP}_W(a, T) = p_{err}(a)$ for all $a \notin T$, obtaining Definition 8. Using the definition of PEP, basic calculus yields the next proposition.

Proposition 3. Let $W = \langle f_{spec}, w, p_{err}, p_{fail} \rangle$ be an NWFS and $T$ a test suite for $f_{spec}$; then
$\mathrm{abscov}_W(T) = \sum_{a \in T} w(a) \Big( 1 - \dfrac{(1 - p_{fail}(a))^{\mathrm{nr}(a,T)}}{1 - p_{err}(a) + (1 - p_{fail}(a))^{\mathrm{nr}(a,T)} \cdot p_{err}(a)} \Big).$

Test optimisation with respect to relcov. Given an NWFS $W = \langle f_{spec}, w, p_{err}, p_{fail} \rangle$ we can again derive $T_{opt\text{-}cov} = \mathrm{argmax}_{T = \langle t_1, \dots, t_k \rangle} \mathrm{relcov}_W(T)$ and $T'_{opt\text{-}cov} = \mathrm{argmin}_{T \in (L_I \times L_O)^*,\, \mathrm{relcov}_W(T) \geq x} |T|$. The computations are similar to the ones discussed in Section 3.2. However, by Definition 17 it now holds that $r(a, n) = w(a) \cdot (\mathrm{REPR}_W(a, n+1) - \mathrm{REPR}_W(a, n))$. Clearly, these computations are still in $O(n \log n)$.

4 Labelled transition systems

An IOLTS describes system behaviour by means of states and transitions. Transitions are always caused by either an input action or an output action.

Definition 18 (IOLTS). An IOLTS $A$ is a tuple $(S, s^0, L, \rightarrow)$, where
- $S$ is a finite set of states,
- $s^0 \in S$ is the initial state,
- $L$ is a finite set of actions, partitioned into a set $L_I$ of inputs (suffixed by a question mark) and a set $L_O$ of outputs (suffixed by an exclamation mark),
- $\rightarrow \subseteq S \times L \times S$ is the transition relation, which is required to be deterministic, i.e., $(s, a, s') \in \rightarrow$ and $(s, a, s'') \in \rightarrow$ imply $s' = s''$.
We write $A_{spec}$ to denote a specification. An IOLTS $A_{impl}$ is a (potentially incorrect) implementation of $A_{spec}$ if it has the same alphabet and is input-enabled, i.e., for all $s \in S$ and $a \in L_I$, there exists an $s' \in S$ with $(s, a, s') \in \rightarrow$. The set of all possible implementations of $A$ is denoted by $\mathrm{IMPL}_A$.

Definition 19 (Paths and traces). Let $A = (S, s^0, L, \rightarrow)$ be an IOLTS; then a path in $A$ is a finite sequence $\pi = s_0\, a_1\, s_1\, a_2 \dots a_n\, s_n$, with $s_0 = s^0$ and $\forall i \in [0..n-1] : (s_i, a_{i+1}, s_{i+1}) \in \rightarrow$. The set of all paths in $A$ is denoted by $\mathrm{paths}_A$. Each path $\pi$ has an associated trace, denoted by $\mathrm{trace}(\pi)$ and given by the sequence of the actions of $\pi$. From all paths in $A$ we can deduce all traces in $A$: $\mathrm{traces}_A = \{\mathrm{trace}(\pi) \mid \pi \in \mathrm{paths}_A\}$. We use $A[\sigma]$ to denote the set of output actions that $A$ can provide after $\sigma$, i.e. $A[\sigma] = \{ b! \in L_O \mid \sigma b! \in \mathrm{traces}_A \}$.

Based on IOLTSs, we can again define a weighted fault specification.

Definition 20 (IOLTS weighted fault specification). An IOLTS weighted fault specification (IOWFS) is a tuple $W = \langle A_{spec}, w, p_{err}, p_{fail} \rangle$, with
- $A_{spec} = (S, s^0, L, \rightarrow)$ an IOLTS,
- $w : L^* \to \mathbb{R}^{\geq 0}$ a weight function assigning a fault weight to each trace $\sigma \in L^*$, such that $0 < \sum_{\sigma \in L^*} w(\sigma) < \infty$. The fault weight $w(\sigma)$ denotes the severity of a possible erroneous output after $\sigma$,
- $p_{err} : L^* \to [0, 1]$ an error function assigning to each trace $\sigma \in L^*$ the probability $p_{err}(\sigma)$ that an implementation can provide an incorrect output after $\sigma$, i.e. that for an arbitrary $A_{impl}$ it holds that $A_{impl}[\sigma] \not\subseteq A_{spec}[\sigma]$,
- $p_{fail} : L^* \to [0, 1]$ a failure function assigning to each trace $\sigma \in L^*$ the probability $p_{fail}(\sigma)$ that an arbitrary implementation $A_{impl}$ with $A_{impl}[\sigma] \not\subseteq A_{spec}[\sigma]$ responds incorrectly to $\sigma$.

An implementation of $W$ is an implementation of $A_{spec}$. Since $w$, $p_{err}$ and $p_{fail}$ all have an infinite domain, it is difficult to specify them. For this purpose, fault automata can be used [5].
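The out-set $A[\sigma]$ of Definition 19 can be sketched in Python. The vending-machine IOLTS below is purely illustrative; inputs end in `?` and outputs in `!`, and the dictionary encoding relies on the determinism required by Definition 18.

```python
# Sketch of Definition 19: A[sigma] = { b! | sigma b! in traces_A }, the
# outputs an IOLTS can provide after a trace. The IOLTS is illustrative.
trans = {
    ('s0', 'coin?'): 's1',
    ('s1', 'coffee!'): 's0',
    ('s1', 'tea!'): 's0',
}

def after(state, trace):
    """State reached from `state` by `trace`, or None if impossible."""
    for action in trace:
        state = trans.get((state, action))
        if state is None:
            return None
    return state

def outs(trace, s0='s0'):
    """A[trace]: output actions enabled after executing `trace` from s0."""
    s = after(s0, trace)
    if s is None:
        return set()
    return {a for (st, a) in trans if st == s and a.endswith('!')}
```

For instance, `outs(['coin?'])` yields `{'coffee!', 'tea!'}`, while a trace not in `traces_A` yields the empty set.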
These are IOLTSs combined with fault weights, which can be transformed into a weight function by either discounting or truncation. It is straightforward to extend fault automata to also include $p_{err}$ and $p_{fail}$ (no discounting is necessary for them). Alternatively, if a test suite $T$ is given, we only specify $w$, $p_{err}$ and $p_{fail}$ for the traces in $T$. Following ioco theory [18], we require test cases for IOLTSs to be fail fast: they stop directly after observing a failure. Test cases continually either perform an input action or observe which output action a system provides.

Definition 21 (Test cases and test suites).
(i) A test case $t$ for an IOLTS $A_{spec} = (S, s^0, L, \rightarrow)$ is a prefix-closed, finite subset of $L^*$, such that
- if $\sigma a? \in t$, then $\forall b \in L : b \neq a? \implies \sigma b \notin t$,
- if $\sigma a! \in t$, then $\forall b! \in L_O : \sigma b! \in t$,
- if $\sigma \notin \mathrm{traces}_{A_{spec}}$, then $\forall \sigma' \in L^+ : \sigma\sigma' \notin t$.
A test suite $T$ is a tuple of test cases, denoted by $\langle t_1, \dots, t_n \rangle$.
(ii) An execution of $t$ is a trace $\sigma \in t$ such that there is no $\rho \in t$ for which $\sigma \sqsubset \rho$. The set of all executions of $t$ is $\mathrm{exec}_t = \{ \sigma \in t \mid \nexists \rho \in t : \sigma \sqsubset \rho \}$. An observing trace of $t$ is a trace $\sigma \in t$ such that $\exists b \in L_O : \sigma b \in t$.
(iii) The verdict function $v_t : \mathrm{exec}_t \to \{\mathrm{pass}, \mathrm{fail}\}$ is defined for all $\sigma \in \mathrm{exec}_t$ by $v_t(\sigma) = \mathrm{pass}$ if $\sigma \in \mathrm{traces}_{A_{spec}}$, and $\mathrm{fail}$ otherwise.
(iv) For each execution $E = \langle \sigma_1, \dots, \sigma_n \rangle$ and trace $\sigma \in L^*$, we define $\mathrm{obs}(\sigma, E) = |\{ i \mid \exists b \in L_O : \sigma b \sqsubseteq \sigma_i \}|$ to be the number of times $E$ observes after $\sigma$. We use $\mathrm{obs}(\sigma, T)$ to denote the maximum number of times an execution of $T$ might observe after $\sigma$, i.e. $\mathrm{obs}(\sigma, T) = |\{ i \mid \exists b \in L_O : \sigma b \in t_i \}|$. The set of all observing traces of $T$ is given by $\mathrm{obs}_T = \bigcup_{t_i \in T} \{ \sigma \in t_i \mid \exists b \in L_O : \sigma b \in t_i \}$.

The definitions of the classification relation and the fault weight of an implementation can easily be lifted to IOLTSs (by substituting traces for inputs).

4.1 Underlying probability model

The function $p_{err}$ of an IOWFS induces a random variable $A_W$ over the equivalence classes of $\equiv_{A_{spec}}$, similar to the random variable $F_W$ for finite specifications.

Definition 22 ($A_W$). Let $W = \langle A_{spec}, w, p_{err}, p_{fail} \rangle$ be an IOWFS; then we define $A_W : \Omega \to \mathrm{IMPL}_{A_{spec}}/\equiv$ to be the random variable describing to which equivalence class an arbitrary implementation of $W$ belongs. Let $A_{impl}$ be an implementation of $W$; then
$P[A_W = [A_{impl}]] = \prod_{\sigma \in L^*,\, A_{impl}[\sigma] \not\subseteq A_{spec}[\sigma]} p_{err}(\sigma) \cdot \prod_{\sigma \in L^*,\, A_{impl}[\sigma] \subseteq A_{spec}[\sigma]} (1 - p_{err}(\sigma)).$

Although the infinite number of equivalence classes implies a probability of $0$ for each specific one, this will turn out not to be a problem. We again introduce a random variable denoting the execution of a test suite, similar to $V_{W,T}$. For the current model, however, we need to know which execution it yields instead of only whether it passes. After all, because of nondeterminism the same test suite might test different traces during different executions.

Definition 23 ($R_{W,T}$).
Let $W = \langle A_{spec}, w, p_{err}, p_{fail} \rangle$ be an IOWFS; then we define $R_{W,T} : \Omega \to (L^*)^*$ to be the random variable denoting the result of executing $T$ against an arbitrary implementation of $W$.

Note that the distribution of $R_{W,T}$ depends on the distribution of the outputs the system provides. However, we do not need the explicit distribution of $R_{W,T}$ here. A random variable whose distribution will be needed in some of the proofs to follow is $R^{\sigma,n}_{W,T} : \Omega \to \{\mathrm{pass}, \mathrm{fail}\}$, denoting the result of observing $n$ times after $\sigma$ on an arbitrary implementation of $W$ tested by $T$. Note that both its purpose and its probability distribution are very similar to those of $V^a_{W,T}$.
Lemma 2. Let $W = \langle A_{\mathrm{spec}}, w, p_{\mathrm{err}}, p_{\mathrm{fail}} \rangle$ be an IOWFS, $T$ a test suite for $A_{\mathrm{spec}}$, $\sigma \in L^*$ and $n \in \mathbb{N}$, then
\[ P[R^{\sigma,n}_{W,T} = \mathrm{pass}] = (1 - p_{\mathrm{fail}}(\sigma))^n \, p_{\mathrm{err}}(\sigma) + 1 - p_{\mathrm{err}}(\sigma). \]

Proof.
\begin{align*}
P[R^{\sigma,n}_{W,T} = \mathrm{pass}] &= P[R^{\sigma,n}_{W,T} = \mathrm{pass} \mid A_W[\sigma] \sim A_{\mathrm{spec}}[\sigma]] \cdot P[A_W[\sigma] \sim A_{\mathrm{spec}}[\sigma]] \\
&\quad + P[R^{\sigma,n}_{W,T} = \mathrm{pass} \mid A_W[\sigma] \not\sim A_{\mathrm{spec}}[\sigma]] \cdot P[A_W[\sigma] \not\sim A_{\mathrm{spec}}[\sigma]] \\
&= 1 \cdot (1 - p_{\mathrm{err}}(\sigma)) + (1 - p_{\mathrm{fail}}(\sigma))^n \, p_{\mathrm{err}}(\sigma). \qquad \square
\end{align*}

4.2 Risk

For IOLTSs the risk is defined for a test suite and an execution. Again, we look at the posterior error probability of a trace, but now after a correct execution.

Definition 24 (Risk). Let $W = \langle A_{\mathrm{spec}}, w, p_{\mathrm{err}}, p_{\mathrm{fail}} \rangle$ be an IOWFS, $T$ a test suite for $A_{\mathrm{spec}}$, and $E$ a correct execution of $T$. Then, the risk of an arbitrary implementation of $W$ after $T$ yielded $E$ is defined by
\[ \mathrm{risk}_W(T, E) = E[w(A_W) \mid R_{W,T} = E]. \]

Definition 25 (PEP). Let $W = \langle A_{\mathrm{spec}}, w, p_{\mathrm{err}}, p_{\mathrm{fail}} \rangle$ be an IOWFS, $T$ a test suite for $A_{\mathrm{spec}}$, $E$ a correct execution of $T$, and $\sigma \in L^*$. Then, the posterior error probability (PEP) of $\sigma$ after $T$ yields $E$ is defined by
\[ \mathrm{PEP}_W(\sigma, T, E) = P[A_W[\sigma] \not\sim A_{\mathrm{spec}}[\sigma] \mid R_{W,T} = E]. \]

Some executions might have performed an input action after $\sigma$, thereby not being able to detect potential incorrect behaviour directly after $\sigma$. Therefore, we have to count the number of executions reaching $\sigma$ and observing afterwards. Keeping this in mind, it is easy to see that the next proposition holds.

Proposition 4. Let $W = \langle A_{\mathrm{spec}}, w, p_{\mathrm{err}}, p_{\mathrm{fail}} \rangle$ be an IOWFS, $T$ a test suite for $A_{\mathrm{spec}}$, $E = \langle \sigma_1, \ldots, \sigma_n \rangle$ a correct execution of $T$, and $\sigma \in L^*$. Then
\[ \mathrm{PEP}_W(\sigma, T, E) = \frac{(1 - p_{\mathrm{fail}}(\sigma))^{\mathrm{obs}(\sigma,E)} \, p_{\mathrm{err}}(\sigma)}{(1 - p_{\mathrm{fail}}(\sigma))^{\mathrm{obs}(\sigma,E)} \, p_{\mathrm{err}}(\sigma) + 1 - p_{\mathrm{err}}(\sigma)}. \]

Proof.
\begin{align*}
\mathrm{PEP}_W(\sigma, T, E) &= P[A_W[\sigma] \not\sim A_{\mathrm{spec}}[\sigma] \mid R_{W,T} = E] \\
&= P[A_W[\sigma] \not\sim A_{\mathrm{spec}}[\sigma] \mid R^{\sigma,\mathrm{obs}(\sigma,E)}_{W,T} = \mathrm{pass}] \\
&= \frac{P[R^{\sigma,\mathrm{obs}(\sigma,E)}_{W,T} = \mathrm{pass} \mid A_W[\sigma] \not\sim A_{\mathrm{spec}}[\sigma]] \cdot P[A_W[\sigma] \not\sim A_{\mathrm{spec}}[\sigma]]}{P[R^{\sigma,\mathrm{obs}(\sigma,E)}_{W,T} = \mathrm{pass}]} \\
&= \frac{(1 - p_{\mathrm{fail}}(\sigma))^{\mathrm{obs}(\sigma,E)} \, p_{\mathrm{err}}(\sigma)}{(1 - p_{\mathrm{fail}}(\sigma))^{\mathrm{obs}(\sigma,E)} \, p_{\mathrm{err}}(\sigma) + 1 - p_{\mathrm{err}}(\sigma)}. \qquad \square
\end{align*}

Since $\mathrm{PEP}_W(\sigma, T, E)$ only depends on $W$, $\sigma$ and $\mathrm{obs}(\sigma, E)$, we use $\mathrm{PEP}_W(\sigma, n)$ to denote $\mathrm{PEP}_W(\sigma, T, E)$ with $\mathrm{obs}(\sigma, E) = n$.

The value of $\mathrm{risk}_W(T, E)$ can in principle again be computed by ranging over all traces, summing their fault weight multiplied by their PEP:
\[ \mathrm{risk}_W(T, E) = \sum_{\sigma \in L^*} w(\sigma) \cdot \mathrm{PEP}_W(\sigma, T, E). \]
However, as there are infinitely many traces, this is not computable in practice. As a solution, we first compute the initial risk, formally $\mathrm{risk}_W(\emptyset, \emptyset) = \sum_{\sigma \in L^*} w(\sigma) \cdot p_{\mathrm{err}}(\sigma)$. The algorithm introduced in [5] to compute the total coverage $\sum_{\sigma \in L^*} w(\sigma)$ can be used for this, by multiplying each fault weight in the fault automaton by the corresponding error probability. Then, for all traces $\sigma \in \mathrm{obs}_T$ we subtract the risk reduction that was obtained by $E$.

Theorem 3. Let $W = \langle A_{\mathrm{spec}}, w, p_{\mathrm{err}}, p_{\mathrm{fail}} \rangle$ be an IOWFS, $T$ a test suite for $A_{\mathrm{spec}}$, and $E$ a correct execution of $T$. Then
\[ \mathrm{risk}_W(T, E) = \mathrm{risk}_W(\emptyset, \emptyset) - \sum_{\sigma \in \mathrm{obs}_T} w(\sigma) \left( p_{\mathrm{err}}(\sigma) - \mathrm{PEP}_W(\sigma, T, E) \right). \]

Proof.
\begin{align*}
\mathrm{risk}_W(T, E) &= E[w(A_W) \mid R_{W,T} = E] \\
&= \sum_{[A_{\mathrm{impl}}] \in \mathrm{IMPL}_{A_{\mathrm{spec}}}/\sim} w([A_{\mathrm{impl}}]) \cdot P[A_W = [A_{\mathrm{impl}}] \mid R_{W,T} = E] \\
&= \sum_{[A_{\mathrm{impl}}] \in \mathrm{IMPL}_{A_{\mathrm{spec}}}/\sim} \; \sum_{\substack{\sigma \in L^* \\ A_{\mathrm{impl}}[\sigma] \not\sim A_{\mathrm{spec}}[\sigma]}} w(\sigma) \cdot P[A_W = [A_{\mathrm{impl}}] \mid R_{W,T} = E] \\
&= \sum_{\sigma \in L^*} w(\sigma) \sum_{\substack{[A_{\mathrm{impl}}] \in \mathrm{IMPL}_{A_{\mathrm{spec}}}/\sim \\ A_{\mathrm{impl}}[\sigma] \not\sim A_{\mathrm{spec}}[\sigma]}} P[A_W = [A_{\mathrm{impl}}] \mid R_{W,T} = E] \\
&= \sum_{\sigma \in L^*} w(\sigma) \cdot P[A_W[\sigma] \not\sim A_{\mathrm{spec}}[\sigma] \mid R_{W,T} = E] \\
&= \sum_{\sigma \in L^*} w(\sigma) \cdot \mathrm{PEP}_W(\sigma, T, E) \\
&= \sum_{\sigma \in L^*} w(\sigma) \left( \mathrm{PEP}_W(\sigma, \emptyset, \emptyset) - (\mathrm{PEP}_W(\sigma, \emptyset, \emptyset) - \mathrm{PEP}_W(\sigma, T, E)) \right) \\
&= \mathrm{risk}_W(\emptyset, \emptyset) - \sum_{\sigma \in L^*} w(\sigma) \left( \mathrm{PEP}_W(\sigma, \emptyset, \emptyset) - \mathrm{PEP}_W(\sigma, T, E) \right) \\
&= \mathrm{risk}_W(\emptyset, \emptyset) - \sum_{\sigma \in \mathrm{obs}_T} w(\sigma) \left( \mathrm{PEP}_W(\sigma, \emptyset, \emptyset) - \mathrm{PEP}_W(\sigma, T, E) \right) \\
&= \mathrm{risk}_W(\emptyset, \emptyset) - \sum_{\sigma \in \mathrm{obs}_T} w(\sigma) \left( p_{\mathrm{err}}(\sigma) - \mathrm{PEP}_W(\sigma, T, E) \right). \qquad \square
\end{align*}

Test optimisation with respect to risk. Due to nondeterministic outputs, the risk for IOLTSs has been defined based on a test suite and an execution. However, to find an optimal test suite with respect to the risk, we need to estimate the risk without knowing the execution in advance. It is therefore necessary to estimate the output behaviour of a system. We introduce the random variable $O^\sigma_W : \Omega \to L_O$ to denote the output of the system when observing after $\sigma$. We use $p_{\mathrm{out}}(\sigma, b)$ as shorthand for $P[O^\sigma_W = b]$.

Definition 26 ($O^\sigma_W$). Let $W = \langle A_{\mathrm{spec}}, w, p_{\mathrm{err}}, p_{\mathrm{fail}} \rangle$ be an IOWFS and $\sigma \in L^*$, then we define $O^\sigma_W : \Omega \to L_O$ to be the random variable describing the output obtained when observing after $\sigma$. Let $a! \in L_O$ and $b? \in L_I$, then $p_{\mathrm{out}}(\sigma, a!) = P[O^\sigma_W = a!]$ and, for technical reasons, $p_{\mathrm{out}}(\sigma, b?) = 1$.

Using $p_{\mathrm{out}}$, it is trivial to find the probability $p_{\mathrm{reach}}(\sigma)$ that a trace $\sigma$ is reached during a test case in which it is included.

Proposition 5. Let $W = \langle A_{\mathrm{spec}}, w, p_{\mathrm{err}}, p_{\mathrm{fail}} \rangle$ be an IOWFS, then for all $\sigma = a_1 a_2 \ldots a_n \in L^*$ it holds that
\[ p_{\mathrm{reach}}(\sigma) = \prod_{i=1}^{n} p_{\mathrm{out}}(a_1 \ldots a_{i-1}, a_i), \]
using the convention that $a_1 \ldots a_0$ is the empty string $\epsilon$.

Now, letting $\mathrm{risk}_W(T)$ be the random variable denoting the risk of a random execution of $T$, the following proposition holds.

Proposition 6. Let $W = \langle A_{\mathrm{spec}}, w, p_{\mathrm{err}}, p_{\mathrm{fail}} \rangle$ be an IOWFS and $T$ a test suite for $A_{\mathrm{spec}}$. Then
\[ E[\mathrm{risk}_W(T)] = \mathrm{risk}_W(\emptyset, \emptyset) - \sum_{\sigma \in \mathrm{obs}_T} w(\sigma) \sum_{i=0}^{\mathrm{obs}(\sigma,T)} \binom{\mathrm{obs}(\sigma,T)}{i} \, p_{\mathrm{reach}}(\sigma)^i \, (1 - p_{\mathrm{reach}}(\sigma))^{\mathrm{obs}(\sigma,T)-i} \left( p_{\mathrm{err}}(\sigma) - \mathrm{PEP}_W(\sigma, i) \right). \]
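The closed forms of Propositions 4 and 5 are straightforward to evaluate. A small sketch with our own helper names (not part of the paper):

```python
def pep(p_err, p_fail, n):
    """PEP_W(sigma, n): posterior error probability of a trace after a
    correct execution that observed n times after it (Proposition 4)."""
    hit = (1.0 - p_fail) ** n * p_err
    return hit / (hit + 1.0 - p_err)

def p_reach(trace, p_out):
    """Probability that a test case containing the trace reaches it:
    the product of p_out over its prefixes (Proposition 5)."""
    p = 1.0
    for i, action in enumerate(trace):
        p *= p_out(tuple(trace[:i]), action)  # p_out(prefix, next action)
    return p

# Observing more often after a trace never raises its posterior
# error probability.
assert pep(0.3, 0.4, 2) < pep(0.3, 0.4, 0)
```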
To obtain this formula, we started with Theorem 3 and replaced $p_{\mathrm{err}}(\sigma) - \mathrm{PEP}_W(\sigma, T, E)$ (the error probability reduction for $E$) by the expected error probability reduction for an arbitrary execution. This reduction depends on the number of times the execution observes after $\sigma$, which is by definition between 0 and $\mathrm{obs}(\sigma, T)$. The probability of observing $i$ times equals that of obtaining $i$ successes in a binomially distributed experiment with $n = \mathrm{obs}(\sigma, T)$ and $p = p_{\mathrm{reach}}(\sigma)$. Using these observations, the familiar formula for the binomial distribution, and Theorem 3, Proposition 6 follows.

Since each trace $\sigma$ contributes $w(\sigma) \cdot p_{\mathrm{err}}(\sigma)$ to $\mathrm{risk}_W(\emptyset, \emptyset)$, Proposition 6 implies that, if $\mathrm{obs}(\sigma, T) = n$, the contribution of $\sigma$ to the expected risk is
\[ c(\sigma, n) = w(\sigma) \sum_{i=0}^{n} \binom{n}{i} \, p_{\mathrm{reach}}(\sigma)^i \, (1 - p_{\mathrm{reach}}(\sigma))^{n-i} \, \mathrm{PEP}_W(\sigma, i). \]
Note that $c(\sigma, n)$ only denotes the contribution of $\sigma$ itself, not of its prefixes. Adding another test case to $T$ that observes after $\sigma$ yields an expected risk reduction (ERR) of $r(\sigma, n) = c(\sigma, n) - c(\sigma, n+1)$.

We can now construct $T^{\mathrm{opt\text{-}r}}_{W,d} = \mathrm{argmin}_{T = \langle t_1, \ldots, t_k \rangle} \, E[\mathrm{risk}_W(T)]$, i.e., a test suite of size $k$ with minimal expected risk. As an extra restriction, we limit the depth of each test case to $d$ to obtain a finite test suite. To compute the best test case of depth $d$ (i.e., the one with maximal ERR), we first derive a recursive equation for the maximal ERR obtained by such a test case. In order to express this as a function of the maximal ERR of a test case of depth $d-1$, we also have to keep track of the trace observed thus far. We therefore let $M_T(\sigma, d)$ denote the maximal ERR that can be obtained by adding a test case of depth $d$ with history $\sigma$ to some test suite $T$. Note that we are interested in $M_T(\epsilon, d)$.

First of all, trivially $M_T(\sigma, 0) = 0$. For an arbitrary depth $d > 0$, $M_T(\sigma, d)$ is calculated by looking at the first step. If we start by performing an input $a? \in L_I$, we obtain no ERR directly and are left with $M_T(\sigma a?, d-1)$.
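The contribution $c(\sigma, n)$ and the expected risk reduction $r(\sigma, n)$ can be evaluated directly from the binomial formula above. A sketch with our own helper names:

```python
from math import comb

def contribution(w, p_reach, p_err, p_fail, n):
    """c(sigma, n): expected contribution of a trace to the risk when
    the test suite can observe after it at most n times."""
    def pep(i):  # posterior error probability after i observations
        hit = (1.0 - p_fail) ** i * p_err
        return hit / (hit + 1.0 - p_err)
    return w * sum(comb(n, i) * p_reach ** i * (1.0 - p_reach) ** (n - i) * pep(i)
                   for i in range(n + 1))

def err(w, p_reach, p_err, p_fail, n):
    """r(sigma, n): expected risk reduction of adding one more test case
    that observes after the trace."""
    return (contribution(w, p_reach, p_err, p_fail, n)
            - contribution(w, p_reach, p_err, p_fail, n + 1))

# With n = 0 the contribution is just w * p_err; extra observations
# can only reduce it.
assert err(2.0, 0.8, 0.3, 0.4, 0) > 0
```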
If we observe, we directly obtain an ERR of $r(\sigma, \mathrm{obs}(\sigma, T))$. Then, with probability $p_{\mathrm{out}}(\sigma, b!)$ the system performs a $b!$, leaving us with $M_T(\sigma b!, d-1)$. Formalising these observations, we obtain
\[ M_T(\sigma, d) = \begin{cases} 0 & \text{if } d = 0 \\ \max \left( \displaystyle\max_{a? \in L_I} M_T(\sigma a?, d-1), \;\; r(\sigma, \mathrm{obs}(\sigma, T)) + \sum_{b! \in L_O} p_{\mathrm{out}}(\sigma, b!) \cdot M_T(\sigma b!, d-1) \right) & \text{if } d > 0 \end{cases} \]

To construct $T^{\mathrm{opt\text{-}r}}_{W,d}$, we start with $T = \emptyset$ and compute $M_T(\epsilon, d)$: the maximal expected risk reduction to be obtained by a test case of depth $d$. Algorithmically, we start bottom-up by calculating $M_T(\sigma_{d-1}, 1)$ for all $\sigma_{d-1} \in L^*$ of length $d-1$. Based on these values, we can calculate $M_T(\sigma_{d-2}, 2)$ for all $\sigma_{d-2} \in L^*$ of length $d-2$. Working our way up, we arrive at $M_T(\epsilon, d)$. During the calculations we record for each $M_T(\rho, l)$ whether it was obtained by observation or by an input (and which one). Then, the first step of the best test case $t_1$ is just the action that was chosen for $M_T(\epsilon, d)$. Next, supposing that $a \in L$ was performed, we perform an input or observe, according to which was chosen for $M_T(a, d-1)$, and so on.

To find the best test case $t_2$ to add to $T$ after it already contains $t_1$, we set $T = \{t_1\}$ and repeat the procedure described above. Since only a part of the state space changes, this can be calculated efficiently. We continue in this way until $|T| = k$ and then set $T^{\mathrm{opt\text{-}r}}_{W,d} = T$.

The complexity of finding the first test case is in $O(|L|^k)$. In the worst case (when only output actions exist), the complexity of finding the second test case is also in $O(|L|^k)$, but in practice it might be better. However, the complexity could be improved to polynomial if the optimisation is done directly on the fault automaton instead of on the unfolded state space. For more details, we refer to [5].

4.3 Actual coverage

Actual coverage for IOLTSs is similar to actual coverage for functions. We reuse the concept of relative error probability reduction in the definition of absolute actual coverage. Total coverage and relative actual coverage are unchanged (see [5] for an algorithm to compute totcov for IOLTSs).

Definition 27 (Absolute actual coverage). Let $W = \langle A_{\mathrm{spec}}, w, p_{\mathrm{err}}, p_{\mathrm{fail}} \rangle$ be an IOWFS, $T$ a test suite for $A_{\mathrm{spec}}$, and $E$ an execution of $T$. Then we define
\[ \mathrm{abscov}_W(T, E) = \sum_{\sigma \in L^*} w(\sigma) \cdot \mathrm{REPR}(\sigma, T, E). \]

Now again we can easily see that the following proposition holds.

Proposition 7. Let $W = \langle A_{\mathrm{spec}}, w, p_{\mathrm{err}}, p_{\mathrm{fail}} \rangle$ be an IOWFS, $T$ a test suite for $A_{\mathrm{spec}}$, and $E$ a correct execution of $T$. Then
\[ \mathrm{abscov}_W(T, E) = \sum_{\sigma \in \mathrm{obs}_T} w(\sigma) \left( 1 - \frac{(1 - p_{\mathrm{fail}}(\sigma))^{\mathrm{obs}(\sigma,E)}}{1 - p_{\mathrm{err}}(\sigma) + (1 - p_{\mathrm{fail}}(\sigma))^{\mathrm{obs}(\sigma,E)} \, p_{\mathrm{err}}(\sigma)} \right). \]

Optimisation with respect to relcov can be done in exactly the same way as optimisation with respect to risk.
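The recursion for $M_T$ used in both optimisations admits a direct memoised implementation. The sketch below is a top-down variant (the paper computes bottom-up); the data layout and helper signatures are our assumptions, not the paper's:

```python
def best_test_step(sigma, d, inputs, outputs, r, obs, p_out, memo=None):
    """Return (value, action): value equals M_T(sigma, d), and action is
    the first step of an optimal depth-d test case ('obs' or an input)."""
    if memo is None:
        memo = {}
    if d == 0:
        return 0.0, None
    key = (sigma, d)
    if key not in memo:
        # Option 1: apply an input a?, gaining no ERR now.
        candidates = [
            (best_test_step(sigma + (a,), d - 1, inputs, outputs,
                            r, obs, p_out, memo)[0], a)
            for a in inputs]
        # Option 2: observe, gaining r(sigma, obs(sigma)) now, then
        # recurse on each possible output, weighted by its probability.
        gain = r(sigma, obs(sigma)) + sum(
            p_out(sigma, b) * best_test_step(sigma + (b,), d - 1, inputs,
                                             outputs, r, obs, p_out, memo)[0]
            for b in outputs)
        candidates.append((gain, "obs"))
        memo[key] = max(candidates)
    return memo[key]
```

Recording the chosen action at each memo entry corresponds to the bookkeeping described above, from which the optimal test case is read off step by step.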
5 Conclusions and future work

While testing is an important part of today's software development process, little research has been devoted to the interpretation of a successful testing process. In this paper, we introduced a weighted fault specification (WFS) to describe the required behaviour of a system and an estimation of its probabilistic behaviour. Based on such a WFS, we presented two measures: risk and actual coverage.
Risk denotes the confidence in the system after a successful testing process, whereas actual coverage denotes how much has been tested. We presented optimisation strategies with respect to both measures, enabling the tester to obtain a test suite of a given size with minimal risk or maximal actual coverage.

Our work gives rise to several directions for future research. First, it is crucial to validate our framework by creating tool support and applying it in a series of case studies. Second, it might be useful to include fault dependencies in our models. Third, it is interesting to investigate the possibilities of on-the-fly test derivation (as performed, e.g., by the tool TorX [4]) based on risk or actual coverage. This could yield a tool that, during testing, calculates probabilities and decides which branches to take for an optimal result.

References

[1] S. Amland. Risk-based testing: risk analysis fundamentals and metrics for software testing including a financial application case study. Journal of Systems and Software, 53(3).
[2] J. Bach. Heuristic risk-based testing. Software Testing and Quality Engineering Magazine, November.
[3] T. Ball. A theory of predicate-complete test coverage and generation. In Proc. of the 3rd Int. Symp. on Formal Methods for Components and Objects (FMCO 04), volume 3657 of LNCS, pages 1–22.
[4] A. Belinfante, J. Feenstra, R.G. de Vries, J. Tretmans, N. Goga, L.M.G. Feijs, S. Mauw, and L. Heerink. Formal test automation: A simple experiment. In Proc. of the 12th Int. Workshop on Testing Communicating Systems (IWTCS 99), volume 147 of IFIP Conference Proceedings. Kluwer.
[5] L. Brandán Briones, E. Brinksma, and M.I.A. Stoelinga. A semantic framework for test coverage. In Proc. of the 4th Int. Symp. on Automated Technology for Verification and Analysis (ATVA 06), volume 4218 of LNCS. Springer.
[6] T.L. Graves, A.F. Karr, J.S. Marron, and H. Siy. Predicting fault incidence using software change history.
IEEE Transactions on Software Engineering, 26(7).
[7] M.H. Halstead. Elements of Software Science. Elsevier Science.
[8] D. Lee and M. Yannakakis. Principles and methods of testing finite state machines – a survey. Proceedings of the IEEE, volume 84.
[9] Y.K. Malaiya and J. Denton. Requirements volatility and defect density. In Proc. of the 10th Int. Symp. on Software Reliability Engineering (ISSRE 99). IEEE Computer Society.
[10] T.J. McCabe. A complexity measure. IEEE Transactions on Software Engineering, 2(4).
[11] K.W. Miller, L.J. Morell, R.E. Noonan, S.K. Park, D.M. Nicol, B.W. Murrill, and M. Voas. Estimating the probability of failure when testing reveals no failures. IEEE Transactions on Software Engineering, 18(1):33–43.
[12] G.J. Myers, C. Sandler, T. Badgett, and T.M. Thomas. The Art of Software Testing, Second Edition. Wiley.
[13] L. Nachmanson, M. Veanes, W. Schulte, N. Tillmann, and W. Grieskamp. Optimal strategies for testing nondeterministic systems. SIGSOFT Software Engineering Notes, 29(4):55–64, 2004.
[14] M. Newman. Software errors cost U.S. economy 59.5 billion dollars annually; NIST assesses technical needs of industry to improve software testing. Press release, http://
[15] F. Redmill. Exploring risk-based testing and its implications. Software Testing, Verification and Reliability, 14(1):3–15.
[16] M. Timmer. Evaluating and predicting actual test coverage. Master's thesis, University of Twente, The Netherlands.
[17] G.J. Tretmans. Test generation with inputs, outputs and repetitive quiescence. Software Concepts and Tools, 17(3).
[18] H. Ural. Formal methods for test sequence generation. Computer Communications, 15(5).
[19] J. Voas, L. Morell, and K. Miller. Predicting where faults can hide from testing. IEEE Software, 8(2):41–48, 1991.
Introduction Finite Automata DFA, regular languages Nondeterminism, NFA, subset construction Regular Epressions Synta, Semantics Relationship to regular languages Properties of regular languages Pumping
More informationNotes on generating functions in automata theory
Notes on generating functions in automata theory Benjamin Steinberg December 5, 2009 Contents Introduction: Calculus can count 2 Formal power series 5 3 Rational power series 9 3. Rational power series
More informationIntroduction to Temporal Logic. The purpose of temporal logics is to specify properties of dynamic systems. These can be either
Introduction to Temporal Logic The purpose of temporal logics is to specify properties of dynamic systems. These can be either Desired properites. Often liveness properties like In every infinite run action
More informationcse303 ELEMENTS OF THE THEORY OF COMPUTATION Professor Anita Wasilewska
cse303 ELEMENTS OF THE THEORY OF COMPUTATION Professor Anita Wasilewska LECTURE 1 Course Web Page www3.cs.stonybrook.edu/ cse303 The webpage contains: lectures notes slides; very detailed solutions to
More informationX-machines - a computational model framework.
Chapter 2. X-machines - a computational model framework. This chapter has three aims: To examine the main existing computational models and assess their computational power. To present the X-machines as
More informationApproximation Metrics for Discrete and Continuous Systems
University of Pennsylvania ScholarlyCommons Departmental Papers (CIS) Department of Computer & Information Science May 2007 Approximation Metrics for Discrete Continuous Systems Antoine Girard University
More informationCONCATENATION AND KLEENE STAR ON DETERMINISTIC FINITE AUTOMATA
1 CONCATENATION AND KLEENE STAR ON DETERMINISTIC FINITE AUTOMATA GUO-QIANG ZHANG, XIANGNAN ZHOU, ROBERT FRASER, LICONG CUI Department of Electrical Engineering and Computer Science, Case Western Reserve
More informationarxiv: v2 [cs.fl] 29 Nov 2013
A Survey of Multi-Tape Automata Carlo A. Furia May 2012 arxiv:1205.0178v2 [cs.fl] 29 Nov 2013 Abstract This paper summarizes the fundamental expressiveness, closure, and decidability properties of various
More informationLearning k-edge Deterministic Finite Automata in the Framework of Active Learning
Learning k-edge Deterministic Finite Automata in the Framework of Active Learning Anuchit Jitpattanakul* Department of Mathematics, Faculty of Applied Science, King Mong s University of Technology North
More informationA MODEL-THEORETIC PROOF OF HILBERT S NULLSTELLENSATZ
A MODEL-THEORETIC PROOF OF HILBERT S NULLSTELLENSATZ NICOLAS FORD Abstract. The goal of this paper is to present a proof of the Nullstellensatz using tools from a branch of logic called model theory. In
More informationDecentralized Control of Discrete Event Systems with Bounded or Unbounded Delay Communication
Decentralized Control of Discrete Event Systems with Bounded or Unbounded Delay Communication Stavros Tripakis Abstract We introduce problems of decentralized control with communication, where we explicitly
More informationA Note on the Complexity of Network Reliability Problems. Hans L. Bodlaender Thomas Wolle
A Note on the Complexity of Network Reliability Problems Hans L. Bodlaender Thomas Wolle institute of information and computing sciences, utrecht university technical report UU-CS-2004-001 www.cs.uu.nl
More informationAcceptance Criteria for Critical Software Based on Testability Estimates and Test Results
Acceptance Criteria for Critical Software Based on Testability Estimates and Test Results Antonia Bertolino Istituto di Elaborazione della Informazione, CR Pisa, Italy Lorenzo Strigini Centre for Software
More informationOn Recognizable Languages of Infinite Pictures
On Recognizable Languages of Infinite Pictures Equipe de Logique Mathématique CNRS and Université Paris 7 JAF 28, Fontainebleau, Juin 2009 Pictures Pictures are two-dimensional words. Let Σ be a finite
More informationAlgebraic Trace Theory
Algebraic Trace Theory EE249 Presented by Roberto Passerone Material from: Jerry R. Burch, Trace Theory for Automatic Verification of Real-Time Concurrent Systems, PhD thesis, CMU, August 1992 October
More informationLTL is Closed Under Topological Closure
LTL is Closed Under Topological Closure Grgur Petric Maretić, Mohammad Torabi Dashti, David Basin Department of Computer Science, ETH Universitätstrasse 6 Zürich, Switzerland Abstract We constructively
More informationMetamorphosis of Fuzzy Regular Expressions to Fuzzy Automata using the Follow Automata
Metamorphosis of Fuzzy Regular Expressions to Fuzzy Automata using the Follow Automata Rahul Kumar Singh, Ajay Kumar Thapar University Patiala Email: ajayloura@gmail.com Abstract To deal with system uncertainty,
More informationClosure Under Reversal of Languages over Infinite Alphabets
Closure Under Reversal of Languages over Infinite Alphabets Daniel Genkin 1, Michael Kaminski 2(B), and Liat Peterfreund 2 1 Department of Computer and Information Science, University of Pennsylvania,
More informationLecture Notes on Software Model Checking
15-414: Bug Catching: Automated Program Verification Lecture Notes on Software Model Checking Matt Fredrikson André Platzer Carnegie Mellon University Lecture 19 1 Introduction So far we ve focused on
More informationAn integration testing method that is proved to find all faults
An integration testing method that is proved to find all faults Florentin Ipate & Mike Holcombe Formal Methods and Software Engineering (FORMSOFT) Group Department of Computer Science University of Sheffield,
More informationChapter 3 Deterministic planning
Chapter 3 Deterministic planning In this chapter we describe a number of algorithms for solving the historically most important and most basic type of planning problem. Two rather strong simplifying assumptions
More informationDeterministic Finite Automata. Non deterministic finite automata. Non-Deterministic Finite Automata (NFA) Non-Deterministic Finite Automata (NFA)
Deterministic Finite Automata Non deterministic finite automata Automata we ve been dealing with have been deterministic For every state and every alphabet symbol there is exactly one move that the machine
More informationRelational completeness of query languages for annotated databases
Relational completeness of query languages for annotated databases Floris Geerts 1,2 and Jan Van den Bussche 1 1 Hasselt University/Transnational University Limburg 2 University of Edinburgh Abstract.
More informationDecentralized Control of Discrete Event Systems with Bounded or Unbounded Delay Communication 1
Decentralized Control of Discrete Event Systems with Bounded or Unbounded Delay Communication 1 Stavros Tripakis 2 VERIMAG Technical Report TR-2004-26 November 2004 Abstract We introduce problems of decentralized
More informationNew Minimal Weight Representations for Left-to-Right Window Methods
New Minimal Weight Representations for Left-to-Right Window Methods James A. Muir 1 and Douglas R. Stinson 2 1 Department of Combinatorics and Optimization 2 School of Computer Science University of Waterloo
More informationHow to Pop a Deep PDA Matters
How to Pop a Deep PDA Matters Peter Leupold Department of Mathematics, Faculty of Science Kyoto Sangyo University Kyoto 603-8555, Japan email:leupold@cc.kyoto-su.ac.jp Abstract Deep PDA are push-down automata
More information