Ontology-based Unit Test Generation


Ontology-based Unit Test Generation

by

Valeh Hosseinzadeh Nasser

B.Sc., Amirkabir University of Technology, 2007

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

Master of Computer Science

In the Graduate Academic Unit of the Faculty of Computer Science

Supervisor(s): Weichang Du, Ph.D., Computer Science; Dawn MacIsaac, Ph.D., Computer Science
Examining Board: Przemyslaw R. Pochec, Ph.D., Computer Science, Chair; Harold Boley, Ph.D., Computer Science; Yevgen Biletskiy, Ph.D., Electrical and Computer Engineering

This thesis is accepted.

Dean of Graduate Studies

THE UNIVERSITY OF NEW BRUNSWICK

September, 2009

Valeh Hosseinzadeh Nasser, 2009

Dedication

To my beloved husband, who has always been an inspiration to me, and my dear parents and brother, who have given me great support and love.

Valeh H. Nasser

Abstract

Various software systems have different test requirements. In order to specify adequate levels of testing, coverage criteria are used. The knowledge that is referred to by coverage criteria for test case selection is defined in test oracles. This thesis is devoted to the application of knowledge engineering techniques to facilitate the enrichment of test oracles with test experts' mental models of error-prone aspects of software, and the definition of custom coverage criteria. The test oracles are represented in ontologies, which are highly extensible. The coverage criteria are written in a rule language, using the vocabulary defined by the test oracle ontology. This approach makes it possible for test experts to add knowledge to the test oracles and compose new coverage criteria. To decouple the knowledge represented in test oracles from the test selection algorithms, reasoning is used for test case selection. Prevalent test case generation technologies are then used for generating the test cases. The focus of this thesis is on unit testing based on UML state machines.

Acknowledgements

I would like to express profound gratitude to my supervisors, Dr. Weichang Du and Dr. Dawn MacIsaac, for their recommendations, encouragement, and support throughout the course of this thesis. I am also highly thankful to Dr. Harold Boley for his invaluable suggestions.

Valeh H. Nasser

Table of Contents

Dedication
Abstract
Acknowledgments
Table of Contents
List of Tables
List of Figures

1 Introduction
   Test Generation and the Role of Test Experts
   Thesis Scope
   Structure of Thesis

2 Background
   Specification of What Needs to Be Tested
      UML State Machines
      Coverage Criteria for State-machine-based Unit Testing
      Mapping UML to OWL
   Identification of Test Objectives Through Reasoning
   2.3 Generation of Test Cases with Artificial Intelligence Planning
      PDDL 2.1
      A Mapping between PDDL and UML State Machine

3 An Ontology-based Method for Software Testing
   Method Overview
   Syntax and Semantics of Specifications
      Behavioral Model Ontology
      Expert Knowledge Ontology
      Test Objectives
      Coverage Criteria Rules
      Abstract Test Suite Ontology
      Redundancy Checking Rule Templates
      Implementation Knowledge Ontology
      Executable Test Suite
   Transformation Phases
      Test Objective Generation Phase
      Redundancy Checking Phase
      Abstract Test Suite Ontology Generation Phase
      Executable Test Suite Generation Phase
   A Simple Example: Elevator Door
      Specifications
         Behavioral Model Ontology
         Coverage Criteria Rule
         Test Objectives
         Expert Knowledge Ontology
         Abstract Test Suite Ontology
         Redundancy Checking Rule Templates
         Implementation Knowledge Ontology
         Executable Test Suite
      Transformation Phases
         Test Objective Generation Phase
         Redundancy Checking Phase
         Abstract Test Suite Ontology Generation Phase
         Executable Test Suite Generation Phase
   Summary

4 System Design
   System Overview
   Design Classes
      Test Objective Generation Subsystem
      Redundancy Checking Subsystem
      Test Case Generation Subsystem
   System Operation
      System Operation Scenario
   Summary

5 System Implementation
   Realization of Design Classes
   Detailed Design
      The teststructuregenerator.generator Package
      The teststructuregenerator.assessment Package
      The teststructuregenerator.common Package
      The testcasegenerator.plannerinit Package
      5.2.5 The testcasegenerator.plannerinit.datastructures Package
      The testcasegenerator.plannerinit.datastructures.PDDL Package
      The testcasegenerator.plannerrunner Package
      The testcasegenerator.testwriter Package
   Summary

6 System Demonstration and Evaluation
   Case Study
      Case Study: Traffic Light Class
      Generated Test Suites
      Limitations
   Extensibility
      Examples of Extension of Test Oracle with Expert Knowledge
      Unit Testing Coverage Criteria from the Literature
      Test Design Based on an Error Taxonomy
   Summary

7 Conclusions

References

Appendix A The Syntax of the Specifications
   A.1 State Machine OWL Ontology TBox
   A.2 Syntax of Coverage Criteria Rules
   A.3 Expert Knowledge Ontology TBox
   A.4 Syntax of Test Objectives
   A.5 Test Suite OWL Ontology TBox
   A.6 Redundancy Checking Rule Template Syntax
   A.7 Implementation Knowledge OWL Ontology TBox
   A.8 The Structure of the JUnit Code

Appendix B Elevator Door Class Ontologies
   B.1 Door State Machine OWL Ontology ABox
   B.2 Door Test Suite OWL Ontology ABox
   B.3 Door Implementation Knowledge Ontology
   B.4 Door JUnit Test Suite

Appendix C Code for Using Jena API, OO jDREW API and POSL Generation
   C.1 Reading an OWL File with Jena API
   C.2 Writing a POSL File
   C.3 Creating the OWL Test Suite with Jena API
   C.4 Using OO jDREW for Reasoning

Appendix D Traffic Light Example
   D.1 Traffic Light State Machine OWL Ontology ABox
   D.2 Test Objectives and Corresponding Paths for the All Transition Coverage
   D.3 Test Objectives and Corresponding Paths for the All Transition Pair Coverage

Appendix E Unit Testing Coverage Criteria from the Literature in POSL

Vita

List of Tables

2.1 Several UML state-machine-based coverage criteria
2.2 Mapping of the UML elements to OWL in ODM
2.3 Specification of the UML Transition class and Effect property in OWL (from [1])
Mapping the UML state machine specification to PDDL
Semantics of classes of state machine ontology TBox
Semantics of properties of state machine ontology TBox
Examples of test objectives
Semantics of classes of abstract test suite ontology TBox
Semantics of properties of abstract test suite ontology TBox
Semantics of classes of implementation knowledge ontology TBox
Semantics of properties of implementation knowledge ontology TBox
A.1 Mapping between SHOIQ(D) and Horn Logic statements (from Grosof et al. [2])
A.2 Syntax of test objectives

List of Figures

1.1 Scope space of testing activity
1.2 Data flow diagram of an automated test case generator
1.3 Levels of control of test experts over automated test generation
2.1 RuleML example from [3]
2.2 The UML state machine superstructure overview from [4]
2.3 A PDDL 2.1 example [5]
An example of a UML state machine and the equivalent PDDL specification
Phases of transformation of specifications
Part of TBox of state machine ontology
A general expert knowledge ontology
Part of TBox of the abstract test suite ontology
Part of TBox of implementation knowledge ontology
Elevator door class and its state machine
Part of ABox of the door state machine
Part of ABox of implementation knowledge ontology of the Door class
High-level data flow diagram of the system
Technologies for realizing the data flow diagram of the system
4.3 The class diagram and activity diagram of the test objective generation subsystem
The class diagram and activity diagram of the redundancy checking subsystem
Test case generation subsystem
The activity diagram of the test case generation subsystem
The high-level activity diagram of system operation
Mapping of design classes to implementation classes
The teststructuregenerator.generator package
The classes of the teststructuregenerator.assessment package
The classes of the testcasegenerator.plannerinit package
The testcasegenerator.plannerinit package
The testcasegenerator.plannerinit.datastructures package
The classes of the testcasegenerator.plannerinit.datastructures package
The testcasegenerator.plannerrunner package
The testcasegenerator.testwriter package
The classes of the testcasegenerator.testwriter package
Traffic light state machine
Traffic light state machine ontology
Traffic light test suite ontology
Traffic light implementation knowledge
Expert knowledge ontology TBox: use of an unreliable library
Expert knowledge ontology TBox: boundary values
Extraction of expert knowledge from a portion of Beizer's bug taxonomy [6]
A.1 An ontology describing the vocabulary of a coverage criteria rule

Chapter 1

Introduction

The goal of software engineering is the production of software that conforms to quality and functional requirements [7]. A crucial software engineering activity is software testing, which examines the conformance of software to its requirements specifications. This activity can be very costly. The quality of a test suite is directly related to the number of errors it reveals, while it is negatively affected by its size. In order to reduce costs and elevate the quality of the testing activity, automated testing has been promoted since the 1970s [8]. In automated testing, offering a test expert the opportunity to input their knowledge can assist in the generation of a high-quality test suite. This knowledge can include error-prone aspects of the software and a specification of what needs to be tested, based on known priorities. This thesis investigates the use of knowledge engineering to increase the control of a test expert over the generated test suite.

1.1 Test Generation and the Role of Test Experts

The scope of a testing activity can be specified in a three-dimensional space, as depicted in Figure 1.1. The three axes specify different aspects of a testing method

and scope can be specified by a set of points in that space. The X-axis specifies what is being tested, which can be a unit, an integration of units, or the system. The Y-axis specifies the software artifact based on which the test cases are generated. This source, which specifies the behavior of the system under test, is called a test oracle [9]. Test oracles can be code (in white-box testing) [10, 11], design (in gray-box or model-based testing) [11, 12], or requirements (in black-box testing) [10, 11]. The Z-axis is the coverage criteria specification, which denotes the criteria for the tests to be generated. The specification of coverage criteria must be based on knowledge available through the test oracle. For instance, if a coverage criterion requires that all concurrency relations be tested, this information must be available in the test oracle. The focus of this work is on model-based unit testing, which is testing the smallest unit of the system under test based on its abstract model specification. In particular, the scope is limited to exploiting knowledge engineering to enhance the generation of unit tests from a UML state machine representation of a unit under test.

Figure 1.1: Scope space of testing activity. X: the software under test (unit, integration, system); Y: the software artifact on which the generated tests are based, i.e. the test oracle (code, design, requirements); Z: the specification of what tests should be generated, i.e. the coverage criteria (e.g. GUI, concurrency, code coverage, boundary testing, exceptions).

An automated test generator generates a test suite, which is a collection of test cases based on the test oracle and coverage criteria specifications, as depicted in Figure 1.2. Different test case generators provide different means for a test expert to

intervene and control what test cases are generated, as depicted in Figure 1.3. One method is to allow test experts to identify test cases directly. While this method may be required in some cases, it is not efficient. A second method is to provide support for test experts to choose among a selection of coverage criteria, but this method can be overly restrictive. A third method, which is promoted in this work, is to provide a language for test experts to specify their own coverage criteria rules and to extend test oracles with the extra knowledge that may be necessary to address the criteria.

Figure 1.2: Data flow diagram of an automated test case generator. The test-case generator takes as input a test oracle (a specification of the software under test, e.g. the UML state machine of a class) and coverage criteria (a specification of what tests should be generated, e.g. cover every transition of the state machine) and produces a test suite.

Figure 1.3: Levels of control of test experts over automated test generation: identify test cases explicitly; choose coverage criteria rules; extend the test oracle and compose coverage criteria rules.

The third method (i.e. extending test oracles and defining custom coverage criteria), which provides the highest level of control to the test experts, can enhance

the quality of generated test suites, owing to the fact that test oracles are abstract representations of software, and the removal of essential knowledge caused by a poor abstraction can be a barrier to the identification of error-prone test cases [13]. Benz [13] demonstrates that the utilization of a test oracle that includes the error-prone aspects of software, together with domain-specific coverage criteria, can enhance the quality of the generated test suite. Error-prone aspects, also used in Risk Based Testing [14], are software elements that are likely to produce an error and can be: (1) domain-specific (such as concurrent methods, database replications, and network connections), (2) based on general test guidelines and experience (such as boundary values), or (3) system-specific and revealed in interactions of testers with developers and designers (such as the use of an unreliable library).

1.2 Thesis Scope

The objective of this research is to provide a method for test experts to extend test oracles and specify custom coverage criteria for unit testing. For this purpose, the application of knowledge engineering in model-based unit testing is explored and an ontology-based test case generation method is developed. The test oracle considered is a UML state machine, also called a statechart, which is a design-level abstraction of a unit's behavior. The use of knowledge engineering allows decoupling the test oracle and coverage criteria specifications from the test selection algorithms; hence, it makes it possible to extend test oracles and define coverage criteria. Both test oracles and expert knowledge are defined in ontologies. Coverage criteria rules are defined based on the vocabulary specified in these ontologies. Reasoning algorithms are then used for test case selection. Finally, prevalent test case generation technologies are used for generating the test cases.

1.3 Structure of Thesis

The rest of this thesis is organized as follows: Chapter 2 provides background and a literature review. Chapter 3 introduces the ontology-based methodology for test case generation. Chapter 4 describes the design of a system based on the ontology-based test case generation method. Chapter 5 delineates the implementation of the system prototype. Chapter 6 demonstrates the performance of the system in test case generation for a simple class and evaluates its extensibility. Chapter 7 concludes this work.

Chapter 2

Background

The ontology-based methodology for software testing is designed based on the principle of separation of concerns. An automated tool for software testing can be decomposed into three separate concerns: specification of what needs to be tested, identification of test objectives, and generation of test cases for the identified test objectives.

Specification of what needs to be tested is addressed by the definition of test oracles and coverage criteria, which specify, respectively, the correct behavior of the software and the requirements for the generated test suite [15]. This aspect of software testing is crucial because it impacts the quality of the generated test suite. In the ontology-based testing method, ontologies and rules are used for the specification of what needs to be tested.

The second aspect of test generation is the identification of test objectives based on the specification of what needs to be tested. A test objective delineates a single test case. There are several approaches to the identification of test objectives: explicit identification of test objectives by a test expert [16]; the use of identification algorithms with rules implicitly built into them [17]; provision of a language for defining coverage criteria rules and the use of identification algorithms that rely on the specification language [18]; and translating coverage criteria into temporal logic for use with a model checker to identify test objectives [19]. The second aspect can be tightly coupled with the first, because the identification algorithm is often tightly coupled with the specification of what needs to be tested. In order to decouple these two aspects, the ontology-based testing method uses ontology-based reasoning for the identification of test objectives.

The third aspect in test generation is the generation of test cases for the identified test objectives. Test cases can be generated from a test oracle with several approaches. One approach is to use graph traversal algorithms [17, 20]. In this approach the test oracle is translated into a graph, and a graph traversal algorithm is then used to generate paths in the graph. Another approach is to use model-checking tools [19]. A model checker is set to find a path (a test case) in the model that satisfies the specified requirements (delineated by a test objective). A third approach is to use AI planners [21]. Artificial intelligence (AI) planners are programs that, given a domain definition analogous to a state machine and a problem definition describing the properties of the goal state of the state machine, generate a plan to take the state machine to the goal state [22]. To use AI planners for test case generation, a test oracle (e.g. a state machine) is translated into a domain definition. The test objective, which specifies the test case to be generated, is translated into a problem specification. Then an AI planner is used to generate plans that reach the identified goals in the specified domains.

In the following sections, the technologies that are used to address these aspects of test case generation in the ontology-based test generation method are described.
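The graph-traversal approach can be sketched in a few lines of Java. This is an illustrative example, not the system described in this thesis: the class and method names (TransitionCoverage, pathCovering) and the two-state machine are our own toy stand-ins. A breadth-first search finds, for each transition, a shortest event sequence from the start state that ends by firing that transition, yielding one path per test objective in the spirit of All-Transitions coverage.

```java
import java.util.*;

public class TransitionCoverage {
    // A transition of the state machine: triggering event, source state, target state.
    record Transition(String event, String source, String target) {}

    /** BFS over the state graph: shortest event sequence from 'start' that ends by firing 'goal'. */
    static List<String> pathCovering(List<Transition> machine, String start, Transition goal) {
        Queue<String> frontier = new ArrayDeque<>();
        Map<String, List<String>> paths = new HashMap<>(); // shortest known path to each state
        frontier.add(start);
        paths.put(start, new ArrayList<>());
        while (!frontier.isEmpty()) {
            String s = frontier.poll();
            for (Transition t : machine) {
                if (!t.source().equals(s)) continue;
                List<String> p = new ArrayList<>(paths.get(s));
                p.add(t.event());
                if (t.equals(goal)) return p;       // test objective reached
                if (!paths.containsKey(t.target())) { // unvisited state
                    paths.put(t.target(), p);
                    frontier.add(t.target());
                }
            }
        }
        return null; // objective unreachable from the start state
    }

    public static void main(String[] args) {
        // A toy elevator-door-like machine with two states.
        Transition open = new Transition("open", "Closed", "Opened");
        Transition close = new Transition("close", "Opened", "Closed");
        List<Transition> machine = List.of(open, close);
        // One test objective per transition = All-Transitions coverage.
        for (Transition goal : machine) {
            System.out.println(goal.event() + " covered by " + pathCovering(machine, "Closed", goal));
        }
    }
}
```

Model checkers and AI planners replace this hand-written search with a generic engine, which is what allows the test objective language to grow without rewriting the traversal.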

2.1 Specification of What Needs to Be Tested

In this work the test oracle is an ontology-based representation of a UML state machine, which can be generated from existing UML state machines represented in XMI format. The method strives to support a variety of coverage criteria, which are represented in a rule-based language.

2.1.1 UML State Machines

A UML state machine is a model used for specifying the transitions in the state of a unit [4]. A basic UML state machine has a set of transitions and a set of states, one of which is a start state and one or more of which are final states. A transition has a source state and a destination state. It also has an event that triggers the transition, a guard that specifies the conditions under which the transition can be triggered, and an action that specifies the behavior of the system when the transition is taken. XML Metadata Interchange (XMI) [23] is an XML-based standard format supported by the OMG, which is used for exchanging UML diagrams, including UML state machines.

2.1.2 Coverage Criteria for State-machine-based Unit Testing

Many methods that use UML state machines for test case generation are based on coverage criteria. A coverage criterion is an indicator of how much testing is enough. Zhu et al. [15] identify two roles for coverage adequacy criteria: (1) they are the explicit specification for test selection; (2) they determine what needs to be observed. A coverage criterion specifies requirements for the generated test suite; for instance, that for each state in the state machine one test must exist in the test suite.

In [15], Zhu et al. categorize coverage criteria as structural testing, fault-based testing, and error-based testing. Structural testing coverage criteria use structural features of the system under test (such as All Transitions [17], All Transition Pairs [17], Full Predicate [17], Faulty Transition Pair [20], All Content Dependence [24], Session Oriented [21], and the 2-Way criterion [21]). Error-based testing coverage criteria use knowledge of error-prone locations (such as the criteria used in Boundary Testing [25]). Fault-based testing uses measurement of the fault-detecting ability of the test suite (such as mutation-based plannable criteria [21]). Table 2.1 summarizes the listed UML-state-machine-based coverage criteria, including criteria reviewed by McQuillan et al. [26].

Table 2.1: Several UML state-machine-based coverage criteria

All-Transitions coverage (AT) [17]: For each transition tr in the state machine, there exists a test t in the test suite such that t causes tr to be traversed.

All-Transition-Pair coverage (ATP) [17]: For each pair of adjacent transitions (tr, tr') in the state machine, there exists a test t in the test suite such that t causes tr and tr' to be traversed in sequence.

Full Predicate coverage (FP) [17]: For each clause c in each precondition p on transitions of the state machine, there exists a test t1 in the test suite T such that t1 causes c and p to evaluate to true, and there exists a test t2 in T such that t2 causes c and p to evaluate to false.

Complete Sequence (TT) [17, 27]: For each complete sequence s defined by the test engineer, there exists a test t in the test suite such that t causes s to be traversed. Belli et al. impose a restriction on the length of the paths to make them finite [20].

Faulty Transition Pair coverage (FTP) [20]: Faulty transitions, which are transitions that are illegal in a state and lead to an error state, are added to the state machine. Then, similarly to All-Transition-Pair coverage, for all transition pairs (tr, tr'), where tr' is an illegal transition, there exists a test t in the test suite such that t causes tr and tr' to be traversed in sequence.

All Content Dependence Relationships coverage [24]: A function f2 has a content dependence relationship with a function f1 if and only if the value of a variable defined in f1 is used in f2. For each content dependence relationship r, there exists a test t in the test suite such that t tests r.

Session Oriented criterion [21]: A transition tr is a self-loop if both endpoints of tr are in the same node of the state machine. For a node s, V is the set of system state variables that are updated by the transitions enabled in s. If it is possible to partition V into V1 and V2, such that Tr1 is the set of self-loop transitions that update the variables in V1, and Tr2 is the set of non-self-loop transitions that update the variables in V2, then s is a candidate for the Session Oriented criterion. The transitions in set Tr1 need to be sequenced before those in Tr2. Once a transition tr in Tr2 is sequenced (ending in a state s' not equal to s), a path from s' back to node s must exist in the test to execute the transitions in Tr1 and verify the state.

2-Way criterion [21]: Two self-loop transitions tr1 and tr2 in a given node are independent if (1) the results represented by tr1 and tr2 are not exceptions, and (2) tr1 (tr2) is not a reader operation for tr2 (tr1). For each pair of independent self-loop transitions tr1 and tr2, a test case with the sequences <tr1, tr2> and <tr2, tr1> must exist.

User-Defined Test Objective [18]: The User-Defined criterion specifies some of the states, their values, the transitions, and the paths of the state machine to be included in the test suite, and forces others to be excluded.

Boundary Testing criteria [25, 28]: A boundary state is a state where at least one state variable has a value at an extremum (minimum or maximum) of its subdomains. When the system is in a boundary state, the operations in a model must be tested with boundary inputs.

2.1.3 Mapping UML to OWL

The ontology-based test case generation method uses an ontology-based representation of UML diagrams for test case generation. To do this, the UML state machines are represented in OWL ontologies. Based on Gruber [29], Studer et al. [30] define an ontology as follows: "An ontology is a formal, explicit specification of a shared conceptualization. Conceptualization refers to an abstract model of some phenomenon in the world by having identified the relevant concepts of that phenomenon. Explicit means that the type of concepts used, and the constraints on their use, are explicitly defined. Formal refers to the fact that the ontology should be machine-readable. Shared reflects the notion that an ontology captures consensual knowledge, that is, it is not private to some individual, but accepted by a group."

One state-of-the-art ontology language that is widely used to specify ontologies is OWL-DL [31], which is based on description logic. OWL stands for Web Ontology Language and is a W3C standard for representing ontologies. Protégé [32], a tool developed by Stanford University, can be used for composing OWL ontologies. In description logic, the terminology, which includes concepts and the relations between them, is defined in a TBox (Terminological Box), and the instances, which are the individuals and the relations between them, are defined in an ABox (Assertional Box). The Jena API [33] and the OWL API [34] are two open-source Java interfaces that enable reading OWL files, manipulating them in memory, and writing them to file.

In a knowledge-based system, rules can be used to derive knowledge implicit in given knowledge via a reasoning algorithm. The Rule Markup Language (RuleML) is an XML-based markup language for specifying rules [35]. An example of a rule in RuleML 0.91 is provided in Figure 2.1 (from [3]). The Positional-Slotted Language (POSL) is a shorthand notation for RuleML [36].

Natural language: "A customer is premium if their spending has been min 5000 euro in the previous year."

RuleML:
<Implies>
  <head>
    <Atom>
      <Rel>premium</Rel>
      <Var>customer</Var>
    </Atom>
  </head>
  <body>
    <Atom>
      <Rel>spending</Rel>
      <Var>customer</Var>
      <Ind>min 5000 euro</Ind>
      <Ind>previous year</Ind>
    </Atom>
  </body>
</Implies>

Figure 2.1: RuleML example from [3]

The Ontology Definition Metamodel (ODM) [37], which was not finalized at the time of writing this thesis, is a specification adopted by the OMG that defines a set of mapping rules between UML models and OWL ontologies. The UML formal superstructure specifications [4] describe the elements in UML diagrams formally. The ODM sketches how this formal specification can be mapped to the OWL representation, but it does not directly provide the ontology. However, a set of ontologies that represent UML diagrams and roughly conform to the ODM are provided by Lehtihet [1]. Table 2.2 summarizes an overview of the mappings from the UML representation to the OWL representation. Based on these transformation rules, the UML superstructure elements (the elements of the UML metamodel) are mapped to OWL elements.

The following example describes how a portion of the UML superstructure is mapped to its OWL representation. FinalState, State, Namespace, Vertex, and RedefinableElement are classes in the UML superstructure. The following generalization relationships hold among these classes: a FinalState is a State; a State is a Namespace, a RedefinableElement, and a Vertex. A RedefinableElement has a boolean attribute isLeaf. Kernel and BehaviorStateMachines are two packages: the Namespace and RedefinableElement classes are from the Kernel package, and the State, FinalState, and Vertex classes are from the BehaviorStateMachines package. This portion of the UML superstructure is mapped to OWL as follows. According to rule #2, for each of the classes in the hierarchy an OWLClass is generated, which bears the corresponding generalization relationships. According to rule #3, for the isLeaf attribute an OWL property is generated. According to rule #1, for the BehaviorStateMachines and Kernel packages two ontologies are generated, which include the OWLClasses whose corresponding UML classes are owned by the corresponding packages.
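For comparison with the RuleML example in Figure 2.1, the same customer rule can be abbreviated in POSL shorthand. This is a sketch of the notation only; the symbolic constants (min5000euro, previousYear) are our own spelling choices:

```
% POSL shorthand for the RuleML rule of Figure 2.1
premium(?customer) :- spending(?customer, min5000euro, previousYear).
```

The head of the rule precedes `:-`, the body follows it, and variables are prefixed with `?`, mirroring the <head>, <body>, and <Var> elements of the XML form.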

Figure 2.2: The UML state machine superstructure overview from [4]
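As a concrete illustration of rule #2 from the preceding example, the State and FinalState classes and their generalizations could be rendered in OWL roughly as follows. This is a sketch in the style of the ontologies of [1], not an excerpt from them:

```xml
<owl:Class rdf:ID="State">
  <rdfs:subClassOf rdf:resource="&Kernel;Namespace"/>
  <rdfs:subClassOf rdf:resource="&Kernel;RedefinableElement"/>
  <rdfs:subClassOf rdf:resource="#Vertex"/>
</owl:Class>

<owl:Class rdf:ID="FinalState">
  <rdfs:subClassOf rdf:resource="#State"/>
</owl:Class>
```

Each UML generalization becomes an rdfs:subClassOf axiom, so description logic reasoners can infer, for instance, that every FinalState individual is also a Vertex.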

Table 2.2: Mapping of the UML elements to OWL in ODM

  #   UML Superstructure Element    Representation in OWL
  1   Package                       Ontology
  2   Class                         OWLClass
  3   Attribute                     Property
  4   Binary Association            Object Property
  5   Association Class             N-ary Association
  6   Multiplicity                  OWL Restriction
  7   Association Generalization    SubPropertyOf or SubClassOf

Figure 2.2 illustrates the elements contained in the UML behavioral state machine package of the UML superstructure. A state machine has a number of regions, which contain transitions and vertices, which can be states. The transitions and states have incoming and outgoing, and source and target, associations with each other. A transition can have an association with a guard, a trigger, and an effect. A state can be a final state. A state has an association with a constraint, which specifies the condition that holds when the system is in that state. The OWL specification for the UML state machine is generated based on the mapping rules in Table 2.2. As an example, the OWL code that defines the Transition class and the Effect property of a transition is listed in Table 2.3 [1].

Table 2.3: Specification of the UML Transition class and Effect property in OWL (from [1])

<owl:Class rdf:ID="Transition">
  <rdfs:subClassOf rdf:resource="&Kernel;RedefinableElement"/>
  <rdfs:subClassOf rdf:resource="&Kernel;NamedElement"/>
  <rdfs:subClassOf>
    <owl:Restriction>
      <owl:onProperty rdf:resource="#Transition.guard"/>
      <owl:maxCardinality rdf:datatype="&xsd;int">1</owl:maxCardinality>
    </owl:Restriction>
  </rdfs:subClassOf>
  <rdfs:subClassOf>
    <owl:Restriction>
      <owl:onProperty rdf:resource="#Transition.trigger"/>
      <owl:maxCardinality rdf:datatype="&xsd;int">1</owl:maxCardinality>
    </owl:Restriction>
  </rdfs:subClassOf>
  <rdfs:subClassOf>
    <owl:Restriction>
      <owl:onProperty rdf:resource="#Transition.effect"/>
      <owl:maxCardinality rdf:datatype="&xsd;int">1</owl:maxCardinality>
    </owl:Restriction>
  </rdfs:subClassOf>
</owl:Class>

<owl:ObjectProperty rdf:ID="Transition.effect">
  <rdfs:domain rdf:resource="#Transition"/>
  <rdfs:range rdf:resource="&BasicBehaviors;Behavior"/>
  <rdfs:comment rdf:datatype="&xsd;string">subsets ownedElement</rdfs:comment>
</owl:ObjectProperty>

2.2 Identification of Test Objectives Through Reasoning

Reasoning is concerned with the derivation of knowledge that is implicit in knowledge represented in a knowledge representation language. In the ontology-based test case generation method, the explicit knowledge is represented in the test oracle ontologies and the coverage criteria rules; the implicit knowledge is a collection of test objectives; and reasoning is used to derive the test objectives from the test oracle ontologies and coverage criteria rules. The use of reasoning on the ontologies for test objective generation delivers a highly extensible test oracle and coverage criteria.

Software can be decomposed into algorithms and the knowledge manipulated by the algorithms. Knowledge engineering helps decrease the dependency of algorithms on the knowledge by encoding the knowledge into ontologies and rules, and by providing generic reasoning algorithms that operate on the knowledge. Hence, ontologies can be modified without changing the algorithms, and vice versa. The implementation of the method in this work uses a reasoner, OO jDREW [38, 39], that supports RuleML.
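To illustrate how a coverage criterion can be phrased as a rule over the test oracle vocabulary and handed to a reasoner, consider the following POSL-style sketch. The predicate names (testObjective, transition) are illustrative placeholders of our own choosing, not the actual vocabulary of the ontologies defined in this thesis:

```
% Every transition yields one test objective (All-Transitions style).
testObjective(allTransitions, ?T) :-
  transition(?T, ?Source, ?Target, ?Event).

% Every pair of adjacent transitions yields one objective (All-Transition-Pair style).
testObjective(allTransitionPairs, ?T1, ?T2) :-
  transition(?T1, ?S1, ?S2, ?E1),
  transition(?T2, ?S2, ?S3, ?E2).
```

Given an ABox of transition facts for a concrete state machine, a reasoner such as OO jDREW can enumerate the bindings of these rules; each binding is one derived test objective.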

2.3 Generation of Test Cases with Artificial Intelligence Planning

AI planning is an area of artificial intelligence concerned with finding a plan that solves a problem within a domain [22]. A domain is specified as a set of state variables and actions that manipulate the state of the system. A problem specifies a start state, a goal state, and possibly some constraints on the generated plan. A plan is a set of actions that takes the system from the start state to the goal state and conforms to the constraints. AI planners are algorithms that generate a plan based on the specified domain and problem.

One of the applications of AI planning is in test case generation [40, 16, 21]. To apply AI planning to test case generation, the problem of generating a test case based on a test oracle (e.g. a state machine) is translated into the problem of finding a plan within a domain specification. For this purpose, the test oracle specification and the specifications of the required test cases are translated into the domain description and the problem description. The simplest test case is a list of actions (a path) from the initial (start) state to the goal (final) state of the domain (state machine). To define more complex test cases, additional predicates are added to the domain and problem descriptions to put constraints on the generated plan. In UML-state-machine-based test case generation, the UML state machine specification is mapped to the problem and domain descriptions.

The specifications of the domain and problem are written in a planning language. The conceptual model of many planning languages represents the system as a state transition system [41]. The Planning Domain Definition Language (PDDL) [42] is a planning language that has become the de-facto standard language for AI planning. It was originally designed in 1998 for the International Planning Competition and has since been maintained for it. PDDL can be mapped to a state machine.

2.3.1 PDDL 2.1

PDDL 2.1 [5] is the planning language used in the 3rd International Planning Competition (IPC 3). The goal of PDDL 2.1 is to support the encoding of realistic problems. In this regard, one of the features added to PDDL in this extension is support for numbers; numbers and numeric operations are an indispensable part of the state machine specifications of many software systems. PDDL 2.1 also defines a metric that is to be maximized or minimized by a plan, which can be used to specify constraints on the cost of the generated paths. One of the planners that uses PDDL 2.1 is Metric-FF [43], which performed outstandingly in IPC 3 in domains with numeric variables (fluents) [44]. Below, a general overview of the PDDL 2.1 features used in this work is provided. Figure 2.3 shows an example of a domain and a problem specification [5]. In PDDL 2.1, an AI planning problem is decomposed into two parts: a domain description and a problem description. The components of each are described using the vehicle example below; the name of the domain is metricvehicle. A PDDL domain describes the types, predicates, functions, and actions in a system for which a plan is to be devised. In PDDL, everything is written in Lisp prefix notation, and keywords are preceded by ":" (such as :requirements). The requirements declaration specifies which constructs of the PDDL language are used by the domain description. The types declaration describes the types of objects in the environment; this domain defines two types: vehicle and location. The predicates are constructs with boolean values; for instance, at is a predicate that takes two arguments, ?v and ?p: for each pair of objects of types vehicle and location, it returns either true or false. In PDDL, variables start with "?" and types are denoted by a "-" before the type name. The functions are constructs with numeric results (numeric fluents); for instance, fuel-level specifies a numeric value for each vehicle ?v.
The set of predicates and functions

    (define (domain metricvehicle)
      (:requirements :strips :typing :fluents)
      (:types vehicle location)
      (:predicates (at ?v - vehicle ?p - location)
                   (accessible ?v - vehicle ?p1 ?p2 - location))
      (:functions (fuel-level ?v - vehicle)
                  (fuel-used ?v - vehicle)
                  (fuel-required ?p1 ?p2 - location)
                  (total-fuel-used))
      (:action drive
        :parameters (?v - vehicle ?from ?to - location)
        :precondition (and (at ?v ?from)
                           (accessible ?v ?from ?to)
                           (>= (fuel-level ?v) (fuel-required ?from ?to)))
        :effect (and (not (at ?v ?from))
                     (at ?v ?to)
                     (decrease (fuel-level ?v) (fuel-required ?from ?to))
                     (increase (total-fuel-used) (fuel-required ?from ?to))
                     (increase (fuel-used ?v) (fuel-required ?from ?to)))))

    (define (problem metricvehicle-example)
      (:domain metricvehicle)
      (:objects truck car - vehicle
                Paris Berlin Rome Madrid - location)
      (:init (at truck Rome) (at car Paris)
             (= (fuel-level truck) 100) (= (fuel-level car) 100)
             (accessible car Paris Berlin) (accessible car Berlin Rome)
             (accessible car Rome Madrid) (accessible truck Rome Paris)
             (accessible truck Rome Berlin) (accessible truck Berlin Paris)
             (= (fuel-required Paris Berlin) 40) (= (fuel-required Berlin Rome) 30)
             (= (fuel-required Rome Madrid) 50) (= (fuel-required Rome Paris) 35)
             (= (fuel-required Rome Berlin) 40) (= (fuel-required Berlin Paris) 40)
             (= (total-fuel-used) 0) (= (fuel-used car) 0) (= (fuel-used truck) 0))
      (:goal (and (at truck Paris) (at car Rome)))
      (:metric minimize (total-fuel-used)))

Figure 2.3: A PDDL 2.1 example [5]

collectively define the state of the system. The actions declaration denotes how the state of the system can be changed. Each action has a set of parameters, a precondition, and an effect; for instance, the action drive has three parameters, ?v, ?from, and ?to, with types vehicle, location, and location, respectively. The precondition specifies a boolean condition that must hold for an action to be executable. It uses predicates, relational operators on numeric fluents (<=, >=, =, <, >), boolean logic operators (and, or, not), and universal and existential quantifiers. The effect describes the changes that an action makes in the system state: it specifies new values for predicates and numeric fluents. The numeric operations supported in PDDL 2.1 are /, *, +, -, increase, decrease, assign, scale-up, and scale-down. Effects can have conditions and universal quantifiers. The name of the problem described in Figure 2.3 is metricvehicle-example. It is a problem for the domain metricvehicle. There are six objects in the problem: truck and car of type vehicle, and Paris, Berlin, Rome, and Madrid of type location. The initial state of the system is described by assigning values to predicates and numeric fluents. The goal state is specified by a condition on predicates and numeric fluents. The metric specifies a criterion that is to be minimized or maximized by the devised plan.

2.3.2 A Mapping between PDDL and UML State Machines

In order to use an AI planner for test case generation, the state machine specification must be translated into the planner's input language [40, 16, 21]. The UML state machine specification can be mapped to a PDDL domain description and problem description. The mapping rules for generating a PDDL specification from a state machine, and an example, are presented in Table 2.4 and Figure 2.4, respectively.

Table 2.4: Mapping the UML state machine specification to PDDL

State Machine -> Planning Domain:

1. State ost_i of type s_i:
       (:types s_i - state)
       (:predicates (active ?s - state))  ; denotes the active state
2. Transition tr from ost_i to ost_j (ost_i != ost_j), where the guard of the transition is a condition g on a state variable sv and an action a manipulates sv:
       (:action tr
         :parameters (?st_i - s_i ?st_j - s_j)
         :precondition (and (active ?st_i) (g (sv)))
         :effect (and (active ?st_j) (not (active ?st_i)) (a (sv))))
3. Self-transition tr from ost_i to ost_i, where the guard of the transition is a condition g on a state variable sv and an action a manipulates sv:
       (:action tr
         :parameters (?st_i - s_i)
         :precondition (and (active ?st_i) (g (sv)))
         :effect (a (sv)))
4. Numeric state variable n-var:
       (:functions (n-var))
5. Boolean state variable b-var:
       (:predicates (b-var))

State Machine -> Planning Problem:

1. State ost_i:
       (:objects ost_i - s_i)
2. Initial values of the state variables, set in the problem:
       (:init (active startStateName)
              (not (active otherStates))
              (initialize state variables))
3. Goal state and goal values for the state variables:
       (:goal (and (active goalState)
                   (predicates over the values of boolean variables)
                   (functions over the values of numeric variables)))
4. Cost minimization:
       (:metric minimize (+ (total-time)))
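The transition rows of Table 2.4 can be applied mechanically. The sketch below is a hypothetical helper (not the thesis implementation) that renders a PDDL :action for a transition between two distinct states, with the guard folded into the precondition and the transition's action folded into the effect:

```python
def transition_to_action(name, src_type, dst_type, guard, action_effect):
    """Render one PDDL :action for a transition from a state of type
    `src_type` to a state of type `dst_type`, following the mapping of
    Table 2.4: the guard becomes part of the precondition, and the
    transition's action becomes part of the effect."""
    return (
        f"(:action {name}\n"
        f"  :parameters (?sti - {src_type} ?stj - {dst_type})\n"
        f"  :precondition (and (active ?sti) {guard})\n"
        f"  :effect (and (active ?stj) (not (active ?sti)) {action_effect}))"
    )

# Render transition2 of the running example: guarded by a-count = 4,
# resetting a-count on firing.
pddl = transition_to_action(
    "transition2", "s1", "s2", "(= (a-count) 4)", "(assign (a-count) 0)")
print(pddl)
```

A full translator would iterate over all transitions and also emit the :types, :predicates, and :functions declarations from the remaining rows of the table.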

[State machine diagram: three states ost1, ost2, ost3 of types s1, s2, s3, with transitions guarded by conditions on the state variable a-count, as encoded in the PDDL below.]

    ; Problem description
    (define (problem p1)
      (:domain StatemachineName)
      (:objects ost1 - s1 ost2 - s2 ost3 - s3)
      (:init (active ost1) (not (active ost2)) (not (active ost3))
             (= (a-count) 0))
      (:goal (active ost3))
      (:metric minimize (+ (total-time))))

    ; Domain description
    (define (domain StatemachineName)
      (:requirements :typing :fluents)
      (:types s1 s2 s3 - state)
      (:predicates (active ?s - state))
      (:functions (a-count))
      (:action transition1
        :parameters (?st1 - s1)
        :precondition (and (active ?st1) (< (a-count) 4))
        :effect (increase (a-count) 1))
      (:action transition2
        :parameters (?st1 - s1 ?st2 - s2)
        :precondition (and (active ?st1) (= (a-count) 4))
        :effect (and (active ?st2) (not (active ?st1)) (assign (a-count) 0)))
      (:action transition3
        :parameters (?st2 - s2)
        :precondition (and (active ?st2) (< (a-count) 3))
        :effect (increase (a-count) 1))
      (:action transition4
        :parameters (?st2 - s2 ?st3 - s3)
        :precondition (active ?st2)
        :effect (and (active ?st3) (not (active ?st2)))))

Figure 2.4: An example of a UML state machine and the equivalent PDDL specification
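The behaviour encoded in Figure 2.4 can be checked by direct simulation. The following sketch (illustrative, not part of the thesis toolchain) executes a candidate plan against the guards and effects of the four transitions:

```python
def step(state, a_count, transition):
    """Apply one transition of the Figure 2.4 machine if its guard
    holds in configuration (state, a_count); return the new one."""
    if transition == "transition1" and state == "ost1" and a_count < 4:
        return "ost1", a_count + 1          # self-loop, increment a-count
    if transition == "transition2" and state == "ost1" and a_count == 4:
        return "ost2", 0                    # move to ost2, reset a-count
    if transition == "transition3" and state == "ost2" and a_count < 3:
        return "ost2", a_count + 1          # self-loop, increment a-count
    if transition == "transition4" and state == "ost2":
        return "ost3", a_count              # move to the goal state
    raise ValueError(f"guard of {transition} fails in ({state}, {a_count})")

# A plan reaching the goal state ost3 from the initial configuration:
# fire transition1 four times, then transition2, then transition4.
plan = ["transition1"] * 4 + ["transition2", "transition4"]
state, a_count = "ost1", 0
for t in plan:
    state, a_count = step(state, a_count, t)
print(state)  # -> ost3
```

This is exactly the plan a planner would have to find for the problem p1: transition2 is only enabled once a-count has been incremented to 4.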

Chapter 3

An Ontology-based Method for Software Testing

The ontology-based software testing methodology is described as a series of transformations of specifications, from the test oracle to the executable test suite. To be more precise, an ontology-based representation of the behavioral model of the system under test, an expert knowledge ontology, an implementation knowledge ontology, and coverage criteria rules are used to generate executable test cases. The rest of this chapter is organized as follows: Section 3.1 provides an overview of the method. Section 3.2 delineates the specifications that are transformed. Section 3.3 describes the phases of the transformation. Section 3.4 describes the specifications and the transformation phases with a simple example. Section 3.5 summarizes the method.

3.1 Method Overview

The method generates an executable test suite in four phases. Figure 3.1 illustrates the phases of transformation and their inputs and outputs. Phase 1 generates a set

of test objectives. After phase 1 is completed, phases 2 and 3 are performed repeatedly to generate an abstract test suite. Then, in phase 4, the executable test suite is generated based on the abstract test suite. In each phase, the inputs are transformed into the outputs as follows:

[Figure diagram: the coverage criteria rules, expert knowledge ontology, test oracle / behavioral model ontology, redundancy checking rule templates, and implementation knowledge ontology feed Phase 1 (Test Objective Generation), Phase 2 (Redundancy Checking), Phase 3 (Abstract Test Suite Ontology Generation), and Phase 4 (Executable Test Suite Generation), which produce test objectives, non-redundant test objectives, the abstract test suite ontology, and the executable test suite.]
Figure 3.1: Phases of transformation of specifications

Phase 1 - Test Objective Generation: The behavioral model specification, expert knowledge, and coverage criteria rules are used to generate a set of test objectives.

Phase 2 - Redundancy Checking: The behavioral model specification, the test objectives, the partially generated abstract test suite ontology, and the redundancy checking rule templates are used to select a non-redundant test objective, one at a time.

Phase 3 - Abstract Test Suite Ontology Generation: An abstract test case is generated and added to the partially generated abstract test suite ontology for each non-redundant test objective.

Phase 4 - Executable Test Suite Generation: The behavioral model specification, the abstract test suite ontology, and the implementation knowledge ontology are used to generate the executable test suite.

The goal of the method is to facilitate the exploitation of test experts' knowledge

in automated test generation. To achieve this, the method promotes the separation of three concerns of test case generation, namely: specification of what needs to be tested, identification of test objectives, and generation of test cases. To exploit a test expert's knowledge, the specification of what needs to be tested and the identification of test objectives need to be decoupled. The decoupling allows test experts to freely manipulate the specification of what needs to be tested, without the need to modify hardcoded test objective identification algorithms. Given this, a test expert can enrich the extensible ontology-based test oracle and freely specify custom coverage criteria. Hence, the method responds to the need to support the specification of arbitrary test cases, implementation knowledge, invariants on model elements, distinguished states [45], and knowledge about error-prone aspects of the system, while supporting standard coverage criteria as well as additional coverage criteria rules based on test experts' mental models.

3.2 Syntax and Semantics of Specifications

This section describes the specifications that are used in the process of test case generation, together with their syntax and semantics. The specifications are either provided by external entities as inputs or generated by the system as intermediate or final outputs.

3.2.1 Behavioral Model Ontology

The behavioral model ontology is the ontological representation of the test oracle of the system under test. Various software test case generation methods are based on different behavioral models, which can be modeled in an ontology. For UML models, the Ontology Definition Metamodel (ODM) [37] ontologies for the UML diagrams can be used, and they can be automatically generated from the XML Metadata

Interchange (XMI) [23] representation of existing models. The TBox of a prototype ontology for the UML state machine is depicted in Figure 3.2. The complete ontology TBox in OWL is included in Appendix A.1. The semantics of the classes and properties of this ontology are described in Tables 3.1 and 3.2. The ontology describes the structure of a state machine by defining the state machine's structural elements and the relationships between them. The structural elements include states, transitions, guards, actions, state variables, and events. The guards and actions have a String property called description, which describes their semantics and can be parsed by software. The description properties read and manipulate the values of state variables.

3.2.2 Expert Knowledge Ontology

Figure 3.3 depicts a general expert knowledge ontology; Appendix A.3 includes the syntax of this ontology. AClassFromStateMachineOntology is a class that is defined in the behavioral model ontology, and ExpertKnowledgeClass3 is a subclass of it defined in the expert knowledge ontology. ExpertKnowledgeClass1 is a class defined in the expert knowledge ontology that is connected to AClassFromStateMachineOntology by the ObjectProperty1 property. ExpertKnowledgeClass2 is a class defined in the expert knowledge ontology that is connected to ExpertKnowledgeClass1 by the ObjectProperty2 property. DataProperty2 is an attribute of AClassFromStateMachineOntology and its range is boolean. DataProperty1 is an attribute of ExpertKnowledgeClass1 and its range is string. The expert knowledge ontology extends the behavioral model ontology and provides the knowledge that is beyond the behavioral model and is used for the specification and identification of test cases. This ontology further describes the elements in the behavioral model ontology by importing the behavioral model ontology and adding classes and properties to it. It can describe new classes, relationships, and

[Figure diagram: the classes sm:statemachine, sm:abstractstate, sm:state, sm:startstate, sm:finalstate, sm:statevariable, sm:transition, sm:behaviour, sm:call, and sm:condition, connected by the properties vars, states, transitions, in, out, from, to, event, guard, and action, with is-a links from State, StartState, and FinalState to AbstractState.]
Figure 3.2: Part of the TBox of the state machine ontology

Table 3.1: Semantics of classes of the state machine ontology TBox.

1. StateMachine: An instance of the StateMachine class represents a single UML state machine.
2. AbstractState: An AbstractState is the parent of the three types of states in a state machine: StartState, FinalState, and State.
3. State: A State is a child of AbstractState and represents a state of the state machine that is not a start state or a final state.
4. StartState: A StartState represents the start state of a state machine. It does not have any incoming transitions, and a state machine can only have one start state.
5. FinalState: A FinalState represents a final state of a state machine. A final state does not have any outgoing transitions.
6. StateVariable: A StateVariable represents a state variable of the class whose behavior the state machine describes. State variables are used in the state machine's guard and action descriptions.
7. Transition: A Transition represents a transition of the state machine.
8. Behaviour: A Behaviour describes a set of changes in the state variables. It is used as the action of a transition.
9. Call: A Call represents a call event in the state machine. It is used as the event of a transition.
10. Condition: A Condition represents a constraint on the state variable values. It is used as the guard of a transition.

Table 3.2: Semantics of properties of the state machine ontology TBox.

1. states (domain: StateMachine; range: State): A collection that contains the states of the state machine.
2. transitions (domain: StateMachine; range: Transition): A collection that contains the transitions of the state machine.
3. vars (domain: StateMachine; range: StateVariable): A collection that contains the state variables used in the specifications of the machine's guards and actions.
4. out (domain: State, StartState; range: Transition): A collection that contains the transitions whose source is the state. This property is the inverse of the from property.
5. in (domain: State, FinalState; range: Transition): A collection that contains the transitions whose destination is the state. This property is the inverse of the to property.
6. from (domain: Transition; range: State, StartState): The state that is the source of the transition. This property is the inverse of the out property.
7. to (domain: Transition; range: State, FinalState): The state that is the destination of the transition. This property is the inverse of the in property.
8. event (domain: Transition; range: Call): The event on a transition.
9. guard (domain: Transition; range: Condition): The constraint on the state variables that must hold before a transition can be fired.
10. guardDesc (domain: Condition; range: String): A string that describes the guard condition.
11. behaviourDesc (domain: Behaviour; range: String): A string that describes how state variables are changed by an action. The behaviourDesc of the transition emanating from the start state describes the initial values of the state variables.
12. name (domain: StateVariable, Call; range: String): A string that denotes the name of the entity.
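One way to picture these classes and their inverse property pairs (out/from, in/to) is as plain objects whose inverses are maintained automatically. The following sketch is only an illustration, not the OWL encoding:

```python
from dataclasses import dataclass, field

@dataclass
class State:
    name: str
    out: list = field(default_factory=list)  # outgoing transitions
    inc: list = field(default_factory=list)  # incoming transitions ("in")

@dataclass
class Transition:
    name: str
    src: State  # the "from" property
    dst: State  # the "to" property
    def __post_init__(self):
        # Maintain the inverse properties: adding a transition
        # automatically updates out on its source and in on its target.
        self.src.out.append(self)
        self.dst.inc.append(self)

s1, s2 = State("s1"), State("s2")
t = Transition("t1", s1, s2)
print([tr.name for tr in s1.out], [tr.name for tr in s2.inc])
```

In OWL, this bookkeeping is declared once with owl:inverseOf and enforced by the reasoner rather than by constructor code.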

[Figure diagram: ek:ExpertKnowledgeClass3 is-a sm:AClassFromStateMachineOntology; ObjectProperty1 links sm:AClassFromStateMachineOntology to ek:ExpertKnowledgeClass1; ObjectProperty2 links ek:ExpertKnowledgeClass1 to ek:ExpertKnowledgeClass2; DataProperty2 (boolean) is an attribute of sm:AClassFromStateMachineOntology; DataProperty1 (string) is an attribute of ek:ExpertKnowledgeClass1.]
Figure 3.3: A general expert knowledge ontology

attributes. The relationships introduced in this ontology can be between two classes of the behavioral model ontology, between a class in the behavioral model ontology and a class in the test expert ontology, or between two classes in the test expert ontology. The domain of the attributes can be a class from the behavioral model ontology or from the expert knowledge ontology. This ontology describes test experts' mental models and is an extension point that facilitates the support of various coverage criteria rules: the additional knowledge referred to by coverage criteria rules is added to this ontology. Examples of pieces of knowledge that can be included in this ontology are knowledge about the use of an unreliable library, boundary values of state variables, exceptions, variable definitions and uses, concurrency relationships, and user interaction points. Two potential sources for the extraction of expert knowledge are error taxonomies and commonly accepted coverage criteria. The ontology has the advantage of retaining this knowledge, which is gathered by test experts.
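For instance, if a test expert annotates the transitions that exercise an unreliable library, a coverage rule can select exactly those transitions as test objectives. A minimal stand-in for such an ontology query is sketched below; all names are illustrative:

```python
# Behavioral-model facts: the transitions of the state machine.
transitions = ["t1", "t2", "t3", "t4"]

# Expert-knowledge extension: which transitions exercise an unreliable
# library. This annotation is added by the test expert and is not part
# of the behavioral model itself.
uses_unreliable_library = {"t2", "t4"}

# A coverage "rule": derive one test objective per annotated transition.
objectives = [("cover_transition", t) for t in transitions
              if t in uses_unreliable_library]
print(objectives)  # -> [('cover_transition', 't2'), ('cover_transition', 't4')]
```

In the actual method, the annotation lives as a class or property in the expert knowledge ontology and the selection is performed by a rule engine over the combined ontologies, but the derivation has this same shape.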

3.2.3 Test Objectives

A test objective delineates a test case. It consists of two parts: predicates and parameters. The syntax of test objectives is shown below.

    [predicates separated by commas],[parameters separated by commas]

The predicate list and the parameter list are comma-separated lists of predicates and parameters. A predicate in the predicate list has one or more parameters, which are listed in order in the parameter list. The list syntax provides the flexibility to add predicates and parameters to a test objective. A test objective specifies a condition that must hold on some model elements in a corresponding test case: the predicates specify the conditions, and the parameters specify instances of the behavioral model ontology classes or their values. Test objectives provide a language in which test experts can define test cases abstractly. Several test objectives and their semantics are described in Table 3.3. The syntax of the test objectives is listed in Appendix A.4.

Table 3.3: Examples of test objectives

1. Cover transition (arguments: transition1): A test case that passes transition1 of the state machine.
2. Cover state (arguments: state1): A test case that passes state1 of the state machine.
3. Immediate (arguments: transition1, transition2): A test case that passes transition2 immediately after transition1.
4. After (arguments: transition1, transition2): A test case that passes transition2 some time after transition1.
5. Full predicate (arguments: condition1, predicateValue, clause1Value, clause2Value, ...): A test case in which, when the system is at the source state of the transition guarded by condition1, the value of the condition is predicateValue and the clauses in the predicate have the listed values. The values are boolean, and the condition is in conjunctive normal form.
6. At transition state variable has value (arguments: transition1, stateVariable1, value1, stateVariable2, value2, ...): A test case in which, when the system is at the source state of transition1, the values of the state variables are as indicated by the parameters.
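The list syntax lends itself to simple parsing. The sketch below is illustrative (the predicate arities are assumed for the example): it splits a test objective into its predicate list and parameter list and pairs each predicate with its parameters by arity:

```python
# Assumed arities of the example predicates.
ARITY = {"immediate": 2, "after": 2, "cover_transition": 1}

def parse_objective(text):
    """Split '[p1,p2],[a,b,c,d]' into (predicate, parameters) pairs,
    consuming parameters left to right according to each arity."""
    pred_part, param_part = text.split("],", 1)
    predicates = [p.strip() for p in pred_part.strip().lstrip("[").split(",")]
    parameters = [a.strip() for a in param_part.strip().strip("[]").split(",")]
    result, i = [], 0
    for p in predicates:
        n = ARITY[p]
        result.append((p, parameters[i:i + n]))
        i += n
    return result

obj = parse_objective(
    "[immediate,after], [transition1,transition2,transition2,transition3]")
print(obj)
```

For the combined objective shown above, this yields immediate over (transition1, transition2) and after over (transition2, transition3).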

A test objective describes the structural properties of a test case, which directly or indirectly make it a candidate test objective. For instance, in unit testing based on UML state machines, a test objective can specify that transition tr1 of the state machine should be traversed immediately after transition tr2 is traversed. This test objective can be directly required, because every possible sequence of two transitions is required to be covered; it can be indirectly required, because the two transitions have a definition-use relationship [24] that is required to be tested. A test case can be described by combining test objectives. For instance, the combination of two test objectives for a single test case can be defined as shown below.

    [immediate,after],[transition1,transition2,transition2,transition3]

This test objective consists of two predicates, immediate and after, each of which has two parameters: the parameters of the immediate predicate are transition1 and transition2, and the parameters of the after predicate are transition2 and transition3. This test objective means that the test suite must include a single test case that passes transition2 immediately after transition1 and, some time after passing transition2, passes transition3.

3.2.4 Coverage Criteria Rules

A general rule has the form shown below. ":-" is the deduction symbol. The right-hand side of the deduction symbol describes the premises; the left-hand side describes the conclusions that are derived when the premises are satisfied.

    conclusions :- premises.

The coverage criteria rules are test case selection rules that specify which test cases should be generated. As shown below, in general a rule consists of two parts:

a head and a body. The body of the rule specifies a test objective selection criterion, and the head of the rule specifies a test objective. The test objective selection criterion specifies the conditions that should hold on some model elements for them to be part of the structure of a test case; the test objective specifies the structure of the test cases.

    test objective :- test objective selection criteria.

Coverage criteria rules can be expert-defined, system- or domain-specific, or standard. They refer to the vocabulary defined by the behavioral model and expert knowledge ontologies. The body of a rule specifies conditions on the parameters of the test objective that is defined in the head of the rule. Appendix A.2 defines how coverage criteria rules are related to the ontologies that define the vocabularies. The general syntax of a coverage criteria rule in POSL is shown below.

    coverage([PredicateName1, PredicateName2, ...], [?C1, ?C3, ?Value1, ...]) :-
        Class1(?C1), Class2(?C2), Class3(?C3),
        Property1(?C1, ?C2), Property2(?C2, ?C3),
        OtherRule(?C3, ?Value1), ... .

    OtherRule(?C3, ?Value1) :-
        Attribute1(?C3, ?Value1), Attribute2(?C3, ?Value1), ... .

Class1, Class2, and Class3 are classes defined in the ontologies; ?C1, ?C2, and ?C3 represent instances of these classes, respectively. Property1 is an object property whose domain includes ?C1 and whose range includes ?C2. Property2 is an object property whose domain includes ?C2 and whose range includes ?C3. Attribute1 is a data property whose domain includes ?C3 and whose value is represented by ?Value1. Attribute2 is another data property whose domain includes ?C3 and whose value is represented by ?Value1. The classes and values that are among the test objective parameters must belong to the behavioral model ontology; the other elements can belong to either the test expert ontology or the behavioral model ontology. This is because the expert

knowledge is an extension point of the system, but the test objectives are hard-coded. Hence, expert knowledge elements cannot appear in test objectives. A consequence of this is that, in the above rule, Class1 and Class3 necessarily belong to the behavioral model ontology, but Class2 can belong to either the behavioral model ontology or the test expert ontology.

3.2.5 Abstract Test Suite Ontology

The abstract test suite ontology describes the test suite. It is linked to the behavioral model ontology. "Abstract" means that it is implementation-neutral and programming-language-neutral, depending merely on the design model. Figure 3.4 depicts the TBox of the abstract test suite ontology.

[Figure diagram: TBox classes ts:test, ts:step, and ts:variablevalue, linked to sm:transition and sm:statevariable by the properties hasstep, nextstep, hascall, arg, outcome, variable, and value; an ABox example (tltest:test0 with steps tltest:test0step0 and tltest:test0step1, their outcomes, and the transitions sm:starttoroad1green and sm:road1greentoroad1green) instantiates the TBox.]
Figure 3.4: Part of the TBox of the abstract test suite ontology

The abstract test suite ontology consists of a set of abstract test cases. An abstract test case is specified by a list of steps and the values of the state variables after each step. A step corresponds to an event that changes the state of the system. It is called an abstract test case because it is a programming-language-independent description of a test case. The semantics of the classes and properties of the abstract test suite ontology are described in Tables 3.4 and 3.5. Appendix A.5 provides the abstract test suite ontology TBox in OWL. An abstract test case is represented by the class Test in the

ontology. A Test consists of a set of Steps. Each Step has a link to the next step of the test case. A Step is described by the values of the state variables after it is executed and by its corresponding transition of the state machine.

Table 3.4: Semantics of classes of the abstract test suite ontology TBox

1. Test: A Test describes a test case in the test suite. A test case consists of a number of steps.
2. Step: A Step corresponds to a pass of a transition of the state machine. A step provides information about the transition, the parameters, the new state of the system, and the next step.
3. VariableValue: A VariableValue is a pair of a state variable and its value. The system state is specified by a set of VariableValues.

Table 3.5: Semantics of properties of the abstract test suite ontology TBox

1. nextStep (domain: Step; range: Step): Specifies the step that follows this step.
2. hasStep (domain: Test; range: Step): Specifies the collection of steps of a test case.
3. arg (domain: Transition; range: VariableValue): Specifies the values of the arguments of an event of the transition.
4. outcome (domain: Step; range: VariableValue): Specifies the values of the state variables after a step is executed.
5. hasCall (domain: Step; range: Transition): Specifies the transition of the state machine that is passed when the step is executed.
6. hasVariable (domain: Step; range: VariableValue): Specifies the variable whose value is being defined.
7. hasBooleanValue (domain: VariableValue; range: boolean): Specifies the value of a boolean variable.

3.2.6 Redundancy Checking Rule Templates

Redundancy checking rule templates are used to generate redundancy checking rules for test objectives. A redundancy checking rule facilitates checking whether a test objective is already satisfied by a test suite. A test objective is satisfied by the test suite if a test case that satisfies the test objective already exists in the test suite. The body of a rule describes the characteristics of a test case that satisfies the corresponding test objective. The head of a rule is a predicate that means the test

objective is satisfied. The redundancy checking rules have the following form:

    The test objective is satisfied by the test suite :- the structural characteristics of a test case that satisfies the test objective.

The body of the rule describes the characteristics that hold for a test case in the test suite ontology if the test objective is satisfied by it. The characteristics of the test case are defined using the vocabulary specified by the abstract test suite ontology and the behavioral model ontology. A general redundancy checking rule in POSL is shown below.

Test objective:

    [predicateName],[parameter1, parameter2, parameter3]

Redundancy checking rule:

    exist() :- test(?t), hasStep(?t, ?step1), hasCall(?step1, parameter1),
               arg(parameter1, ?variableValue1),
               variable(?variableValue1, parameter2),
               value(?variableValue1, parameter3).

The redundancy checking rule above is generated for a test objective with predicateName as its predicate and parameter1, parameter2, and parameter3 as its parameters. If the body of the rule can be unified with the knowledge defined by the test suite and behavioral model ontologies, the test objective is satisfied. The body of the rule states that for the test objective to be satisfied by the test suite, the following conditions must hold: there is a test, referred to by ?t, which has a step referred to by the ?step1 variable; parameter1 is the name of the transition of that step; and ?variableValue1 is an argument of parameter1 whose variable name is parameter2 and whose value is parameter3. The test objective predicate of the above rule can be called transitionHaveInputVariableValue. The redundancy rule checks whether there is a test with parameter1

47 as transition, which would have these values: parameter3 for the variable: parameter2 in its arguments. For every test objective a redundancy checking rule is generated. For every test objective predicate, there exists a redundancy checking rule template. The redundancy checking rule template of a test objective predicate describes how a redundancy checking rule is generated for a test objective that uses that predicate. Appendix A.6 describes the syntax of the redundancy checking rule templates, and how the test suite and behavioral model ontology are related to the redundancy checking rules Implementation Knowledge Ontology The implementation knowledge ontology specifies the implementation-dependant knowledge, which is essential for translating the abstract test suite ontology to an executable test suite. This ontology is linked to the behavioral model ontology and extends it with implementation information. Figure 3.5 depicts a portion of TBox of this ontology. The knowledge represented by this ontology, which is programming-language-dependent, can include: variable getters and setters, implementation names of methods, classes, namespaces, constructors, etc. This ontology can be automatically populated, if the source code is available. This ontology helps postponing the task of generation of actual executable test cases to after the implementation is done. The semantics of the implementation knowledge ontology is described in Tables 3.6 and 3.7. Appendix A.7 provides the implementation knowledge ontology TBox in OWL. 35

Table 3.6: Semantics of classes of implementation knowledge ontology TBox

1. ImplementedClass: represents an implementation of a class.
2. ImplementedMethod: represents an implementation of a method.
3. ImplementedGetterMethod: represents the getter method of a state variable. It is an ImplementedMethod.
4. ImplementedSetterMethod: represents the setter method of a state variable. It is an ImplementedMethod.
5. ImplementedStateVariable: represents an implementation of a state variable.

Table 3.7: Semantics of properties of implementation knowledge ontology TBox

1. hascall (domain: ImplementedMethod, range: Call): specifies a call from the state machine ontology that is implemented by the ImplementedMethod. The inverse of this property is inverseofHasCall.
2. inverseofHasCall (domain: Call, range: ImplementedMethod): specifies an ImplementedMethod that implements the Call from the state machine ontology. The inverse of this property is hasCall.
3. hasStateVariable (domain: ImplementedStateVariable, range: StateVariable): specifies a state variable from the state machine ontology that is implemented by the ImplementedStateVariable. The inverse of this property is inverseofHasStateVariable.
4. inverseofHasStateVariable (domain: StateVariable, range: ImplementedStateVariable): specifies an ImplementedStateVariable that implements the StateVariable from the state machine ontology. The inverse of this property is hasStateVariable.
5. hasGetterMethod (domain: ImplementedStateVariable, range: ImplementedGetterMethod): specifies the getter method of the ImplementedStateVariable.
6. hasSetterMethod (domain: ImplementedStateVariable, range: ImplementedSetterMethod): specifies the setter method of the ImplementedStateVariable.
7. packagename (domain: ImplementedClass, range: String): specifies the name of the package that contains the class.
8. name (domain: ImplementedClass, ImplementedMethod, or ImplementedStateVariable; range: String): specifies the name of the entity.
9. hasclass (domain: StateMachine, range: ImplementedClass): specifies the ImplementedClass whose behavior is described by the state machine.
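The role of these mapping properties can be sketched with a plain in-memory lookup. The class below is illustrative only: the thesis stores this knowledge in OWL, not in Java maps, and all class and method names here are assumptions made for the sketch.

```java
import java.util.Map;
import java.util.Optional;

// A hypothetical in-memory counterpart of Tables 3.6 and 3.7: it links names
// from the state machine ontology to implementation names, as the hascall and
// hasGetterMethod properties do in the ontology.
public class ImplementationKnowledge {
    private final Map<String, String> implementedMethods; // call -> method name
    private final Map<String, String> getterMethods;      // state variable -> getter

    public ImplementationKnowledge(Map<String, String> methods,
                                   Map<String, String> getters) {
        this.implementedMethods = methods;
        this.getterMethods = getters;
    }

    public Optional<String> methodFor(String call) {
        return Optional.ofNullable(implementedMethods.get(call));
    }

    public Optional<String> getterFor(String stateVariable) {
        return Optional.ofNullable(getterMethods.get(stateVariable));
    }

    // The door example of Section 3.4, reduced to the two mappings used there.
    public static ImplementationKnowledge doorExample() {
        return new ImplementationKnowledge(
                Map.of("pressopenkey", "PressOpenKey"),
                Map.of("open", "isopened"));
    }

    public static void main(String[] args) {
        ImplementationKnowledge ik = doorExample();
        System.out.println(ik.methodFor("pressopenkey").orElse("unknown")); // PressOpenKey
        System.out.println(ik.getterFor("open").orElse("unknown"));         // isopened
    }
}
```

A test generator holding such a structure can ask for the implemented name of any model-level call; an empty result would signal missing implementation knowledge.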

[Figure 3.5: Part of TBox of implementation knowledge ontology]

Executable Test Suite

An executable test suite is the main output of the method; it is the result of translating the abstract test suite using the implementation knowledge. This test suite is written in a programming language for an implementation of the system under test. The general procedure of a simple executable test case is shown below. After each step of a test case, the values of the state variables are read and verified by comparing them with the expected values. After an object is created, the values of the state variables are verified using their getter methods. Then a method of the object is called and the values of the state variables are verified against the expected values again. A step in the abstract test suite ontology corresponds to a method call and state variable verification. If the values of the state variables are not as expected or the method throws an exception, the test case fails. Finally the object is deleted from memory.

- A constructor is called to create an object.
- The values of the state variables are verified.
- A method of the object is called.
- The values of the state variables are verified.
- The object is deleted from the memory.

Some exceptions are expected to be thrown after a method call. These can be defined as expected exceptions; in this case, if the expected exceptions are not thrown, the test case fails. If the API of an automated testing framework such as JUnit is used, the tests can be executed and verified automatically. Appendix A.8 describes the structure of JUnit test cases.

3.3 Transformation Phases

This section describes the phases of transformation of the specifications. Phase 1 generates test objectives. Phases 2 and 3 incrementally generate the abstract test suite. Phase 4 translates the abstract test suite into an executable test suite.

Test Objective Generation Phase

During this phase initial test case selection is conducted, and the output, a very high level test suite, is presented as a set of test objectives. These objectives are generated based on the behavioral model, the expert knowledge, and the coverage criteria rules. This information set is externalized and segregated into rules and ontologies, with only the decision making algorithm hard-coded. This makes the overall method extensible to support various coverage criteria.

The behavioral model is a model that specifies the behavior of the system. In object-oriented design and development, independently of this method, the model can be a state machine diagram, a sequence diagram, or an activity diagram. The model is represented in an ontology, which enables reasoning as well as extension. Expert knowledge is represented in an expert knowledge ontology. This ontology provides information that is needed for decision making but is not included in standard system behavioral models. It imports the behavioral model ontology and adds new classes, properties, etc. to it. The expert knowledge ontology and the behavioral model ontology together define the knowledge that is used for test case selection. Based on the vocabulary defined by these two ontologies, coverage criteria rules define what test cases should be included in the test suite: they use the information provided by the ontologies to specify the test cases that must be included for the coverage criteria to be satisfied.

The generated test suite is specified by a set of test objectives: the specifications of the test cases that are required to be included in the test suite. The test objectives enable the system to separate the test selection process from the test generation process. This separation has two advantages: first, different test generation algorithms can be used to generate test cases from the test objectives; second, it makes it possible to use different test case selection strategies. A further advantage is that the test objectives serve as a language that the test expert can use to compose the test suite manually or to modify the generated test suite.

The behavioral model and expert knowledge ontologies together define the system and the additional knowledge needed for decision making with regard to selecting test objectives. Coverage criteria, conversely, define selection rules that specify what test cases should be included in the test suite based on the provided knowledge.
For instance, a rule can be defined as follows:

A test case that passes the transition with a wrong input :-
    a transition calls a method in its action,
    and the method receives an input from the user,
    and the input can be wrong based on the business logic.

For this example, the expert knowledge should specify the methods that interact with the users and the erroneous input values that are likely to be given to the system. Then test cases that satisfy the coverage criterion can be identified. The first part of the rule specifies the test objectives that must be selected and the second part specifies the test objective selection criteria. During this phase, formal knowledge representation languages such as OWL-DL and rule languages such as RuleML can be used to represent the specifications. Reasoning engines such as OO jDREW can be used for identification of the test objectives.

Redundancy Checking Phase

The goal of this phase is to avoid generating test cases for test objectives that are already satisfied by the test suite. When a test case is generated for a test objective and added to the test suite, it is possible that the generated test case also satisfies another test objective that would otherwise be processed later. To avoid generating test cases for a test objective that is already satisfied by the test suite, the test suite is examined before the test objective is passed to phase 3 for test case generation. A non-redundant test objective is given to the abstract test suite ontology generation phase for test case generation before the system continues to check another test objective for redundancy.

The information sources used during this phase are the test objectives and their corresponding redundancy checking rule templates, the test suite ontology, and the behavioral model ontology. For each test objective, a redundancy checking rule is generated from the redundancy checking rule template of the test objective predicate by replacing the parameters of the template with the arguments of the test objective. The redundancy checking rules refer to the test suite ontology, which provides information about the steps of the test cases and the values of the state variables at each step. The rules also refer to the behavioral model ontology to examine other properties of the steps of the test cases in the test suite. Referring to the information provided by the behavioral model, the test suite can be examined to determine whether it contains a test case with the specification given by the redundancy checking rule, and therefore whether a test case should be generated for the given test objective. As in phase 1, formal knowledge representation languages such as OWL-DL and rule languages such as RuleML can be used to represent the specifications, and reasoning engines such as OO jDREW can be used to examine the test suite for the satisfaction of the test objectives.

Abstract Test Suite Ontology Generation Phase

The goal of this phase is the generation of an abstract test suite, which is implementation-independent. The abstract test suite is written in an ontology and merely describes the test cases of the test suite by specifying their steps and the values of the state variables at each step. Using an abstract test suite ontology instead of generating the test cases directly has several advantages: first, the test cases can be generated before the detailed design is final and implementation decisions are made; second, it enables the system to reason on the test suite to determine whether a test case with a given specification exists in it (in the redundancy checking phase); third, it makes it possible to extend the system with coverage criteria that decide, based on the test suite, whether enough test cases are included in it.

The information sets used in this phase are the test objectives and the behavioral model ontology. A path in the behavioral model ontology is generated such that it conforms to the requirements of the given test objective. The generated path is the abstract test case for that test objective and is added to the abstract test suite ontology. Then the redundancy checking phase provides another test objective, and the test suite ontology is generated incrementally. This phase performs test case generation while the former two phases perform test case selection. An advantage of separating this phase from the others is that technologies that have been widely used for test case generation can be employed, such as AI planning, model checking, or graph traversal algorithms.

Executable Test Suite Generation Phase

This phase generates a test suite that can be executed by an automated software testing framework. Separating this phase enables support for various programming languages and automated testing frameworks. The abstract test suite ontology, which is generated in the abstract test suite ontology generation phase, is given as input to this phase, together with the behavioral model ontology and the implementation knowledge ontology. The implementation knowledge ontology imports the behavioral model ontology and adds the information that is required for generating an executable test case: the names used in the implementation, method schemas and the order of their parameters, the names of the setters and getters of variables, the names of packages or namespaces, etc. With this information the steps of the test cases are translated into an executable test suite.
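The four phases can be summarized as a single driver loop. The sketch below uses plain Java generics as placeholders for the actual artifacts (test objectives, abstract tests, executable tests); the thesis realizes each phase with ontologies, rules, and a reasoner, so every name here is illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiPredicate;
import java.util.function.Function;

// O = test objective, T = abstract test case, E = executable test case.
public class Pipeline {
    public static <O, T, E> List<E> run(
            List<O> objectives,                 // output of phase 1
            BiPredicate<O, List<T>> satisfied,  // phase 2: redundancy check
            Function<O, T> generate,            // phase 3: abstract test generation
            Function<T, E> translate) {         // phase 4: executable translation
        List<T> abstractSuite = new ArrayList<>();
        for (O objective : objectives) {
            // Skip objectives already satisfied by the growing abstract suite.
            if (!satisfied.test(objective, abstractSuite)) {
                abstractSuite.add(generate.apply(objective));
            }
        }
        List<E> executableSuite = new ArrayList<>();
        for (T test : abstractSuite) {
            executableSuite.add(translate.apply(test));
        }
        return executableSuite;
    }

    public static void main(String[] args) {
        // Toy run: objectives are strings, and an objective counts as
        // satisfied when an identical abstract test is already present.
        List<String> result = run(
                List.of("a", "b", "a"),
                (o, suite) -> suite.contains(o),
                o -> o,
                t -> t.toUpperCase());
        System.out.println(result); // [A, B]
    }
}
```

The loop makes the incremental character of phases 2 and 3 explicit: redundancy is always checked against the suite built so far, not against the final suite.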

3.4 A Simple Example

In this section a simple door class example is used to describe the method. The specifications and the phases of the method are described.

Elevator Door Example

The class under test is an elevator door class. Figure 3.6 depicts the class diagram and the state machine model of the door class. The class has one state variable named Open, which has a getter method isOpen. The door is initially closed. There are two methods, PressOpenKey and PressCloseKey, which can be called when the door is closed and open, respectively. The PressOpenKey method changes the value of Open to true and the PressCloseKey method changes the value of Open to false.

[Figure 3.6: Elevator door class and its state machine]

Specifications

In this section the specifications that are used or generated by the method are described.

Behavioral Model Ontology

Figure 3.7 illustrates part of the ABox of the door class state machine. An instance of the ontology class StateMachine called doorstatemachine represents the state machine of the door class. The two states of the system, open and closed, are defined by instantiating State. The system also has a StartState named startstate and a FinalState named finalstate_1. The four transitions of the state machine, starttoclosed, opentoclosed, closedtoopen, and closedtofinal, are defined by instantiating Transition. The state variable Open is defined by instantiating StateVariable. These structural elements are referenced by the doorstatemachine instance through the properties states, transitions, and vars, respectively. The starttoclosed transition, which goes from the startstate to the closedstate, is traversed when an object of the Door class is created. The Event of this transition is named new and corresponds to a call to the constructor of Door. The Action of this transition is named init; it initializes the value of the Open state variable to false. The listing below shows part of this ontology in OWL. Appendix B.1 includes the OWL description of the door state machine.

<smuri:Transition rdf:ID="starttoclosed">
  <smuri:From>
    <smuri:StartState rdf:ID="startstate">
      <smuri:Out rdf:resource="file:/Users/input/door.owl#starttoclosed"/>
    </smuri:StartState>
  </smuri:From>
  <smuri:Action>
    <smuri:Behaviour rdf:ID="init">
      <smuri:Behaviour_desc rdf:datatype="http://www.w3.org/2001/XMLSchema#string">Open=false;</smuri:Behaviour_desc>
    </smuri:Behaviour>
  </smuri:Action>
  <smuri:To>
    <smuri:State rdf:ID="closedstate">
    </smuri:State>
  </smuri:To>
  <smuri:Event>
    <smuri:Call rdf:ID="new">
    </smuri:Call>
  </smuri:Event>

[Figure 3.7: Part of ABox of the door state machine]

</smuri:Transition>

<smuri:stateVariable rdf:ID="open">
  <smuri:name rdf:datatype="http://www.w3.org/2001/XMLSchema#string">Open</smuri:name>
  <smuri:InitBooleanValue rdf:datatype="http://www.w3.org/2001/XMLSchema#boolean">true</smuri:InitBooleanValue>
</smuri:stateVariable>

Coverage Criteria Rule

The coverage criterion used in this example is transition coverage, which requires that every transition of the state machine be covered by at least one test case. The code of this coverage criterion in POSL is shown below. It means that if there exists a transition ?t1, then the test objective [covertransition],[?t1] is generated.

coverage([covertransition],[?t1]) :- transition(?t1).

Test Objectives

The following test objectives are generated for transition coverage:

[covertransition],[starttoclosed]
[covertransition],[closedtoopen]
[covertransition],[opentoclosed]
[covertransition],[closedtofinal]

Generating test cases for these test objectives ensures that all of the transitions in the state machine are covered at least once.

Expert Knowledge Ontology

This example does not need an extension of the state machine ontology with expert knowledge. The only knowledge used for the selection of test cases is the set of transitions of the state machine, which is described by the state machine ontology.
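What the reasoner derives from the transition coverage rule for the door state machine can be sketched as a mapping over the transition facts. The class and method names below are illustrative, not part of the thesis's implementation.

```java
import java.util.List;

// Sketch of evaluating coverage([covertransition],[?t1]) :- transition(?t1):
// every fact transition(t) yields one test objective [covertransition],[t].
public class TransitionCoverage {
    public static List<String> objectives(List<String> transitions) {
        return transitions.stream()
                .map(t -> "[covertransition],[" + t + "]")
                .toList();
    }

    public static void main(String[] args) {
        List<String> doorTransitions = List.of("starttoclosed",
                "closedtoopen", "opentoclosed", "closedtofinal");
        // Prints the four door test objectives, one per line.
        objectives(doorTransitions).forEach(System.out::println);
    }
}
```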

Abstract Test Suite Ontology

The listing below delineates a test in the abstract test suite ontology. This test includes two steps: test0step0 and test0step1. In test0step0, transition starttoclosed of the state machine is traversed. In test0step1, transition closedtofinal of the state machine is traversed. The value of the Open state variable is indicated as the outcome of each step; in the test below, Open is false after both of the steps. Appendix B.2 includes the OWL description of the test suite.

<j.0:test rdf:about="http://www.valeh.com#test0">
  <j.0:hasstep>
    <j.0:step rdf:about="http://www.valeh.com#test0step0">
      <j.0:hascall rdf:resource="file:/Users/input/door.owl#starttoclosed"/>
      <j.0:outcome>
        <j.0:variablevalue rdf:about="http://www.valeh.com#test0starttoclosed1varvalue0">
          <j.0:hasvariable rdf:resource="file:/Users/input/door.owl#open"/>
          <j.0:hasbooleanvalue rdf:datatype="http://www.w3.org/2001/XMLSchema#boolean">false</j.0:hasbooleanvalue>
        </j.0:variablevalue>
      </j.0:outcome>
      <j.0:nextstep>
        <j.0:step rdf:about="http://www.valeh.com#test0step1">
          <j.0:hascall rdf:resource="file:/Users/input/door.owl#closedtofinal"/>
          <j.0:outcome>
            <j.0:variablevalue rdf:about="http://www.valeh.com#test0closedtofinal2varvalue0">
              <j.0:hasvariable rdf:resource="file:/Users/input/door.owl#open"/>
              <j.0:hasbooleanvalue rdf:datatype="http://www.w3.org/2001/XMLSchema#boolean">false</j.0:hasbooleanvalue>
            </j.0:variablevalue>
          </j.0:outcome>
        </j.0:step>
      </j.0:nextstep>
    </j.0:step>
  </j.0:hasstep>

  <j.0:hasstep rdf:resource="http://www.valeh.com#test0step1"/>
</j.0:test>

Redundancy Checking Rule Templates

The redundancy checking rule template for a test objective that uses the covertransition test objective predicate is as follows:

$covertransition
exist() :- test(?t), hascall(?stepname1,#0), hasstep(?t,?stepname1).

A redundancy checking rule is generated by substituting the argument of the test objective in place of #0. The number following # indicates the index of the parameter in the parameter list of the test objective. The resulting rule means that if there is a test ?t that has a step ?stepname1, and ?stepname1 has the transition #0 as its call, then the test objective is satisfied by the test suite.

Implementation Knowledge Ontology

Figure 3.8 depicts the implementation knowledge ontology of the door class. This ontology describes the names of the class and the enclosing package, Door and Elevator, respectively. The method of the Door class that corresponds to the pressopenkey call of the state machine is PressOpenKey. The name of the member variable of the Door class that corresponds to the open state variable is Open, and its getter method is named isopened. The listing below shows part of this ontology in OWL. Appendix B.3 includes the OWL description of the implementation knowledge ontology of the Door class.

<imp:implementedgettermethod rdf:ID="isopenedgettermethod">
  <imp:name rdf:datatype="http://www.w3.org/2001/XMLSchema#string">isopened</imp:name>
</imp:implementedgettermethod>

<imp:implementedmethod rdf:ID="OpenKeyPressImplementedMethod">
  <imp:hasCall>
    <rdf:Description rdf:about="file:/Users/input/door.owl#pressopenkey">
      <imp:inverseofhasCall rdf:resource="file:/Users/input/doorimp.owl#OpenKeyPressImplementedMethod"/>
    </rdf:Description>
  </imp:hasCall>
  <imp:name rdf:datatype="http://www.w3.org/2001/XMLSchema#string">PressOpenKey</imp:name>
</imp:implementedmethod>

<imp:implementedstatevariable rdf:ID="OpenStateVariable">
  <imp:name rdf:datatype="http://www.w3.org/2001/XMLSchema#string">Open</imp:name>
  <imp:hasGetterMethod rdf:resource="file:/Users/input/doorimp.owl#isopenedgettermethod"/>
  <imp:hasStateVariable>
    <rdf:Description rdf:about="file:/Users/input/door.owl#open">
      <imp:inverseofhasStateVariable rdf:resource="file:/Users/input/doorimp.owl#OpenStateVariable"/>
    </rdf:Description>
  </imp:hasStateVariable>
</imp:implementedstatevariable>

Executable Test Suite

The code below shows a test case in the JUnit test suite of the door class. The test suite is the DoorTest class, which extends the TestCase class of the JUnit framework. The methods of the DoorTest class are the test cases. The test case named test0 creates an object of the Door class and then checks whether the value of the Open state variable is initialized to false. The object is reclaimed by the garbage collector after the method exits. Appendix B.4 includes the JUnit test suite of the Door class.
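The thesis does not list the source of the class under test. A sketch of Door, reconstructed from the class diagram and state machine of Figure 3.6 and from the getter name recorded in the implementation knowledge ontology, could look as follows (the package declaration is omitted to keep the snippet self-contained).

```java
// Reconstruction of the Door class under test (an assumption based on
// Figure 3.6; the thesis only gives its model, not its source code).
public class Door {
    private boolean open;

    public Door() {
        open = false;           // new() [] / Open = false
    }

    public void PressOpenKey() {
        open = true;            // PressOpenKey() [] / Open = true
    }

    public void PressCloseKey() {
        open = false;           // PressCloseKey() [] / Open = false
    }

    public boolean isopened() { // getter name from the implementation ontology
        return open;
    }
}
```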

[Figure 3.8: Part of ABox of implementation knowledge ontology of the Door class]

package unittests;

import Elevator.Door;

public class DoorTest extends TestCase {

    public void test0() {
        Door uot = new Door();
        assertFalse("isopened is false", uot.isopened());
    }
}

Transformation Phases

This section describes the transformations of the specifications at each phase.

Test Objective Generation Phase

In this phase, a set of test objectives is generated based on the state machine ontology of the Door class and the coverage criterion. The coverage criterion in this example is all-transition coverage:

coverage([covertransition],[?t1]) :- transition(?t1).

Thus, for every transition defined in the state machine, a test objective is generated that covers that transition. Given the Door class state machine ontology with the four transitions defined below, the variable ?t1 in the body of the coverage criterion rule is unified with four values. Hence, the four test objectives listed below are generated:

<smuri:Transition rdf:ID="starttoclosed"> ... </smuri:Transition>
<smuri:Transition rdf:ID="closedtoopen"> ... </smuri:Transition>
<smuri:Transition rdf:ID="opentoclosed"> ... </smuri:Transition>
<smuri:Transition rdf:ID="closedtofinal"> ... </smuri:Transition>

[covertransition],[starttoclosed]
[covertransition],[closedtofinal]
[covertransition],[closedtoopen]
[covertransition],[opentoclosed]

Redundancy Checking Phase

One test objective is selected at a time. First [covertransition],[starttoclosed] is selected. Based on the redundancy checking rule template for the covertransition test objective predicate, a redundancy checking rule is generated by substituting the parameters into the template:

Test Objective:
[covertransition],[starttoclosed]

Redundancy Checking Rule Template:
$covertransition
exist() :- test(?t), hascall(?stepname1,#0), hasstep(?t,?stepname1).

Redundancy Checking Rule:
exist() :- test(?t), hascall(?stepname1, starttoclosed), hasstep(?t,?stepname1).

Then the test suite ontology is examined based on the generated redundancy checking rule. At this point the ABox of the test suite ontology is empty, because no test case has been generated yet. Hence [covertransition],[starttoclosed] is given to the next phase for test case generation. After a test case has been generated for the [covertransition],[starttoclosed] test objective, another test objective is selected. The [covertransition],[closedtofinal] test objective is selected and the following redundancy checking rule is generated for it:

Redundancy Checking Rule:
exist() :- test(?t), hascall(?stepname1, closedtofinal), hasstep(?t,?stepname1).

This time there is a test named test0, which has a step test0step1 with the call closedtofinal. The corresponding piece of the test suite ontology ABox is shown below. Therefore, the [covertransition],[closedtofinal] test objective is already satisfied by test0 and is discarded. Then another test objective is selected, until all of the test objectives have been examined.

<j.0:test rdf:about="http://www.valeh.com#test0">
  <j.0:hasstep>
    ...
    <j.0:step rdf:about="http://www.valeh.com#test0step1">
      <j.0:hascall rdf:resource="file:/Users/input/door.owl#closedtofinal"/>
      ...
    </j.0:step>
  </j.0:hasstep>
</j.0:test>
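The template instantiation and the subsequent check can be sketched over a plain representation of the abstract test suite: a map from test names to the ordered hascall values of their steps. All names below are illustrative, since the thesis performs this step by reasoning over the ontologies rather than in Java.

```java
import java.util.List;
import java.util.Map;

public class RedundancyCheck {

    // Instantiate a redundancy checking rule from its template: the
    // placeholder #i is replaced by the i-th argument of the test objective.
    public static String instantiate(String template, List<String> args) {
        String rule = template;
        for (int i = 0; i < args.size(); i++) {
            rule = rule.replace("#" + i, args.get(i));
        }
        return rule;
    }

    // The generated rule succeeds when some step of some test calls the
    // target transition; here the suite maps test names to hascall values.
    public static boolean satisfied(String transition,
                                    Map<String, List<String>> suite) {
        return suite.values().stream()
                .anyMatch(steps -> steps.contains(transition));
    }

    public static void main(String[] args) {
        String template =
                "exist() :- test(?t), hascall(?stepname1,#0), hasstep(?t,?stepname1).";
        System.out.println(instantiate(template, List.of("closedtofinal")));

        Map<String, List<String>> suite =
                Map.of("test0", List.of("starttoclosed", "closedtofinal"));
        System.out.println(satisfied("closedtofinal", suite)); // true: discard objective
        System.out.println(satisfied("closedtoopen", suite));  // false: generate a test
    }
}
```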

Abstract Test Suite Ontology Generation Phase

For a given test objective, based on the door state machine, a path from the start state to the final state is generated to satisfy the test objective. For the [covertransition],[starttoclosed] test objective, a path needs to be generated that covers the transition starttoclosed. As shown in the listing below, a test case corresponding to the path is added to the test suite ontology ABox for reasoning. The test case generated for the [covertransition],[starttoclosed] test objective has two steps: test0step0 and test0step1. The transition that is passed at each step is used as the value of the hascall property of the step: starttoclosed for test0step0 and closedtofinal for test0step1. The order of the steps is specified with the nextstep property of a step; test0step1 is the next step of test0step0. At each step the value of the Open state variable is specified in the outcome property of the step.

<j.0:test rdf:about="http://www.valeh.com#test0">
  <j.0:hasstep>
    <j.0:step rdf:about="http://www.valeh.com#test0step0">
      <j.0:hascall rdf:resource="file:/Users/input/door.owl#starttoclosed"/>
      <j.0:outcome> ... </j.0:outcome>
      <j.0:nextstep>
        <j.0:step rdf:about="http://www.valeh.com#test0step1">
          <j.0:hascall rdf:resource="file:/Users/input/door.owl#closedtofinal"/>
          <j.0:outcome> ... </j.0:outcome>
        </j.0:step>
      </j.0:nextstep>
    </j.0:step>
  </j.0:hasstep>
  <j.0:hasstep rdf:resource="http://www.valeh.com#test0step1"/>
</j.0:test>
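The path generation of this phase is left open to AI planning, model checking, or graph traversal; the graph traversal variant can be sketched as a breadth-first search that finds a transition sequence from the start state to the final state passing the target transition. All names below are illustrative.

```java
import java.util.*;

// Sketch: breadth-first search over a state machine for a shortest path that
// reaches the final state and passes a given target transition.
public class PathGenerator {
    record Transition(String name, String from, String to) {}

    public static List<String> pathCovering(String target,
                                            List<Transition> machine,
                                            String start, String fin) {
        // Search over (state, coveredTarget) pairs so the returned path both
        // reaches `fin` and passes `target`.
        record Node(String state, boolean covered, List<String> path) {}
        Deque<Node> queue = new ArrayDeque<>();
        queue.add(new Node(start, false, List.of()));
        Set<String> seen = new HashSet<>();
        while (!queue.isEmpty()) {
            Node n = queue.poll();
            if (n.state().equals(fin) && n.covered()) return n.path();
            if (!seen.add(n.state() + "/" + n.covered())) continue;
            for (Transition t : machine) {
                if (!t.from().equals(n.state())) continue;
                List<String> extended = new ArrayList<>(n.path());
                extended.add(t.name());
                queue.add(new Node(t.to(),
                        n.covered() || t.name().equals(target), extended));
            }
        }
        return List.of(); // no covering path exists
    }

    public static void main(String[] args) {
        List<Transition> door = List.of(
                new Transition("starttoclosed", "startstate", "closedstate"),
                new Transition("closedtoopen", "closedstate", "openstate"),
                new Transition("opentoclosed", "openstate", "closedstate"),
                new Transition("closedtofinal", "closedstate", "finalstate"));
        System.out.println(pathCovering("starttoclosed", door,
                "startstate", "finalstate")); // [starttoclosed, closedtofinal]
    }
}
```

On the door machine this reproduces the two-step test of the listing above for the starttoclosed objective, and a four-step path for the closedtoopen objective.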

66 Executable Test Suite Generation Phase Based on the generated test suite ontology and implementation knowledge, the executable test suite is generated. For testing a Java class, JUnit automated testing framework can be used. To generate JUnit code, the name of the class, which is Door, and the name of the package, which is Elevator, are retrieved from the implementation knowledge ontology. The test suite is generated by extending the TestCase class of the JUnit framework. For each test, a method is generated. Hence for test0 a method called test0() is added to the class. For each step of a test case in the abstract test suite ontology, from the hascall property of the step, the name of the transition which is traversed at that step is extracted. For test0step0, the name of the transition is starttoclosed, which has the event init. In this case, since the source of the starttoclosed transition is the start state, the init event is mapped to the constructor of the class. The generated test case for test0 is shown below. p u b l i c void test0 ( ) { Door uot = new Door ( ) ; assertfalse ( isopened i s f a l s e, uot. isopened ( ) ) ; } If a transition of a step is not from a start state and to a final state, based on the name of the transition, the name of the event of the transition is taken from the state machine ontology. The name of its event is used to extract the implementation information from the implementation knowledge ontology. For instance for the pressopenkey event, the following piece of implementation knowledge ontology is used to extract the name of the corresponding implemented method which is PressOpenKey. <imp : implementedgettermethod r d f : ID= isopenedgettermethod > <imp : name name> r d f : datatype= http : / /www. w3. org /2001/XMLSchema#s t r i n g >isopened</imp : 54

</imp:implementedgettermethod>
<imp:implementedmethod rdf:ID="OpenKeyPressImplementedMethod">
  <imp:hasCall>
    <rdf:Description rdf:about="file:/Users/input/door.owl#pressopenkey">
      <imp:inverseofhasCall rdf:resource="file:/Users/input/doorimp.owl#OpenKeyPressImplementedMethod"/>
    </rdf:Description>
  </imp:hasCall>
  <imp:name rdf:datatype="http://www.w3.org/2001/XMLSchema#string">PressOpenKey</imp:name>
</imp:implementedmethod>

For a state variable, the name of the getter method is extracted from the implementation knowledge ontology. Based on the value of the state variable at each step, which is specified by the outcome property of the step, the correct value of the state variable is asserted using the JUnit assert methods. For instance, assertFalse("isopened is false", uot.isopened()) requires that the value returned by the isopened getter method be false; otherwise the test case fails.

3.5 Summary

The ontology-based test case generation method uses ontologies, rules, and reasoning to promote separation of concerns in test case generation; this makes the method flexible in supporting various coverage criteria, test domains, and software models, and increases the control of test experts. The method generates test cases from the behavioral model specification of the system, the coverage criteria specification, expert knowledge, and implementation knowledge. Based on these inputs, it generates test cases in four phases. The behavioral model specification, coverage criteria specification, and expert knowledge, which describe different aspects of the problem domain, are used to generate test objectives in the test objective generation phase. The test objectives,

which are in the solution domain, describe the required test suite in a very high-level language. The test objectives are translated into abstract test cases. An abstract test case might satisfy several test objectives. Hence, before an abstract test case is generated for a test objective, the partially generated abstract test suite ontology is examined for the existence of a test case that already satisfies the test objective. This is done using redundancy checking rules, which are generated for every test objective. The abstract test suite ontology describes the test cases as a list of steps and the states of the system. This test suite is not executable and is based on the design and requirements. The abstract test suite ontology is then translated into an executable test suite using the implementation knowledge. The system transforms the high-level testing requirements and system specification into a lower-level test suite in each phase. During each phase, an off-the-shelf tool can be used to perform the main transformation algorithm. The inputs of the system are ontologies and rules, which are highly modifiable. Reasoning algorithms, which are independent of what is being expressed, are used to make the system extensible to various coverage criteria.
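The interplay between redundancy checking and test case generation can be illustrated on the door example: one generated path may satisfy several test objectives, so later objectives are detected as redundant and skipped. The sketch below is illustrative only; the actual system performs this check by reasoning over POSL rules, not by string matching:

```java
import java.util.ArrayList;
import java.util.List;

public class PipelineSketch {
    public static void main(String[] args) {
        // Phase 1 output: test objectives (hard-coded here for the door example).
        List<String> objectives = List.of(
            "[covertransition],[starttoclosed]",
            "[covertransition],[closedtofinal]");

        List<List<String>> abstractSuite = new ArrayList<>();
        for (String objective : objectives) {
            // Phase 2: skip the objective if an existing test already covers it.
            String transition = objective.substring(
                objective.lastIndexOf('[') + 1, objective.length() - 1);
            boolean redundant = abstractSuite.stream()
                .anyMatch(test -> test.contains(transition));
            if (redundant) continue;
            // Phase 3 (stand-in for the planner): a path covering the
            // transition; this single path happens to cover both transitions.
            abstractSuite.add(List.of("starttoclosed", "closedtofinal"));
        }
        // The second objective is found redundant, so one test case suffices.
        System.out.println(abstractSuite.size());
    }
}
```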

Chapter 4

System Design

This chapter describes the design of a prototype that automates transformations of specifications based on the ontology-based methodology described in Chapter 3. An overview of the system and the flow of data in it is discussed in Section 4.1. Section 4.2 describes the subsystems and their interactions. An operation scenario of the system, which is established for demonstrating an implementation of the system, is discussed in Section 4.3. Section 4.4 summarizes this chapter.

4.1 System Overview

Figure 4.1 shows a data flow diagram of the design of the system, which has three main processes: Test Objective Generation, which automates the test objective generation phase of the method; Redundancy Checking, which automates the redundancy checking phase of the method; and Test case Generation, which is responsible for the abstract test suite ontology generation and the executable test suite generation phases of the method. The Test Objective Generation process uses a reasoner to generate the test objectives based on a behavioral model ontology, an expert knowledge ontology, and

Figure 4.1: High level data flow diagram of the system

coverage criteria (phase 1 of the method). The Test Redundancy Checking process also uses a reasoner to examine a test suite ontology for the existence of a test case that satisfies a given test objective, using test objective redundancy checking rule templates (phase 2 of the method). The Test case Generation process performs the abstract test suite ontology generation and the executable test suite generation phases of the method. It consists of four subprocesses. The Initialization subprocess initializes the inputs to the Test case Generator subprocess based on the behavioral model ontology and the selected test objectives. The Test case Generator subprocess generates test cases. This subprocess can be implemented using different technologies, including AI planning, graph traversal, or model checking. The generated test

Figure 4.2: Technologies for realizing the data flow diagram of the system

cases are written to the test suite ontology by the Ontology Test Writer subprocess. At this point, the abstract test suite generation of the method is done. Finally, the Executable Test Writer subprocess generates the executable test cases for a particular language. For generating test cases in different languages, different Executable Test Writers can be used. The data flow diagram depicted in Figure 4.1 is abstract and can be realized with various technologies. Figure 4.2 depicts a realization of the system for state machine based unit testing with technologies including OWL-DL, POSL, OO jDREW, and an AI planner named Metric-FF [43]. The UML state machine, expert knowledge, implementation knowledge, and test

suite are represented in OWL-DL [31]. TBox ontologies define the concepts and the relationships among them, and ABox ontologies import the TBox ontologies to instantiate the elements defined in them. The TBoxes are reusable, while the ABoxes are specific to a single unit under test. The Ontology Definition Metamodel (ODM), which is adopted by the OMG, has a section that describes the UML 2.0 metamodel in OWL-DL. However, a prototype ontology, which is much simpler though less modifiable, is sufficient for this work and is used instead. The XMI [23] representation of the UML state machine can be converted to the ontology-based representation automatically. The implementation knowledge can be automatically imported when the source code of the unit is available. The Test Objective Generation and Redundancy Checking processes use OO jDREW [38] for reasoning tasks. The OWL-DL ontologies are first transformed into POSL [36]. The coverage criteria are provided in POSL, and the test redundancy rule templates are translated to POSL. The Test case Generation process uses an AI planner called Metric-FF [43] in the Test case Generator subprocess. The inputs to Metric-FF are the problem and domain descriptions in the PDDL 2.1 language [5], which are provided by the PDDL Generation subprocess. The inputs of the planner are initialized based on data from the state machine and the structure predicates of a test objective. The generated test cases include the methods to be called at each step, their inputs, and the expected values of the state variables. The generated test cases are then given to the Test Suite Writer subprocess to be written back to the test suite ontology in OWL-DL, from which the JUnit test cases are generated by the JUnit Test Writer subprocess.
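The OWL-to-POSL transformation can be pictured as turning each ABox object-property statement (subject, property, object) into a POSL fact property(subject, object). The actual mapping used by the prototype is given in Appendix A.2 of the thesis; this sketch, with illustrative triple values from the door example, only shows the general shape:

```java
public class OwlToPoslSketch {
    // Render one object-property triple as a POSL fact.
    static String toPoslFact(String subject, String property, String object) {
        return property + "(" + subject + ", " + object + ").";
    }

    public static void main(String[] args) {
        System.out.println(toPoslFact("test0step0", "hascall", "starttoclosed"));
        System.out.println(toPoslFact("test0step0", "nextstep", "test0step1"));
    }
}
```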

4.2 Design Classes

This section details the high-level class diagrams for the system, which has three main subsystems: the test objective generation subsystem, the redundancy checking subsystem, and the test case generation subsystem.

4.2.1 Test Objective Generation Subsystem

Figure 4.3 depicts the high-level class diagram for the test objective generation subsystem together with its activity diagram. The TestObjectiveGenerator uses the POSLReasoner for generating POSL files and reasoning on them.

Figure 4.3: The class diagram and activity diagram of the test objective generation subsystem

The TestObjectiveGenerator has a process method. When the process method is called, the readonto and readcoveragecriteria methods are called to read the state machine ontology and the coverage criteria, which are stored in the theontmodel and coveragecriterion properties, respectively. To read the ontology, the TestObjectiveGenerator can use the Jena API or the OWL API. The process method then uses the POSLReasoner for converting the ontology stored in the theontmodel property into POSL and reasoning on it. For this purpose, it first calls the POSLWriter to write the ontology into POSL format. Then, it calls the loadrules method to load the generated POSL and a coverage criterion. Finally, the reason method is called with a query, and the results are written to the resultfileaddress. The POSLReasoner uses OO jDREW for loading the POSL rules and performing reasoning.

4.2.2 Redundancy Checking Subsystem

Figure 4.4 depicts the high-level class diagram for the redundancy checking subsystem together with its activity diagram. The RedundancyChecking class uses the POSLReasoner for writing POSL files and reasoning on them.

Figure 4.4: The class diagram and activity diagram of the redundancy checking subsystem

The RedundancyChecking class has a process method. When the process method is called, the readtestsuiteonto method is called to read the test suite ontology, which

is stored in the theontmodel property. Then it calls the generateredundancycheckingrule method, which returns a redundancy checking rule for the given test objective. The generated redundancy rule is stored in the redundancyrule property. The process method then uses the POSLReasoner for converting the ontology stored in the theontmodel property into POSL and reasoning on it. For this purpose, it first calls the POSLWriter to write the ontology into POSL format. It also calls the loadrules method to load the generated POSL and the redundancy checking rule. Finally, the reason method is called with a query, and the results are written to the resultfileaddress. The POSLReasoner uses OO jDREW for loading the POSL rules and performing reasoning.

4.2.3 Test case Generation Subsystem

Figure 4.5 depicts the high-level class diagram of the test case generation subsystem. It has five classes: thetestgeneratorcontroller uses the other classes for test case generation and controls the process; PDDLGeneration initializes the input of the planner by generating PDDL domain and problem files; TestcaseGeneration generates test cases; OntologyTestSuiteWriter adds the generated test cases to the test suite ontology; and JUnitTestSuiteWriter generates the JUnit test suite from the test suite ontology. To change the test case generation technology, the PDDLGeneration and TestcaseGeneration classes are replaced with a class that initializes the inputs of the new test case generator and a class that implements the test case generator, respectively. The thetestgeneratorcontroller controls the process of generating test cases from test objectives. Figure 4.6 depicts how its process method uses the other classes for test case generation. First, it uses PDDLGeneration for initializing the input to the planner. Then, for every test objective, it uses the RedundancyChecking class to check whether the objective is already satisfied. If it is not, it proceeds to use the TestcaseGeneration

Figure 4.5: Test case generation subsystem

to generate the tests and the OntologyTestSuiteWriter to write the tests to the test suite ontology. Finally, after all of the test objectives are examined, it uses the JUnitTestSuiteWriter to generate the JUnit file. The PDDLGeneration class stores a basepddldomain, which is constructed based on the state machine, and a basepddlproblem, which requires the planner to generate a path from the start state to a final state of the state machine. These two objects are constructed by calling the addstate, addtransition, addguard, and addeffect methods of the PDDLGeneration class.
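One natural PDDL encoding of a state machine, which the base domain described above could use, turns each transition into an action whose precondition is being in the source state and whose effect is reaching the destination state. The exact PDDL 2.1 encoding the prototype feeds to Metric-FF is richer; the predicate names below are illustrative assumptions:

```java
public class PddlSketch {
    // Emit a PDDL action fragment for one state machine transition.
    static String action(String name, String source, String destination) {
        return "(:action " + name + "\n"
             + "  :precondition (at " + source + ")\n"
             + "  :effect (and (not (at " + source + ")) (at " + destination + ")))";
    }

    public static void main(String[] args) {
        // The door example: the starttoclosed transition as a PDDL action.
        System.out.println(action("starttoclosed", "start", "closed"));
    }
}
```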
The generatepddlproblemsanddomains method creates the PDDLDomains and PDDLProblems for the test objectives by altering the basepddldomain and basepddlproblem for every test objective.

Figure 4.6: The activity diagram of the test case generation subsystem

The generated PDDLDomains and PDDLProblems are stored in the domains and problems properties, which are retrieved by calling the getpddldomains and getpddlproblems methods. Then, for every test objective, the process method of the RedundancyChecking subsystem is called. If the process method returns true, the test objective is redundant and is ignored. If it returns false, the runplanner method of the TestcaseGeneration class is called to generate a test case. Then the write method of the theontologytestsuitewriter class is called with the generated test as its parameter. This method adds the generated test case to the test suite ontology. The system then proceeds to pick another test objective for processing. When all of the test objectives are processed, the readimplementationknowledgeonto method of the JUnitTestSuiteWriter is called, which loads the implementation ontology into its implontmodel property. Then the generate method of the JUnitTestSuiteWriter is called, which loads the test suite ontology into its testontmodel property and creates the JUnit test suite accordingly.

4.2.4 System Operation

To operate the system, after creating the objects of the classes and initializing them, the process method of the TestObjectiveGenerator is called to create the test objectives. Then the process method of the thetestgeneratorcontroller is called to create the JUnit test suite for the test objectives (Figure 4.7).
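The final JUnit-writing step can be sketched as a string transformation from an abstract test (an ordered list of events) to test method source. The event-to-method map stands in for the implementation knowledge ontology; the helper names are illustrative, not the prototype's:

```java
import java.util.List;
import java.util.Map;

public class JUnitWriterSketch {
    // Render a test method body from the ordered events of an abstract test.
    static String writeTest(String name, String unit, List<String> events,
                            Map<String, String> eventToMethod) {
        StringBuilder sb = new StringBuilder("public void " + name + "() {\n");
        for (String event : events) {
            if (event.equals("init")) {
                // A transition out of the start state maps to the constructor.
                sb.append("  ").append(unit).append(" uot = new ")
                  .append(unit).append("();\n");
            } else {
                sb.append("  uot.").append(eventToMethod.get(event)).append("();\n");
            }
        }
        sb.append("}");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(writeTest("test0", "Door", List.of("init"),
                Map.of("pressopenkey", "PressOpenKey")));
    }
}
```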

Figure 4.7: The high-level activity diagram of system operation

4.3 System Operation Scenario

The system can be used by test experts to compose domain- or system-specific coverage criteria. The new coverage criteria can be specified by a test expert or selected from a coverage criteria library. The system design can be extended to support plug-in based extension to include new coverage criteria. In the present design, to add new coverage criteria rules, they are composed in POSL and appended to the coverage criteria file. The extra knowledge that is referred to by the coverage criteria rules is added to the expert knowledge ontology. The system can be extended to populate the expert knowledge ontology from other sources such as formal documents and the system code. Besides, the system can be extended to import the test oracles into ontologies from the XMI format exported from existing UML diagrams. If the implementation knowledge ontology is used, the generated test cases are executable and can be automatically executed; otherwise, the generated test cases are executed manually. The system can be extended to support commonly used coverage criteria and

coverage criteria which are developed for the discovery of a specific class of errors. Development of coverage criteria for the discovery of a specific class of errors could be done using an error taxonomy as a reference. Another approach is to specify domain-specific coverage criteria, such as coverage criteria for GUI testing or concurrency testing. This can be helpful for software companies that work in a specific domain. However, there is a limitation on the coverage criteria that can be added to the system without changing its code: they may only use the test objective predicates that are supported by the system. If new test objective predicates are to be used in the coverage criteria, the redundancy checking rule templates for those test objective predicates must be added to the system by appending the rules in POSL to the corresponding file. Besides, the code of the PDDLGeneration subsystem needs to be modified to support initialization of the AI planner with the new test objective predicates. Hence, for the system to be effective, a comprehensive test objective predicate language is required to enable the test experts to define various test objectives, and the system should be extended to support that language.

4.4 Summary

This chapter describes the design of a system for the ontology-based test case generation method. Several specifications are transformed by the system into JUnit test cases. The system has three main subsystems: test objective generation, redundancy checking, and test case generation. The test objective generation subsystem performs the test objective generation phase of the method and generates the test objectives. The redundancy checking subsystem performs the redundancy checking phase of the method and is used by the test case generation subsystem before generating a test case for a test objective.

The test case generation subsystem performs the abstract test suite ontology generation and the executable test suite generation phases of the method. It has five classes: thetestgeneratorcontroller uses the other classes for test case generation and controls the process; PDDLGeneration initializes the input of the planner by generating PDDL domain and problem files; TestcaseGeneration generates test cases; OntologyTestSuiteWriter adds the generated test cases to the test suite ontology; and JUnitTestSuiteWriter generates the JUnit test suite from the test suite ontology.

Chapter 5

System Implementation

This chapter details an implementation of the system for our ontology-based test case generation method, which realizes the design described in Chapter 4. In the system implementation, the term test structure is used instead of test objective, and the term test structure assessment is used instead of test objective redundancy checking. The rest of this chapter is organized as follows: Section 5.1 describes which implementation classes realize the design classes. Section 5.2 describes the packages, the classes included in them, their responsibilities, and their relationships, as well as how they operate together to realize the system behavior. Section 5.3 summarizes this chapter.

5.1 Realization of Design Classes

Table 5.1 shows the mapping of the design classes to the implementation classes. There are eight packages that work together to realize the system behavior: The teststructuregenerator.generator package uses teststructuregenerator.common to generate test objectives. The teststructuregenerator.assessment package reads the test suite ontology

and uses teststructuregenerator.common for redundancy checking. The testcasegenerator.plannerinit package initializes the input of the planner and controls test case generation. The PDDL data structure packages provide data structures for the in-memory representation of PDDL problems and domains. The testcasegenerator.plannerrunner package provides the functionality for running the AI planner. The testcasegenerator.testwriter package provides the functionality for writing the test suite ontology and the executable test suites.

Table 5.1: Mapping of design classes to implementation classes

Design classes: TestObjectiveGenerator, RedundancyChecking, POSLReasoner, PDDLGeneration, PDDLDomain, PDDLProblem, TestcaseGeneration, TestGeneratorController, OntologyTestSuiteWriter, JUnitTestSuiteWriter, Test

Implementation classes: teststructuregenerator.generator.StructureGeneratorController, teststructuregenerator.generator.ExpertKnowledgeReader, teststructuregenerator.generator.StateMachineOWLReader, teststructuregenerator.assessment.TestSuiteOWLReader, teststructuregenerator.assessment.AssessmentRuleGenerator, teststructuregenerator.common.LPWriter, teststructuregenerator.common.Reasoner, testcasegenerator.plannerinit.PDDLWriter, testcasegenerator.plannerinit.TestStructureReader, testcasegenerator.plannerinit.PDDLConstructor, testcasegenerator.plannerinit.datastructures.TestStructurePDDLMap, testcasegenerator.plannerinit.datastructures.PDDLProblem, testcasegenerator.plannerinit.datastructures.PDDLDomain, testcasegenerator.plannerinit.datastructures.DomainProblemPair, testcasegenerator.plannerrunner.RunPlanner, testcasegenerator.plannerinit.PlannerManager, testcasegenerator.plannerinit.datastructures.TestBuilder, testcasegenerator.testwriter.ImplementationKnowledge, testcasegenerator.testwriter.JUnitWriter, testcasegenerator.testwriter.OWLTestSuiteWriter, testcasegenerator.testwriter.Test, testcasegenerator.testwriter.Step

5.2 Detailed Design

This section describes the system packages, the classes included in them, and their responsibilities.
It also describes the relationships between the classes, their implementation, and how they operate together to realize the system behavior.

5.2.1 The teststructuregenerator.generator Package

This package includes three classes (Figure 5.2): StructureGeneratorController, StateMachineOWLReader, and ExpertKnowledgeReader. It uses the classes in teststructuregenerator.common, jointly with the teststructuregenerator.assessment package, for performing reasoning tasks. The StructureGeneratorController's generate method calls the process methods of the StateMachineOWLReader and the ExpertKnowledgeReader. When their process methods are called, they parse the UML state machine and the expert knowledge represented in OWL-DL using the Jena API, and then call the methods of the teststructuregenerator.common.LPWriter class to write them in POSL format. The process method of the StateMachineOWLReader also calls the methods of the testcasegenerator.plannerinit.PDDLConstructor class to construct a PDDL file from the UML state machine. Code that demonstrates the use of the Jena API for parsing OWL-DL files is shown in Appendix C.1. The StructureGeneratorController then uses teststructuregenerator.common.Reasoner for reasoning on the POSL and the coverage criteria rules to generate the test structures.

5.2.2 The teststructuregenerator.assessment Package

This package has two classes, TestSuiteOWLReader and AssessmentRuleGenerator (Figure 5.3). It uses the classes in teststructuregenerator.common, jointly with the teststructuregenerator.generator package, for performing reasoning tasks. The TestSuiteOWLReader has a process method that parses the test suite OWL file and uses the teststructuregenerator.common.LPWriter class to convert it to POSL. It uses the Jena API for parsing the test suite OWL file. Code that demonstrates the use of the Jena API for reading OWL-DL ontologies is shown in Appendix C.1.
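The assessment rules this package checks against are instantiated from per-predicate templates by substituting a test structure's parameters. The sketch below shows the idea with a placeholder token and a POSL-style rule body that are both illustrative assumptions, not the prototype's actual template format:

```java
public class AssessmentRuleSketch {
    // Instantiate a rule template by splicing in a test structure parameter.
    static String instantiate(String template, String parameter) {
        return template.replace("?PARAM", parameter);
    }

    public static void main(String[] args) {
        // Hypothetical template for the covertransition predicate.
        String template =
            "satisfied(?Test) :- hasstep(?Test, ?Step), hascall(?Step, ?PARAM).";
        System.out.println(instantiate(template, "starttoclosed"));
    }
}
```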

Figure 5.2: The teststructuregenerator.generator package

Figure 5.3: The classes of the teststructuregenerator.assessment package

The AssessmentRuleGenerator reads and parses the assessment rule template file and generates a test structure assessment rule for a given test structure. The generateassessmentrule method generates an assessment rule for a given test structure by replacing the parameters of the test structure in the corresponding rule template. The getquery method returns an assessment query for a test structure, which is the head of the generated assessment rule followed by a dot.

5.2.3 The teststructuregenerator.common Package

The classes in this package are jointly used by the teststructuregenerator.assessment and teststructuregenerator.generator packages. This package includes two classes: LPWriter and Reasoner. The Reasoner class uses OO jDREW's BackwardReasoner for reasoning. The Reasoner class has a public method called reason. When it is

called, it parses a knowledge base, the coverage criteria or assessment rules, and a query, and performs reasoning. Code that demonstrates how OO jDREW is used for parsing a knowledge base and a query and for reasoning is shown in Appendix C.4. The input to the OO jDREW reasoner is in POSL format. The LPWriter is used to convert the elements in the OWL-DL knowledge base to POSL knowledge, based on the mappings described in Appendix A.2. Some of the LPWriter methods for writing the POSL file are shown in Appendix C.

5.2.4 The testcasegenerator.plannerinit Package

This package (Figures 5.4 and 5.5) contains four classes: The PlannerManager class is a controller class that uses the other classes to assess the test structures for redundancy, generate the PDDL files, and generate the plans. The PDDLConstructor uses classes from testcasegenerator.plannerinit.datastructures to create and store the in-memory representation of PDDL problems and domains. The PDDLWriter class writes the in-memory representation of PDDL domains and problems into files. The TestStructureReader class reads the test structures and uses the PDDLConstructor to create PDDL files for them. The PlannerManager has a process method that is called to process the test structures. When the process method is called, it first uses an object of the TestStructureReader class to process the test structures and modify the basepddlproblem and basepddldomain objects, which are constructed from the UML state machine, to force the generated plan to satisfy the test structure. The PlannerManager then uses the PDDLWriter to write the generated PDDL files. It uses testcasegenerator.plannerinit.datastructures.TestStructurePDDLMap to maintain the connection between the test structures and the PDDL domain and problem files. Finally, for every test structure, it uses the teststructuregenerator.assessment.AssessmentRuleGenerator class to generate an assessment rule, and

88 lder erator nager tructor tiates>> blem r.plannerinit ctures TestStructureReader TestStructurePDDLMap map; PDDLDomain basedomain; PDDLProblem baseproblem; + TestStructureReader(String, PDDLConstructor, TestStructurePDDLMap, final PDDLDomain, final PDDLProblem) + void process() + ArrayList<String> readteststructurefile() PDDLConstructor PDDLDomain domain; PDDLProblem problem; + PDDLConstructor ( ) + void init ( ) + state ( boolean, boolean, String ) + void transition ( String ) + void from ( String, String ) + void to ( String, String ) + void precondition_eventname ( String, String ) + void precondition_event ( String, String ) + void precondition_guard ( String, String ) + void precondition_guarddesc ( String, String ) + void effect ( String, String ) + void effectdesc ( String, String ) + void statevariable ( String, boolean ) + void finalize ( ) PlannerManager PDDLConstructor pddlconstructor; TestStructurePDDLMap map; PDDLWriter writer; TestBuilder testbuilder; + PlannerManager ( TestBuilder, String, String ) + PDDLDomain getnewpddldomain ( ) + PDDLProblem getnewpddlproblem ( ) + PDDLConstructor getpddlconstructor ( ) + void PDDLDomain getclonepddldomain ( ) + void PDDLProblem getclonepddlproblem ( ) + TestStructurePDDLMap getcurrentmap ( ) + TestStructurePDDLMap getnewmap ( ) + void process ( String, String, String ) PDDLWriter + public PDDLWriter ( ) + void writedomin ( PDDLDomain, String, String ) + void writeproblem ( PDDLProblem, String, String ) Figure 5.4: The classes of the testcasegenerator.plannerinit package converts the Test Suite from OWL to POSL using teststructuregenerator.assessment. TestSuiteOWLReader and teststructuregenerator.common.lpwriter classes, and uses the teststructuregenerator.common.reasoner to assess the test structure for redundancy. If the test structure is not redundant, it uses a testcasegenerator.plannerrunner.runplanner object to run the planner and create test cases. 
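As an illustration of the template-based assessment rule generation described in the teststructuregenerator.assessment section above, the following sketch substitutes test structure parameters into a rule template and forms the assessment query from the rule head followed by a dot. The placeholder syntax (%1, %2, ...) and the example template are assumptions for illustration, not the actual template format of the implementation.

```java
// Illustrative sketch (not the thesis code): generate an assessment rule for a
// test structure by substituting its parameters into a rule template, and form
// the assessment query from the head of the rule followed by a dot.
public class AssessmentRuleSketch {

    // Hypothetical template format: %1, %2, ... stand for test structure parameters.
    static String generateAssessmentRule(String template, String... params) {
        String rule = template;
        for (int i = 0; i < params.length; i++) {
            // Replace each positional placeholder with the concrete parameter.
            rule = rule.replace("%" + (i + 1), params[i]);
        }
        return rule;
    }

    // The assessment query is the head of the generated rule followed by a dot.
    static String getQuery(String assessmentRule) {
        int sep = assessmentRule.indexOf(":-");
        String head = (sep >= 0 ? assessmentRule.substring(0, sep) : assessmentRule).trim();
        return head + ".";
    }

    public static void main(String[] args) {
        String template = "redundant(%1) :- covered(%1).";
        String rule = generateAssessmentRule(template, "openToClosed");
        System.out.println(rule);            // redundant(openToClosed) :- covered(openToClosed).
        System.out.println(getQuery(rule));  // redundant(openToClosed).
    }
}
```

The query string is what would then be handed to the reasoner to check a test structure for redundancy.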
The PDDLConstructor provides methods for constructing the in-memory representation of PDDL domains and problems from UML state machines (i.e., objects of the PDDLDomain and PDDLProblem classes). The PDDLConstructor methods are called by the teststructuregenerator.generator.StateMachineOWLReader class. The generated PDDLDomain object is equivalent to the UML state machine. The generated PDDLProblem requires that a path from the start state of the state machine to the final state be generated. The generated PDDLDomain and PDDLProblem objects are the basePDDLDomain and basePDDLProblem objects. They are cloned, and PDDL elements are added to their copies by the TestStructureReader in order to enforce the test structure. A mapping between the UML state machines and the PDDL domain and problem is described in Section .

Figure 5.5: The testcasegenerator.plannerinit package

The TestStructureReader class reads the test structures and modifies the basePDDLDomain and the basePDDLProblem to force the planner to generate a plan that satisfies the test structure. To add new test structure predicates to the system, this class needs to be modified to support the generation of problems and plans for them. For instance, for the covertransition test structure predicate, a Passed predicate is added to the PDDL domain and is set to true when the action corresponding to the transition that needs to be covered is traversed.
In the PDDL problem, the Passed predicate is added to the goals. The PDDLWriter is used to write the PDDL domains and problems. The syntax of the PDDL domain and problem is described in Section .

The testcasegenerator.plannerinit.datastructures Package

This package (Figure 5.6) provides classes for the in-memory representation of PDDL domains and problems, as well as their mapping to the test structures. The classes included in this package are shown in Figure 5.7.

Figure 5.6: The testcasegenerator.plannerinit.datastructures package

Figure 5.7: The classes of the testcasegenerator.plannerinit.datastructures package

The TestStructurePDDLMap maps a test structure to the corresponding PDDL problem and domain files. This class also generates a unique file address for the PDDL domain and problems. A test structure is a key for the HashMap, which returns a DomainProblemPair. The DomainProblemPair is an aggregation of a pair of PDDLDomain and PDDLProblem. The PDDLDomain and the PDDLProblem classes are in-memory representations of PDDL domains and problems. The design of the data structures that implement the PDDL domain and problem classes is based on the BNF of PDDL 2.1 [5]. Both the PDDLDomain and the PDDLProblem are aggregations of PDDL constructs. The components of a PDDLDomain are Actions, Predicates, Functions, and Types. The components of a PDDLProblem are Objects, Inits, and Goals. The PDDLDomain and PDDLProblem classes provide
methods for iterating over their components as well as adding components to them. Both implement the Cloneable interface. In their clone methods, they call the clone methods of their components. The other data structures that are used for implementing the in-memory representation of PDDL domains and problems are included in the testcasegenerator.plannerinit.datastructures.PDDL package.

The testcasegenerator.plannerinit.datastructures.PDDL Package

This package includes classes which are used for the in-memory representation of the PDDL domain and PDDL problem. The classes included in this package are components of the composite PDDLDomain and PDDLProblem classes. The design of this package is based on the BNF of PDDL 2.1; only the relevant portion of PDDL 2.1 is implemented. For every variable in the BNF, a class is added to the design. For every rule in the BNF, attributes whose types are the classes corresponding to the variables on the right side of the rule are added to the class corresponding to the left side of the rule. For instance, for the rule below from the PDDL 2.1 BNF, two classes Effect and CEffect are added to the class design, and the Effect class has a linked list of CEffects in its andList attribute.

<effect> ::= ( and <c-effect>* )

In cases where a variable in the BNF expands to several alternatives, an enum is defined whose values correspond to the rules that are used to expand the variable. A Type property, which holds a value of the defined enum, is added to the class corresponding to the variable. The Type property specifies which attributes of the class have valid values. When an object of a class that has a Type attribute is created, the Type of the object is set based on which rule is used to expand the variable, and the values of the attributes corresponding to that rule are set. When retrieving the data of an object of a class that has a Type attribute, the value of the Type attribute is checked to determine which attributes have valid values. The classes of this package provide methods for iterating over their components and setting some of their attributes. They also implement the Cloneable interface. In their clone methods they call the clone methods of their components.

The testcasegenerator.plannerrunner Package

This package includes the runPlanner class, which is responsible for running the AI planner. The AI planner used in this implementation is Metric FF [43]. Figure 5.8 shows the runPlanner class diagram. The run method executes the planner. The planner takes two arguments: the name of the PDDL domain file and the name of the PDDL problem file. Then, the run method parses the output and builds an object of the Test class. The format of the command to run the planner is as follows:

ff -o domain -f problem

Figure 5.8: The testcasegenerator.plannerrunner package

A portion of the generated plan for the [covertransition][opentoclosed] test objective is listed below. The steps of the plan start with a step number, starting from #0. After each step, the names of the predicates that are true are listed. In the listing below, the OPEN predicate corresponds to the open state variable of the class. The PASSED predicate is set to true when the OPENTOCLOSED PDDL action, which corresponds to the covertransition test objective parameter, is traversed. The ACTIVE predicate indicates the active state of the UML state machine at each step. The predicates that are added to impose the test objective are ignored when the plan is translated to an object of the Test class.

Output:
ff: parsing domain file
domain STATEMACHINE defined ... done.
ff: parsing problem file
problem P1 defined ... done.
...
ff: found legal plan as follows
#0: STARTTOCLOSED CLOSEDSTATEOBJECT STARTSTATEOBJECT
    (ACTIVE CLOSEDSTATEOBJECT)
#1: CLOSEDTOOPEN CLOSEDSTATEOBJECT OPENSTATEOBJECT
    (OPEN) (PASSED) (ACTIVE OPENSTATEOBJECT)
#2: OPENTOCLOSED OPENSTATEOBJECT CLOSEDSTATEOBJECT
    (PASSED) (ACTIVE CLOSEDSTATEOBJECT)
#3: CLOSEDTOFINAL FINALSTATE_1OBJECT CLOSEDSTATEOBJECT
    (PASSED) (ACTIVE FINALSTATE_1OBJECT)
time spent: 0.00 seconds instantiating 4 easy, 0 hard action templates

The testcasegenerator.testwriter Package

This package includes classes which are responsible for writing a test case to the OWL file and the JUnit file (Figure 5.9). A Test object is an in-memory representation of a test case and contains a linked list of objects of the Step class. A TestBuilder class is responsible for managing OWLTestSuiteWriter and JUnitWriter to write test suites. The OWLTestSuiteWriter class provides methods for adding a test to an OWL test suite ontology. The JUnitWriter class provides methods for writing a JUnit test case. The ImplementationKnowledge class provides methods for retrieving implementation knowledge from the implementation knowledge ontology. Figure 5.10 shows the classes of this package.

Figure 5.9: The testcasegenerator.testwriter package

Figure 5.10: The classes of the testcasegenerator.testwriter package
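As a bridge between the planner output shown above and the Test objects written by this package, the following sketch parses plan listings of that shape into steps. It is an illustrative simplification, not the thesis code: the PlanStep stand-in class and the parsing rules (step lines start with "#", predicate lines are parenthesized, and the objective-only PASSED predicate is dropped) are assumptions based on the listing above.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of translating Metric FF plan output into test steps:
// each "#n: ACTION ARGS" line becomes a step named after the action, and the
// parenthesized predicates that follow become the step's outcome. PASSED only
// encodes the test objective and is ignored, as described in the text.
public class PlanOutputSketch {

    static class PlanStep {
        final String transitionName;
        final List<String> outcome = new ArrayList<>();
        PlanStep(String transitionName) { this.transitionName = transitionName; }
    }

    static List<PlanStep> parsePlan(String output) {
        List<PlanStep> steps = new ArrayList<>();
        for (String line : output.split("\n")) {
            line = line.trim();
            if (line.startsWith("#")) {
                // e.g. "#1: CLOSEDTOOPEN CLOSEDSTATEOBJECT OPENSTATEOBJECT"
                String[] parts = line.substring(line.indexOf(':') + 1).trim().split("\\s+");
                steps.add(new PlanStep(parts[0]));
            } else if (line.startsWith("(") && !steps.isEmpty()) {
                // e.g. "(OPEN) (PASSED) (ACTIVE OPENSTATEOBJECT)"
                for (String pred : line.split("\\)")) {
                    pred = pred.replace("(", "").trim();
                    if (!pred.isEmpty() && !pred.equals("PASSED")) {
                        steps.get(steps.size() - 1).outcome.add(pred);
                    }
                }
            }
        }
        return steps;
    }

    public static void main(String[] args) {
        String output = "#0: STARTTOCLOSED CLOSEDSTATEOBJECT STARTSTATEOBJECT\n"
                + "(ACTIVE CLOSEDSTATEOBJECT)\n"
                + "#1: CLOSEDTOOPEN CLOSEDSTATEOBJECT OPENSTATEOBJECT\n"
                + "(OPEN) (PASSED) (ACTIVE OPENSTATEOBJECT)\n";
        List<PlanStep> steps = parsePlan(output);
        System.out.println(steps.get(1).transitionName); // CLOSEDTOOPEN
        System.out.println(steps.get(1).outcome);        // [OPEN, ACTIVE OPENSTATEOBJECT]
    }
}
```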

A Test object, which implements a test case, contains a linked list of Step objects. It provides methods for creating the Step objects and iterating over its steps. A Step object, which implements a step of a test case, contains the values of the state variables as the outcome of the step and the name of the transition of the state machine that is passed. It also provides methods for adding outcomes to the step and retrieving the outcomes. The TestBuilder class is used to build JUnit and OWL test suites from the Test objects. The buildTest method parses a given Test and calls the methods of the OWLTestSuiteWriter and JUnitWriter objects to build the test suite in OWL format and JUnit format, respectively. The OWLTestSuiteWriter uses the Jena API for adding test cases to the test suite. It provides methods for loading an OWL test suite, creating a test case, adding the steps to the test case, and setting the properties of the test steps. The JUnitWriter provides methods for writing a JUnit file, including methods to create the file header and methods to create a test case and the steps of the test case. Some of the OWLTestSuiteWriter methods for creating the OWL test suite and writing it to file are shown in Appendix C.3. An object of the ImplementationKnowledge class is used by the TestBuilder to retrieve the implementation knowledge. It has methods for retrieving the class name, the package name, the implementation names of the state variables and methods, and the getter methods of the state variables.

5.3 Summary

The system implementation uses the Jena API for manipulating OWL ontologies, OO jDREW for reasoning, and an AI planner named Metric FF for generation of test cases. There are eight packages that work together to realize the system behavior: The teststructuregenerator.generator uses the teststructuregenerator.common to generate test objectives. The teststructuregenerator.assessment reads the test suite ontology and uses the teststructuregenerator.common for redundancy checking. The testcasegenerator.plannerinit initializes the input of the planner and controls test case generation. The testcasegenerator.plannerinit.datastructures and testcasegenerator.plannerinit.datastructures.PDDL packages provide data structures for the in-memory representation of PDDL problems and domains. The testcasegenerator.plannerrunner provides functionality for running the AI planner. The testcasegenerator.testwriter provides functionality for writing the test suite ontology and executable test suites.
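The Test and Step structures described in the testcasegenerator.testwriter section can be sketched as follows. This is a simplified stand-in, not the thesis classes: a Test holds a comment and a list of Steps, and each Step records the traversed transition plus the boolean state-variable outcomes that the generated JUnit test will assert.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Simplified sketch of the in-memory test representation: a Test is a list of
// Steps; each Step names the traversed transition and maps state variables to
// their expected boolean values after the step. Method names are illustrative.
public class TestStructuresSketch {

    static class Step {
        final String transitionName;
        final Map<String, Boolean> outcome = new LinkedHashMap<>();
        Step(String transitionName) { this.transitionName = transitionName; }
        void addOutcome(String var, boolean value) { outcome.put(var, value); }
    }

    static class Test {
        final String comment;
        final List<Step> steps = new ArrayList<>();
        Test(String comment) { this.comment = comment; }
        void addStep(String transitionName) { steps.add(new Step(transitionName)); }
        Step lastStep() { return steps.get(steps.size() - 1); }
    }

    public static void main(String[] args) {
        Test test = new Test("[covertransition][opentoclosed]");
        test.addStep("closedToOpen");
        test.lastStep().addOutcome("open", true);
        test.addStep("openToClosed");
        test.lastStep().addOutcome("open", false);
        // Each step would become a method call plus assertions in the JUnit file.
        for (Step s : test.steps) {
            System.out.println(s.transitionName + " -> " + s.outcome);
        }
    }
}
```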

Chapter 6

System Demonstration and Evaluation

In this chapter, the performance of the system is demonstrated and its potential for extensibility is explored (extensibility includes support for various coverage criteria and expert knowledge). The performance of the system in generating test cases for a traffic light controller class, and several system limitations, are delineated in Section 6.1. Section 6.2 examines the extensibility of the system by discussing examples of extension of the test oracle with expert knowledge and definition of custom coverage criteria, definition of rules for several coverage criteria defined in the literature, and definition of coverage criteria based on an error taxonomy. Section 6.3 summarizes this chapter.

6.1 Case Study

System performance is demonstrated for test case generation for a traffic light controller class.

6.1.1 Case Study: Traffic Light Class

Figure 6.1 depicts the state machine of a crossroad traffic light controller. The traffic light stays green for at least the long time interval in one direction and turns yellow when a car is sensed in the other direction. Then it remains yellow for the short time interval before it becomes red. If a pedestrian crosses a road when its light is in a green state, the light goes to the blink state for the blink time interval. There is a correspondence between the state machine elements and the class under test: the state variables correspond to the member variables; the events correspond to the public methods; the actions simulate how the state variables are changed by the methods. The traffic light class does not have a timer and delegates the counting responsibility to another class, which produces call events. Figure 6.2 depicts part of the traffic light state machine ontology, which is detailed in Appendix D.1. Part of the test suite and part of the implementation knowledge ontologies are depicted in Figures 6.3 and 6.4, respectively. As an example, coverage criteria in POSL for all-transition coverage and all-transition-pair coverage, as well as the query which is asked of OO jDREW to generate test objectives, are shown below.

- All Transition Coverage:
coverage([covertransition],[?tr]) :- transition(?tr).

- All Transition Pair Coverage:
coverage([immediate],[?a,?b]) :- transition(?a), transition(?b), notEqual(?a,?b), from(?a,?state), to(?b,?state).

- Query:
coverage(?predicates,?args).
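To make the joins in the all-transition-pair rule concrete, the following sketch enumerates in plain Java the [?a, ?b] bindings that the rule derives over a toy transition relation: distinct transitions ?a and ?b that share a state through from(?a, ?state) and to(?b, ?state). The Transition class and the sample facts are illustrative only; in the system this derivation is performed by the OO jDREW reasoner, not by Java code.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: mirrors the joins of the all-transition-pair POSL rule
// over toy transition facts, producing one coverage objective per binding.
public class TransitionPairSketch {

    static class Transition {
        final String name, from, to;
        Transition(String name, String from, String to) {
            this.name = name; this.from = from; this.to = to;
        }
    }

    static List<String> transitionPairs(List<Transition> ts) {
        List<String> pairs = new ArrayList<>();
        for (Transition a : ts) {
            for (Transition b : ts) {
                // notEqual(?a,?b), from(?a,?state), to(?b,?state)
                if (!a.name.equals(b.name) && a.from.equals(b.to)) {
                    pairs.add("coverage([immediate],[" + a.name + "," + b.name + "])");
                }
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        List<Transition> ts = List.of(
                new Transition("closedToOpen", "closed", "open"),
                new Transition("openToClosed", "open", "closed"));
        // Each transition leaves the state the other one enters, so both
        // orderings of the pair are derived.
        transitionPairs(ts).forEach(System.out::println);
    }
}
```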

Figure 6.1: Traffic light state machine
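To illustrate the correspondence described above, the following is a hypothetical sketch of part of the class under test, reconstructed from a few transitions of the state machine in Figure 6.1 (for instance, SenseRoad2() with guard lti=true and actions Road1Yellow=true; Road1Green=false;). It is not the thesis's actual traffic light class, and it flattens the state machine's states into guards on the member variables.

```java
// Hypothetical sketch of the traffic light class under test: state variables
// become boolean member variables, events become public methods, and each
// method body mirrors the guard and actions of the corresponding transition.
public class TrafficLight {

    // State variables (member variables of the class under test).
    boolean road1Green = true, road1Yellow = false, road1Blink = false;
    boolean lti = false; // long time interval elapsed

    // LongTimeInterval() [ ] / lti=true;
    public void longTimeInterval() {
        lti = true;
    }

    // SenseRoad2() [ lti=true ] / Road1Yellow=true; Road1Green=false;
    public void senseRoad2() {
        if (lti) {
            road1Yellow = true;
            road1Green = false;
        }
    }

    // Road1Pedestrian() [ ] / Road1Blink=true;
    public void road1Pedestrian() {
        road1Blink = true;
    }

    public static void main(String[] args) {
        TrafficLight tl = new TrafficLight();
        tl.senseRoad2();            // guard lti=true fails: nothing changes
        tl.longTimeInterval();      // the timer collaborator produces this call event
        tl.senseRoad2();            // now road 1 turns yellow
        System.out.println(tl.road1Yellow + " " + tl.road1Green); // true false
    }
}
```

A generated JUnit test would drive such a class through one plan step per method call and assert the boolean state variables after each step.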

Figure 6.2: Traffic light state machine ontology

Figure 6.3: Traffic light test suite ontology


Automated Checking of Integrity Constraints for a Model- and Pattern-Based Requirements Engineering Method (Technical Report) Automated Checking of Integrity Constraints for a Model- and Pattern-Based Requirements Engineering Method (Technical Report) Isabelle Côté 1, Denis Hatebur 1,2, Maritta Heisel 1 1 University Duisburg-Essen,

More information

Interactive ontology debugging: two query strategies for efficient fault localization

Interactive ontology debugging: two query strategies for efficient fault localization Interactive ontology debugging: two query strategies for efficient fault localization Kostyantyn Shchekotykhin a,, Gerhard Friedrich a, Philipp Fleiss a,1, Patrick Rodler a,1 a Alpen-Adria Universität,

More information

Chapter 2 Background. 2.1 A Basic Description Logic

Chapter 2 Background. 2.1 A Basic Description Logic Chapter 2 Background Abstract Description Logics is a family of knowledge representation formalisms used to represent knowledge of a domain, usually called world. For that, it first defines the relevant

More information

FOUNDATIONS OF SEMANTIC WEB TECHNOLOGIES

FOUNDATIONS OF SEMANTIC WEB TECHNOLOGIES FOUNDATIONS OF SEMANTIC WEB TECHNOLOGIES OWL & Description Logics Markus Krötzsch Dresden, 16 May 2014 Content Overview & XML Introduction into RDF RDFS Syntax & Intuition Tutorial 1 RDFS Semantics RDFS

More information

OWL Semantics. COMP60421 Sean Bechhofer University of Manchester

OWL Semantics. COMP60421 Sean Bechhofer University of Manchester OWL Semantics COMP60421 Sean Bechhofer University of Manchester sean.bechhofer@manchester.ac.uk 1 Technologies for the Semantic Web Metadata Resources are marked-up with descriptions of their content.

More information

MODEL CHECKING. Arie Gurfinkel

MODEL CHECKING. Arie Gurfinkel 1 MODEL CHECKING Arie Gurfinkel 2 Overview Kripke structures as models of computation CTL, LTL and property patterns CTL model-checking and counterexample generation State of the Art Model-Checkers 3 SW/HW

More information

Description Logics. Glossary. Definition

Description Logics. Glossary. Definition Title: Description Logics Authors: Adila Krisnadhi, Pascal Hitzler Affil./Addr.: Wright State University, Kno.e.sis Center 377 Joshi Research Center, 3640 Colonel Glenn Highway, Dayton OH 45435, USA Phone:

More information

Adaptive ALE-TBox for Extending Terminological Knowledge

Adaptive ALE-TBox for Extending Terminological Knowledge Adaptive ALE-TBox for Extending Terminological Knowledge Ekaterina Ovchinnikova 1 and Kai-Uwe Kühnberger 2 1 University of Tübingen, Seminar für Sprachwissenschaft e.ovchinnikova@gmail.com 2 University

More information

ALC Concept Learning with Refinement Operators

ALC Concept Learning with Refinement Operators ALC Concept Learning with Refinement Operators Jens Lehmann Pascal Hitzler June 17, 2007 Outline 1 Introduction to Description Logics and OWL 2 The Learning Problem 3 Refinement Operators and Their Properties

More information

A New Approach to Knowledge Base Revision in DL-Lite

A New Approach to Knowledge Base Revision in DL-Lite A New Approach to Knowledge Base Revision in DL-Lite Zhe Wang and Kewen Wang and Rodney Topor School of ICT, Griffith University Nathan, QLD 4111, Australia Abstract Revising knowledge bases (KBs) in description

More information

Binary Decision Diagrams and Symbolic Model Checking

Binary Decision Diagrams and Symbolic Model Checking Binary Decision Diagrams and Symbolic Model Checking Randy Bryant Ed Clarke Ken McMillan Allen Emerson CMU CMU Cadence U Texas http://www.cs.cmu.edu/~bryant Binary Decision Diagrams Restricted Form of

More information

Preliminaries. Introduction to EF-games. Inexpressivity results for first-order logic. Normal forms for first-order logic

Preliminaries. Introduction to EF-games. Inexpressivity results for first-order logic. Normal forms for first-order logic Introduction to EF-games Inexpressivity results for first-order logic Normal forms for first-order logic Algorithms and complexity for specific classes of structures General complexity bounds Preliminaries

More information

Try to find a good excuse!

Try to find a good excuse! Try to find a good excuse! BRA-2015 (Workshop on Belief Revision and Argumentation) Bernhard Nebel & Moritz Göbelbecker Department of Computer Science Foundations of Artificial Intelligence Finding excuses

More information

Temporal Logic of Actions

Temporal Logic of Actions Advanced Topics in Distributed Computing Dominik Grewe Saarland University March 20, 2008 Outline Basic Concepts Transition Systems Temporal Operators Fairness Introduction Definitions Example TLC - A

More information

Using Patterns and Composite Propositions to Automate the Generation of LTL Specifications

Using Patterns and Composite Propositions to Automate the Generation of LTL Specifications Using Patterns and Composite Propositions to Automate the Generation of LTL Specifications Salamah Salamah, Ann Q. Gates, Vladik Kreinovich, and Steve Roach Dept. of Computer Science, University of Texas

More information

Fuzzy Propositional Logic for the Knowledge Representation

Fuzzy Propositional Logic for the Knowledge Representation Fuzzy Propositional Logic for the Knowledge Representation Alexander Savinov Institute of Mathematics Academy of Sciences Academiei 5 277028 Kishinev Moldova (CIS) Phone: (373+2) 73-81-30 EMAIL: 23LSII@MATH.MOLDOVA.SU

More information

Thesis Title Second Line if Necessary

Thesis Title Second Line if Necessary Thesis Title Second Line if Necessary by Author Name A thesis submitted to the School of Computing in conformity with the requirements for the degree of Master of Science Queen s University Kingston, Ontario,

More information

Real-Time Software Transactional Memory: Contention Managers, Time Bounds, and Implementations

Real-Time Software Transactional Memory: Contention Managers, Time Bounds, and Implementations Real-Time Software Transactional Memory: Contention Managers, Time Bounds, and Implementations Mohammed El-Shambakey Dissertation Submitted to the Faculty of the Virginia Polytechnic Institute and State

More information

8. INTRACTABILITY I. Lecture slides by Kevin Wayne Copyright 2005 Pearson-Addison Wesley. Last updated on 2/6/18 2:16 AM

8. INTRACTABILITY I. Lecture slides by Kevin Wayne Copyright 2005 Pearson-Addison Wesley. Last updated on 2/6/18 2:16 AM 8. INTRACTABILITY I poly-time reductions packing and covering problems constraint satisfaction problems sequencing problems partitioning problems graph coloring numerical problems Lecture slides by Kevin

More information

The PITA System for Logical-Probabilistic Inference

The PITA System for Logical-Probabilistic Inference The System for Logical-Probabilistic Inference Fabrizio Riguzzi 1 and Terrance Swift 2 1 EDIF University of Ferrara, Via Saragat 1, I-44122, Ferrara, Italy fabrizio.riguzzi@unife.it 2 CETRIA Universidade

More information

Erly Marsh - a Model-Based Testing tool. Johan Blom, PhD

Erly Marsh - a Model-Based Testing tool. Johan Blom, PhD Erly Marsh - a Model-Based Testing tool Johan Blom, PhD 1 Motivation Mobile Arts Develops server software for mobile telecom operators (Location server, SMSC etc.) Implementations rather big and complicated

More information

Semantics and Inference for Probabilistic Ontologies

Semantics and Inference for Probabilistic Ontologies Semantics and Inference for Probabilistic Ontologies Fabrizio Riguzzi, Elena Bellodi, Evelina Lamma, and Riccardo Zese ENDIF University of Ferrara, Italy, email: {fabrizio.riguzzi, elena.bellodi, evelina.lamma}@unife.it,

More information

An Operational Semantics for the Dataflow Algebra. A. J. Cowling

An Operational Semantics for the Dataflow Algebra. A. J. Cowling Verification and Testing Research Group, Department of Computer Science, University of Sheffield, Regent Court, 211, Portobello Street, Sheffield, S1 4DP, United Kingdom Email: A.Cowling @ dcs.shef.ac.uk

More information

Nonmonotonic Reasoning in Description Logic by Tableaux Algorithm with Blocking

Nonmonotonic Reasoning in Description Logic by Tableaux Algorithm with Blocking Nonmonotonic Reasoning in Description Logic by Tableaux Algorithm with Blocking Jaromír Malenko and Petr Štěpánek Charles University, Malostranske namesti 25, 11800 Prague, Czech Republic, Jaromir.Malenko@mff.cuni.cz,

More information

Diagnosing Automatic Whitelisting for Dynamic Remarketing Ads Using Hybrid ASP

Diagnosing Automatic Whitelisting for Dynamic Remarketing Ads Using Hybrid ASP Diagnosing Automatic Whitelisting for Dynamic Remarketing Ads Using Hybrid ASP Alex Brik 1 and Jeffrey B. Remmel 2 LPNMR 2015 September 2015 1 Google Inc 2 UC San Diego lex Brik and Jeffrey B. Remmel (LPNMR

More information

Non-Markovian Control in the Situation Calculus

Non-Markovian Control in the Situation Calculus Non-Markovian Control in the Situation Calculus An Elaboration Niklas Hoppe Seminar Foundations Of Artificial Intelligence Knowledge-Based Systems Group RWTH Aachen May 3, 2009 1 Contents 1 Introduction

More information

Model checking the basic modalities of CTL with Description Logic

Model checking the basic modalities of CTL with Description Logic Model checking the basic modalities of CTL with Description Logic Shoham Ben-David Richard Trefler Grant Weddell David R. Cheriton School of Computer Science University of Waterloo Abstract. Model checking

More information

Mathematical Foundations of Logic and Functional Programming

Mathematical Foundations of Logic and Functional Programming Mathematical Foundations of Logic and Functional Programming lecture notes The aim of the course is to grasp the mathematical definition of the meaning (or, as we say, the semantics) of programs in two

More information

Web Ontology Language (OWL)

Web Ontology Language (OWL) Web Ontology Language (OWL) Need meaning beyond an object-oriented type system RDF (with RDFS) captures the basics, approximating an object-oriented type system OWL provides some of the rest OWL standardizes

More information

Ecco: A Hybrid Diff Tool for OWL 2 ontologies

Ecco: A Hybrid Diff Tool for OWL 2 ontologies Ecco: A Hybrid Diff Tool for OWL 2 ontologies Rafael S. Gonçalves, Bijan Parsia, and Ulrike Sattler School of Computer Science, University of Manchester, Manchester, United Kingdom Abstract. The detection

More information

Axiomatic Semantics. Operational semantics. Good for. Not good for automatic reasoning about programs

Axiomatic Semantics. Operational semantics. Good for. Not good for automatic reasoning about programs Review Operational semantics relatively l simple many flavors (small vs. big) not compositional (rule for while) Good for describing language implementation reasoning about properties of the language eg.

More information

RDF and Logic: Reasoning and Extension

RDF and Logic: Reasoning and Extension RDF and Logic: Reasoning and Extension Jos de Bruijn Faculty of Computer Science, Free University of Bozen-Bolzano, Italy debruijn@inf.unibz.it Stijn Heymans Digital Enterprise Research Institute (DERI),

More information

Ordering, Indexing, and Searching Semantic Data: A Terminology Aware Index Structure

Ordering, Indexing, and Searching Semantic Data: A Terminology Aware Index Structure Ordering, Indexing, and Searching Semantic Data: A Terminology Aware Index Structure by Jeffrey Pound A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree

More information

Resolution of Concurrent Planning Problems using Classical Planning

Resolution of Concurrent Planning Problems using Classical Planning Master in Intelligent Interactive Systems Universitat Pompeu Fabra Resolution of Concurrent Planning Problems using Classical Planning Daniel Furelos Blanco Supervisor: Anders Jonsson September 2017 Master

More information

Dra. Aïda Valls Universitat Rovira i Virgili, Tarragona (Catalonia)

Dra. Aïda Valls Universitat Rovira i Virgili, Tarragona (Catalonia) http://deim.urv.cat/~itaka Dra. Aïda Valls aida.valls@urv.cat Universitat Rovira i Virgili, Tarragona (Catalonia) } Presentation of the ITAKA group } Introduction to decisions with criteria that are organized

More information

Factory method - Increasing the reusability at the cost of understandability

Factory method - Increasing the reusability at the cost of understandability Factory method - Increasing the reusability at the cost of understandability The author Linkping University Linkping, Sweden Email: liuid23@student.liu.se Abstract This paper describes how Bansiya and

More information

Geografisk information Referensmodell. Geographic information Reference model

Geografisk information Referensmodell. Geographic information Reference model SVENSK STANDARD SS-ISO 19101 Fastställd 2002-08-09 Utgåva 1 Geografisk information Referensmodell Geographic information Reference model ICS 35.240.70 Språk: engelska Tryckt i september 2002 Copyright

More information

Charter for the. Information Transfer and Services Architecture Focus Group

Charter for the. Information Transfer and Services Architecture Focus Group for the Information Transfer and Services Architecture Focus Group 1. PURPOSE 1.1. The purpose of this charter is to establish the Information Transfer and Services Architecture Focus Group (ITSAFG) as

More information

Change Management within SysML Requirements Models

Change Management within SysML Requirements Models Change Management within SysML Requirements Models David ten Hove Master's thesis University of Twente Faculty of Electrical Engineering, Mathematics and Computer Science Department of Computer Science

More information

A General Testability Theory: Classes, properties, complexity, and testing reductions

A General Testability Theory: Classes, properties, complexity, and testing reductions A General Testability Theory: Classes, properties, complexity, and testing reductions presenting joint work with Luis Llana and Pablo Rabanal Universidad Complutense de Madrid PROMETIDOS-CM WINTER SCHOOL

More information

Information System Desig

Information System Desig n IT60105 Lecture 7 Unified Modeling Language Lecture #07 Unified Modeling Language Introduction to UML Applications of UML UML Definition Learning UML Things in UML Structural Things Behavioral Things

More information

Just: a Tool for Computing Justifications w.r.t. ELH Ontologies

Just: a Tool for Computing Justifications w.r.t. ELH Ontologies Just: a Tool for Computing Justifications w.r.t. ELH Ontologies Michel Ludwig Theoretical Computer Science, TU Dresden, Germany michel@tcs.inf.tu-dresden.de Abstract. We introduce the tool Just for computing

More information

Discordance Detection in Regional Ordinance: Ontology-based Validation

Discordance Detection in Regional Ordinance: Ontology-based Validation Discordance Detection in Regional Ordinance: Ontology-based Validation Shingo HAGIWARA a and Satoshi TOJO a a School of Information and Science, Japan Advanced Institute of Science and Technology, 1 1

More information

P P P NP-Hard: L is NP-hard if for all L NP, L L. Thus, if we could solve L in polynomial. Cook's Theorem and Reductions

P P P NP-Hard: L is NP-hard if for all L NP, L L. Thus, if we could solve L in polynomial. Cook's Theorem and Reductions Summary of the previous lecture Recall that we mentioned the following topics: P: is the set of decision problems (or languages) that are solvable in polynomial time. NP: is the set of decision problems

More information

Seamless Model Driven Development and Tool Support for Embedded Software-Intensive Systems

Seamless Model Driven Development and Tool Support for Embedded Software-Intensive Systems Seamless Model Driven Development and Tool Support for Embedded Software-Intensive Systems Computer Journal Lecture - 22nd June 2009 Manfred Broy Technische Universität München Institut für Informatik

More information

Adding ternary complex roles to ALCRP(D)

Adding ternary complex roles to ALCRP(D) Adding ternary complex roles to ALCRP(D) A.Kaplunova, V. Haarslev, R.Möller University of Hamburg, Computer Science Department Vogt-Kölln-Str. 30, 22527 Hamburg, Germany Abstract The goal of this paper

More information

Dynamic Semantics. Dynamic Semantics. Operational Semantics Axiomatic Semantics Denotational Semantic. Operational Semantics

Dynamic Semantics. Dynamic Semantics. Operational Semantics Axiomatic Semantics Denotational Semantic. Operational Semantics Dynamic Semantics Operational Semantics Denotational Semantic Dynamic Semantics Operational Semantics Operational Semantics Describe meaning by executing program on machine Machine can be actual or simulated

More information

Completing Description Logic Knowledge Bases using Formal Concept Analysis

Completing Description Logic Knowledge Bases using Formal Concept Analysis Completing Description Logic Knowledge Bases using Formal Concept Analysis Franz Baader 1, Bernhard Ganter 1, Ulrike Sattler 2 and Barış Sertkaya 1 1 TU Dresden, Germany 2 The University of Manchester,

More information

Resilience Management Problem in ATM Systems as ashortest Path Problem

Resilience Management Problem in ATM Systems as ashortest Path Problem Resilience Management Problem in ATM Systems as ashortest Path Problem A proposal for definition of an ATM system resilience metric through an optimal scheduling strategy for the re allocation of the system

More information

MAT2345 Discrete Math

MAT2345 Discrete Math Fall 2013 General Syllabus Schedule (note exam dates) Homework, Worksheets, Quizzes, and possibly Programs & Reports Academic Integrity Do Your Own Work Course Web Site: www.eiu.edu/~mathcs Course Overview

More information

Artificial Intelligence. Propositional Logic. Copyright 2011 Dieter Fensel and Florian Fischer

Artificial Intelligence. Propositional Logic. Copyright 2011 Dieter Fensel and Florian Fischer Artificial Intelligence Propositional Logic Copyright 2011 Dieter Fensel and Florian Fischer 1 Where are we? # Title 1 Introduction 2 Propositional Logic 3 Predicate Logic 4 Reasoning 5 Search Methods

More information

Scalable and Accurate Verification of Data Flow Systems. Cesare Tinelli The University of Iowa

Scalable and Accurate Verification of Data Flow Systems. Cesare Tinelli The University of Iowa Scalable and Accurate Verification of Data Flow Systems Cesare Tinelli The University of Iowa Overview AFOSR Supported Research Collaborations NYU (project partner) Chalmers University (research collaborator)

More information

A Description Logic with Concrete Domains and a Role-forming Predicate Operator

A Description Logic with Concrete Domains and a Role-forming Predicate Operator A Description Logic with Concrete Domains and a Role-forming Predicate Operator Volker Haarslev University of Hamburg, Computer Science Department Vogt-Kölln-Str. 30, 22527 Hamburg, Germany http://kogs-www.informatik.uni-hamburg.de/~haarslev/

More information

TESTING is one of the most important parts of the

TESTING is one of the most important parts of the IEEE TRANSACTIONS 1 Generating Complete Controllable Test Suites for Distributed Testing Robert M. Hierons, Senior Member, IEEE Abstract A test suite is m-complete for finite state machine (FSM) M if it

More information

Probabilistic Ontologies: Logical Approach

Probabilistic Ontologies: Logical Approach Probabilistic Ontologies: Logical Approach Pavel Klinov Applied Artificial Intelligence Lab ECE Department University of Cincinnati Agenda Why do we study ontologies? Uncertainty Probabilistic ontologies

More information

GIS at UCAR. The evolution of NCAR s GIS Initiative. Olga Wilhelmi ESIG-NCAR Unidata Workshop 24 June, 2003

GIS at UCAR. The evolution of NCAR s GIS Initiative. Olga Wilhelmi ESIG-NCAR Unidata Workshop 24 June, 2003 GIS at UCAR The evolution of NCAR s GIS Initiative Olga Wilhelmi ESIG-NCAR Unidata Workshop 24 June, 2003 Why GIS? z z z z More questions about various climatological, meteorological, hydrological and

More information

Proving Inter-Program Properties

Proving Inter-Program Properties Unité Mixte de Recherche 5104 CNRS - INPG - UJF Centre Equation 2, avenue de VIGNATE F-38610 GIERES tel : +33 456 52 03 40 fax : +33 456 52 03 50 http://www-verimag.imag.fr Proving Inter-Program Properties

More information

A conceptualization is a map from the problem domain into the representation. A conceptualization specifies:

A conceptualization is a map from the problem domain into the representation. A conceptualization specifies: Knowledge Sharing A conceptualization is a map from the problem domain into the representation. A conceptualization specifies: What sorts of individuals are being modeled The vocabulary for specifying

More information

Reasoning with Higher-Order Abstract Syntax and Contexts: A Comparison

Reasoning with Higher-Order Abstract Syntax and Contexts: A Comparison 1 Reasoning with Higher-Order Abstract Syntax and Contexts: A Comparison Amy Felty University of Ottawa July 13, 2010 Joint work with Brigitte Pientka, McGill University 2 Comparing Systems We focus on

More information

Revision of DL-Lite Knowledge Bases

Revision of DL-Lite Knowledge Bases Revision of DL-Lite Knowledge Bases Zhe Wang, Kewen Wang, and Rodney Topor Griffith University, Australia Abstract. We address the revision problem for knowledge bases (KBs) in Description Logics (DLs).

More information

Outline Introduction Background Related Rl dw Works Proposed Approach Experiments and Results Conclusion

Outline Introduction Background Related Rl dw Works Proposed Approach Experiments and Results Conclusion A Semantic Approach to Detecting Maritime Anomalous Situations ti José M Parente de Oliveira Paulo Augusto Elias Emilia Colonese Carrard Computer Science Department Aeronautics Institute of Technology,

More information

Path Testing and Test Coverage. Chapter 9

Path Testing and Test Coverage. Chapter 9 Path Testing and Test Coverage Chapter 9 Structural Testing Also known as glass/white/open box testing Structural testing is based on using specific knowledge of the program source text to define test

More information

Affordances in Representing the Behaviour of Event-Based Systems

Affordances in Representing the Behaviour of Event-Based Systems Affordances in Representing the Behaviour of Event-Based Systems Fahim T. IMAM a,1, Thomas R. DEAN b a School of Computing, Queen s University, Canada b Department of Electrical and Computer Engineering,

More information

Notes. Corneliu Popeea. May 3, 2013

Notes. Corneliu Popeea. May 3, 2013 Notes Corneliu Popeea May 3, 2013 1 Propositional logic Syntax We rely on a set of atomic propositions, AP, containing atoms like p, q. A propositional logic formula φ Formula is then defined by the following

More information

A GIS Tool for Modelling and Visualizing Sustainability Indicators Across Three Regions of Ireland

A GIS Tool for Modelling and Visualizing Sustainability Indicators Across Three Regions of Ireland International Conference on Whole Life Urban Sustainability and its Assessment M. Horner, C. Hardcastle, A. Price, J. Bebbington (Eds) Glasgow, 2007 A GIS Tool for Modelling and Visualizing Sustainability

More information

6.841/18.405J: Advanced Complexity Wednesday, February 12, Lecture Lecture 3

6.841/18.405J: Advanced Complexity Wednesday, February 12, Lecture Lecture 3 6.841/18.405J: Advanced Complexity Wednesday, February 12, 2003 Lecture Lecture 3 Instructor: Madhu Sudan Scribe: Bobby Kleinberg 1 The language MinDNF At the end of the last lecture, we introduced the

More information

Luay H. Tahat Computer Science Department Gulf University for Science & Technology Hawally 32093, Kuwait

Luay H. Tahat Computer Science Department Gulf University for Science & Technology Hawally 32093, Kuwait Luay H. Tahat Computer Science Department Gulf University for Science & Technology Hawally 0, Kuwait tahaway@iit.edu Regression Test Suite Prioritization Using System Models Bogdan Korel Computer Science

More information

Path Testing and Test Coverage. Chapter 9

Path Testing and Test Coverage. Chapter 9 Path Testing and Test Coverage Chapter 9 Structural Testing Also known as glass/white/open box testing Structural testing is based on using specific knowledge of the program source text to define test

More information

CS1021. Why logic? Logic about inference or argument. Start from assumptions or axioms. Make deductions according to rules of reasoning.

CS1021. Why logic? Logic about inference or argument. Start from assumptions or axioms. Make deductions according to rules of reasoning. 3: Logic Why logic? Logic about inference or argument Start from assumptions or axioms Make deductions according to rules of reasoning Logic 3-1 Why logic? (continued) If I don t buy a lottery ticket on

More information

OBEUS. (Object-Based Environment for Urban Simulation) Shareware Version. Itzhak Benenson 1,2, Slava Birfur 1, Vlad Kharbash 1

OBEUS. (Object-Based Environment for Urban Simulation) Shareware Version. Itzhak Benenson 1,2, Slava Birfur 1, Vlad Kharbash 1 OBEUS (Object-Based Environment for Urban Simulation) Shareware Version Yaffo model is based on partition of the area into Voronoi polygons, which correspond to real-world houses; neighborhood relationship

More information

Context-Sensitive Description Logics in a Dynamic Setting

Context-Sensitive Description Logics in a Dynamic Setting Context-Sensitive Description Logics in a Dynamic Setting Satyadharma Tirtarasa 25.04.2018 RoSI - TU Dresden Role-based Software Infrastructures for continuous-context-sensitive Systems Overview Context-Sensitive

More information

Mappings For Cognitive Semantic Interoperability

Mappings For Cognitive Semantic Interoperability Mappings For Cognitive Semantic Interoperability Martin Raubal Institute for Geoinformatics University of Münster, Germany raubal@uni-muenster.de SUMMARY Semantic interoperability for geographic information

More information