An Advanced Rule Engine for Computer Generated Forces


Qing Sui, Ye-Chuan Yeo, Khee-Yin How
DSO National Laboratories, 20 Science Park Drive, Singapore 118230

Darren Wee-Sze Ong
Defence Science and Technology Agency, 71 Science Park Drive, Singapore 118253

sqing@dso.org.sg, yyechuan@dso.org.sg, hkheeyin@dso.org.sg, oweesze@dsta.gov.sg

Keywords: Computer generated forces, advanced rule engine, behavioral modeling, inexact reasoning

ABSTRACT: Many current computer generated forces (CGF) systems use a rule-based approach to model behaviours and make decisions. The behaviour of the CGF entities and the decisions made in the simulation environment depend on the situational awareness of the entities. In real situations, the decision-maker may not have complete information, or there may be uncertainty in the information relating to the current situation. To cater for such requirements and enhance the realism and robustness of the decision model, we have developed an advanced rule engine (ARE) that facilitates the modelling of decision making in the presence of incomplete situation awareness and uncertain information. This paper describes the ARE design, focusing on its inference mechanism, which combines inexact reasoning capability with conventional rule-based technology. The use of inexact reasoning in the ARE captures the approximate, qualitative aspects of the human reasoning and decision-making process.

1. Introduction

Computer generated forces (CGF) technology is an important component in many constructive and virtual simulation systems (Clark et al., 2000). One key requirement of CGF is autonomy: given a mission goal, the CGF must be able to complete the mission autonomously, without human intervention. This means that the CGF must be able to decide what actions to take in order to complete the mission.
For instance, in the air domain, a CGF aircraft that has been assigned an air intercept mission must be able to decide what manoeuvring action(s) to take based on its sensor input so as to achieve an advantageous position for the intercept. Such decision-making behaviour constitutes the behavioural model for the CGF, and the behaviour model is the first step towards developing an autonomous CGF.

Many current CGF systems use a rule-based approach to model behaviours and the decision-making process. The behaviour of the CGF entities and the decisions made in the simulation depend on the situational awareness of the entities. In real situations, the decision-maker may not have complete information, or there may be uncertainty in the information relating to the current situation. Compared to existing traditional rule engines, an advanced rule engine for CGF must take into account the following requirements:

1) Incomplete situation awareness: We cannot assume that sensors will always provide complete information about a target at all times. Humans continue to make decisions in the presence of incomplete situation awareness, and the advanced rule engine needs a mechanism to do likewise.

2) Uncertain information: In many situations, there is also an element of uncertainty in the sensor data, which translates into uncertain information in the situation awareness. For instance, at time t, the identification of a target may be known only with a certain probability. The advanced rule engine needs to be able to handle uncertain information when deciding which rule should fire.

Uncertainty and incompleteness result from a lack of adequate information to make a decision. This presents a significant problem because it hinders us from making the best decision and may even result in a wrong decision. However, it reflects real-world situations, where decisions are normally made with inadequate information or analysis.
As far as we know from the open literature, most conventional rule-based systems, such as CLIPS (Giarratano, 2002) and Soar (Laird & Congdon, 2006), currently do not handle both uncertainty and incompleteness. FuzzyCLIPS (Orchard, 2004), an extension of CLIPS, can handle the uncertainty in data to a certain extent by using the

certainty factors approach, which provides a degree of certainty value for the fired rule. In our CGF domain, we assessed that both uncertainty and incompleteness should be considered in the rule inference and selection process, i.e. both would affect the decision on which rule is to be fired. In order to cater to incomplete situation awareness and uncertain information, and to enhance the realism and robustness of decision modelling in the CGF domain, we have developed an Advanced Rule Engine (ARE) to improve the modelling of decision making. The ARE combines inexact reasoning capability (i.e., reasoning under incomplete situation awareness and uncertain information) with conventional rule-based technology. The use of inexact reasoning in the ARE captures the approximate, qualitative aspects of the human reasoning and decision-making process. Its inference mechanism considers incomplete situation awareness and uncertain information at the same time. The ARE can be used to drive the decision/behavioural model of a CGF system, and attempts to mimic human reasoning capabilities in making decisions. The ARE also allows the decision rules to be constructed and modified by the user without having to re-compile the program.

The remainder of this paper is organized as follows: Section 2 introduces the system, including a brief description of CGF simulation systems and the ARE architecture. Section 3 describes the ARE inference mechanism. Section 4 gives a brief outline of some of its applications. Section 5 closes with the conclusions.

2. System Description

In this section, we briefly present the various components of a CGF simulation system, and the ARE architecture.
2.1 CGF Simulation System

Figure 1 shows a typical CGF simulation system, which consists of the following components:

1) Simulation Platform
2) C4I Model
3) CGF Action Model
4) Behavioural Model

Figure 1: CGF Simulation System

The ARE can be used to drive the behaviour model of the CGF entities in the simulation system. The CGF behaviour model needs to be interfaced with a C4I (i.e. command, control, communications, computers and intelligence) model that processes the ground truth data from the simulation system and constructs a realistic situation awareness for the CGF entity. The behavioural model is also integrated with the CGF action model, which simulates the specific action that has been decided upon. The C4I model employs sensor modelling and data fusion technology to build the perception model, which enables the construction of a realistic situation picture of the battlefield. This picture includes injecting uncertain and incomplete information and presenting it to the behavioural model. With the C4I model in place, it is therefore important that the ARE is able to handle incomplete and uncertain information. The Simulation Engine and CGF Framework provide the main CGF simulation platform.

In this paper, we will not be covering the C4I model, simulation engine and CGF framework. In the following sections, we focus on the behavioural modelling aspect, and discuss the problem of designing and developing an ARE-based model that mimics the behaviour of a single human or a collective team of humans in the decision-making process.

2.2 ARE Architecture

The main functional blocks in the ARE are illustrated in Figure 2. The functions of the main modules are explained briefly below.

Figure 2: The ARE Architecture

1) Rule Base: This is the knowledge base of the ARE. Knowledge is coded in the form of If-Then rules. The ARE rule format is discussed in detail in section 3.1.

2) Attributes Module: This is the descriptor for the condition and action attributes that are used in each rule. The characteristic of each attribute determines how matching is done for that attribute. For example, if a condition attribute is defined to be fuzzy, then fuzzy matching will be used.

3) Working Memory: The working memory is a database of facts (with their associated uncertainties, if applicable) that describe the current state of the environment or the problem to be solved. These facts are matched against the rules in the rule base, and those rules whose antecedents match are eligible for firing.

4) Rule Matching Mechanism: The rule matching module employs the following schemes to match the facts in the working memory with the condition part of each rule:

- Exact matching scheme
- Fuzzy matching scheme
- Truth value computation scheme

The choice of scheme depends on the characteristic of the condition attribute, the relative importance of that condition in the rule, and whether there is uncertainty involved. A partial match degree is computed for each <fact, condition> pair, and all the partial match degrees are then combined using the truth value computation scheme to yield the truth value. The truth value partly determines the chance of the rule being fired. This module uses a modified Rete algorithm (Forgy, 1982) for efficient matching. More details of the matching mechanism can be found in section 3.3.

5) Rule Selection Mechanism: Given a set of matching rules with their corresponding truth values, this module is responsible for selecting the most suitable one for firing. The filtering process involves sequential computations of confidence thresholding, bidding and conflict resolution.
The consequent of the final selected rule can be either an executable action or a high-level goal. In the latter case, it is passed to the sub-goal module, which modifies the contents of the working memory accordingly and triggers another round of inference. Section 3.4 describes the rule selection mechanism in greater detail.

In this architecture, the process flow of the ARE is as follows:

1) The rule engine gets a message from the environment (i.e. environmental states or a specific type of event) through its detectors (i.e. an input module).

2) The message is put into the working memory.

3) The rule engine looks through the rule base to find all the matching rules by using the various rule matching mechanisms.

4) Based on the matching results, the rules' truth values are computed by using the truth value computation scheme.

5) After a set of matching rules has been found, a rule selection mechanism is used to choose the winning rule.

6) If the action of the winning rule is a high-level goal, then it is asserted into the working memory, triggering another round of inference. Otherwise, it is sent to the environment through the effectors (i.e. an output module).

3. ARE Inference

As conventional rule engines usually adopt exact matching mechanisms (i.e. all the conditions must match before the rule can be fired), they are unable to handle incomplete and uncertain data. Thus, default rules need to be provided for conventional rule engines and fired in those cases where no rules are matched; otherwise, the rule engine will not be able to make any decision. Firing a default rule may not be a good solution, as it may not be the most appropriate or logical choice in some cases. In this section, we discuss the ARE inference mechanism, which uses inexact reasoning technology to mimic the approximate, qualitative aspects of the human reasoning and decision-making process under uncertain and incomplete information.

3.1 Rule Representation

The representation of the rule is an important issue, which directly influences the accuracy of knowledge acquisition and knowledge representation. In the ARE, we assume that the premise of a rule is always a conjunction of one or more conditions, and the consequent of a rule is one or a group of actions to be executed by the CGF entities. Consider a rule represented as follows:

Rule: If C1 and C2 and C3 Then A

From the point of view of symbolic logic, A is true if and only if C1, C2 and C3 are all true at the same time. If any of C1, C2 or C3 is not true, A will not be true. We call such conditions restrictive conditions. For many conventional rule engines, the antecedents of a rule are restrictive in nature.
However, in many practical applications like the CGF domain, there are also conditions that are pieces of evidence supporting the consequent A. We call such conditions supporting conditions. Let's assume that C1 is a restrictive condition, and C2 and C3 are supporting conditions. Given that C1 is true, but one of the supporting conditions (e.g., C2) is false or is unknown due to incomplete data, the consequent A can still be true with a lesser degree of certainty. This degree is computed from the remaining supporting conditions (i.e. C3). An example of these conditions is given below, where we assume that Identity is a restrictive condition, and Range and Aspect_angle are supporting conditions.

RULE (strength = 0.8)
IF (Identity = Hostile) (R)
AND (Range < 1000) (S)
AND (Aspect_angle = [-10, 10]) (S)
THEN (Action is Launch_missile)

The ARE allows the user to define the properties of the conditions (i.e., supporting or restrictive) according to the meaning of the rules. The concept of restrictive and supporting conditions is used in the truth value computation mechanism. Each rule in the ARE also has a priority (called the strength) associated with it. This priority, a value between 0 and 1, is used in the rule selection mechanism.

3.2 Fuzziness and Uncertainty

Before discussing the rule matching and truth-value computation scheme, we look at the difference between fuzziness and uncertainty. Fuzziness and uncertainty are two distinct inexact concepts employed in some systems (Orchard, 2004), and they can occur simultaneously. Fuzziness occurs when the boundary of a piece of information is not clear-cut. For example, concepts such as near, small or high are fuzzy. There is no single quantitative value that defines the term near; in fact, the concept near has no clean boundary. The ARE is able to accommodate fuzzy knowledge through fuzzy reasoning over the antecedents.
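As an illustrative sketch only (not the actual ARE implementation; all class, field and function names here are our own invention), the rule format above, with its restrictive (R) and supporting (S) condition tags and rule strength, could be represented as:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Condition:
    attribute: str                    # e.g. "Identity", "Range"
    match: Callable[[object], float]  # returns a match degree in [0, 1]
    restrictive: bool                 # True = (R), False = supporting (S)

@dataclass
class Rule:
    conditions: List[Condition]
    action: str                       # e.g. "Launch_missile"
    strength: float = 1.0             # priority in [0, 1]
    cf: float = 1.0                   # certainty factor of the rule

# The Launch_missile example rule: Identity is restrictive, while the
# Range and Aspect_angle conditions are supporting evidence.
launch = Rule(
    conditions=[
        Condition("Identity", lambda v: 1.0 if v == "Hostile" else 0.0, True),
        Condition("Range", lambda v: 1.0 if v < 1000 else 0.0, False),
        Condition("Aspect_angle", lambda v: 1.0 if -10 <= v <= 10 else 0.0, False),
    ],
    action="Launch_missile",
    strength=0.8,
)
```

Here a crisp condition returns a match degree of 0 or 1; a fuzzy condition would instead return a membership value anywhere in [0, 1], as described in section 3.3.2.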
A fuzzy variant of the earlier rule is given below, where the Range < 1000 condition is replaced by the Range = Near condition.

RULE (strength = 0.8)
IF (Identity is Hostile) (R)
AND (Range = Near) (S)
AND (Aspect_angle = [-10, 10]) (S)
THEN (Action is Launch_missile)

On the other hand, uncertainty occurs when one is not absolutely certain about a piece of information. Different approaches to handling uncertain information have been proposed, including Bayes' theorem, certainty factors (CF), and Dempster-Shafer theory (Shafer, 1976). The ARE tackles the issue of uncertainty using a modified certainty factors approach. In the ARE, the degree of uncertainty is

represented by a crisp numerical value on a scale from 0 to 1. A fact with CF = 0 indicates that the system is entirely uncertain that it is true, which can also be interpreted as no information being available. Conversely, a fact with CF = 1 indicates that the system is entirely certain that it is true. An example of an uncertain fact is Identity = Hostile [CF 0.7], which indicates that the identity of the CGF is Hostile with a certainty of 70%. Besides facts, each rule can also have a certainty factor associated with it, as given below. This describes the degree of certainty or confidence that the user has in the correctness of the rule.

RULE (strength = 0.8, CF = 0.85)
IF (Identity is Hostile) (R)
AND (Range = Near) (S)
AND (Aspect_angle = [-10, 10]) (S)
THEN (Action is Launch_missile)

3.3 Rule Matching Mechanism

The first step of the inference is matching. In the ARE, the matching mechanism that is adopted depends on the attribute type. Given a rule {C_1, ..., C_k, ...} -> A, there will be an input vector {x_1, ..., x_k, ...} that is used for matching; the result of matching each [C_k, x_k] pair is a match degree µ_k(x_k) ∈ [0, 1]. We discuss the matching schemes in the following sections.

3.3.1 Exact Matching

For those attributes that have been specified as non-fuzzy in nature, exact matching is used. This is the primary matching mechanism in conventional rule engines. For simplicity, let us assume that we have a non-fuzzy rule as follows:

Rule: If x is A and y is B Then z is C

where x, y are (non-fuzzy) condition attributes, z is an action attribute, and A, B and C are values. The match degree of the first and second conditions between the input data and the data in the rule is 1 or 0, i.e.,

Degree of match (for A): µ_A(x) ∈ {0, 1}
Degree of match (for B): µ_B(y) ∈ {0, 1}

where 1 means that the input exactly matches the condition, while 0 means totally unmatched.

3.3.2 Fuzzy Matching

For those attributes that have been specified as fuzzy in nature, fuzzy matching is used.
For simplicity, let us assume that we have a fuzzy rule as follows:

Rule: If x is A and y is B Then z is C

where x, y are (fuzzy) condition attributes, z is an action attribute, and A, B and C are values. The degree of partial match of the first and second conditions between the input data and the data in the fuzzy rule can be expressed as µ_A(x) and µ_B(y), i.e.,

Degree of match (for A): µ_A(x) ∈ [0, 1]
Degree of match (for B): µ_B(y) ∈ [0, 1]

where µ_A(x) and µ_B(y) are the membership values for the linguistic values A and B. In the ARE, the membership functions are defined and specified in the Rule Attributes Module. The fuzzification (i.e. the mapping from an input data space to labels of fuzzy sets) is done in the fuzzy matching module.

3.3.3 Truth Value Computation Scheme

Now we consider the truth value computed by a rule {C_1, ..., C_k, ...} -> A [CF_r] for a given input {x_1 [CF_1], ..., x_k [CF_k], ...}, where x_k is a value in the domain of the feature on which C_k is a (fuzzy or non-fuzzy) predicate, CF_1, ..., CF_k are the certainty factors of the inputs, and CF_r is the certainty factor of the rule. We suppose the predicate C_k is represented by a matching degree function µ_k(x_k). If C_k is a fuzzy predicate, then µ_k(x_k) ∈ [0, 1]; otherwise µ_k(x_k) ∈ {0, 1}.

The conditions {C_k} can be classified into two subsets: the subset {C_i} of supporting conditions, denoted S_S, and the subset {C_j} of restrictive conditions, denoted S_R. For each condition C_j in S_R, the rule is true only if C_j is true (i.e. matched). That is, the degree T(C_j) to which the rule is true is proportional to µ_j(x_j). If the certainty factor CF_j of the input x_j is taken into consideration, T(C_j) is then proportional to µ_j(x_j)·CF_j. For all the restrictive conditions in S_R, the degree of the rule being true is proportional to the product ∏_{j ∈ S_R} µ_j(x_j)·CF_j.

For the set of supporting conditions S_S, the rule is true if all pieces of evidence {C_i} are true.
Thus, the degree of the rule being true is proportional to the conditional probability P(A | {C_i}). In the case where only one piece of evidence C_i is true, it stimulates the rule with a degree of P(A | C_i), and whether the evidence C_i is true or not is given by the match degree function at x_i. Thus, for any supporting condition C_i and an input x_i, it stimulates the rule with a degree of P(A | C_i)·µ_i(x_i). Considering the certainty factor CF_i of the input x_i, the condition C_i and input x_i stimulate the rule with a degree of P(A | C_i)·µ_i(x_i)·CF_i. Taken together, all the supporting conditions in S_S stimulate the rule with a degree of Σ_{i ∈ S_S} P(A | C_i)·µ_i(x_i)·CF_i. Let w_i = P(A | C_i); then, considering all the restrictive and supporting conditions, the truth value of the premise of the rule can be represented as

[Σ_{i ∈ S_S} w_i·µ_i(x_i)·CF_i] · [∏_{j ∈ S_R} µ_j(x_j)·CF_j].

Considering the rule certainty factor CF_r, the truth value of a rule {C_1, ..., C_k, ...} -> A [CF_r] can be computed as

T = CF_r · [Σ_{i ∈ S_S} w_i·µ_i(x_i)·CF_i] · [∏_{j ∈ S_R} µ_j(x_j)·CF_j]    (1)

where Σ_{i ∈ S_S} w_i = 1. The parameters w_i are determined based on the importance of the individual supporting conditions. An even distribution can be used in the case where the weights are hard to evaluate.

The truth value is the primary parameter in the ARE that determines whether a rule will be selected for firing. A rule in the ARE is eligible for firing as long as T > 0, unlike in conventional rule engines where the requirement is strictly T = 1. A rule with a higher truth value in the ARE will be selected for firing over another with a lower truth value, since the former has a greater degree of match and less uncertainty. The next section describes the rule selection mechanism in greater detail.

From the above, we can see how the rule matching mechanism of the ARE elegantly handles 1) uncertainty in the facts (i.e. 0 < CF_k < 1), 2) uncertain knowledge (i.e. 0 < CF_r < 1), and 3) fuzzy knowledge (i.e. 0 < µ_k(x_k) ≤ 1), while 4) preserving the meanings of supporting and restrictive conditions, which specify that a) a rule can be fired (i.e. T > 0) even if some of its supporting conditions are false (i.e. µ_i(x_i) = 0 for some i) or unknown (i.e. CF_i = 0 for some i), as long as the restrictive conditions match (i.e. CF_j > 0 and µ_j(x_j) > 0 for all j), and b) the more supporting conditions that match, the higher the truth value. These inexact reasoning capabilities clearly distinguish the ARE from conventional rule engines.

3.4 Rule Selection Mechanism

The output of the rule matching mechanism is a match set, i.e. a set of rules with non-zero truth values. The rule selection mechanism then selects the best rule in this match set to fire.
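Equation (1) can be sketched in a few lines; this is our own illustrative reading of the formula, not the ARE code, and the function name is an assumption. Each condition is reduced to a (match degree, certainty factor) pair:

```python
import math

def truth_value(rule_cf, supporting, restrictive, weights=None):
    """Compute T per Equation (1).

    supporting, restrictive: lists of (mu, cf) pairs for the conditions
    in S_S and S_R respectively; weights are the w_i, summing to 1.
    """
    if weights is None:
        # Even distribution when the weights are hard to evaluate.
        weights = [1.0 / len(supporting)] * len(supporting) if supporting else []
    # Weighted sum over the supporting conditions (treated as 1 if S_S is empty).
    s_term = sum(w * mu * cf for w, (mu, cf) in zip(weights, supporting)) \
        if supporting else 1.0
    # Product over the restrictive conditions.
    r_term = math.prod(mu * cf for mu, cf in restrictive)
    return rule_cf * s_term * r_term

# Launch_missile example: Identity restrictive with mu = 1 and CF = 0.7;
# Range fully matched with CF = 1; Aspect_angle unknown (CF = 0).
T = truth_value(0.85,
                supporting=[(1.0, 1.0), (1.0, 0.0)],
                restrictive=[(1.0, 0.7)])
# T = 0.85 * (0.5*1.0*1.0 + 0.5*1.0*0.0) * (1.0*0.7) = 0.2975
```

Note that T stays positive despite the unknown supporting condition, but any restrictive condition with mu = 0 or CF = 0 drives T to zero, exactly the behaviour described in points 4a) and 4b) above.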
The rule selection mechanism in the ARE mainly comprises the bidding mechanism, conflict resolution, and sub-goal handling.

3.4.1 Bidding Mechanism

The bidding mechanism computes the bid for each rule R in the match set based on its truth value T_R and its strength s_R. It follows two general principles: 1) a rule with a higher T_R should have a higher bid than one with a lower T_R, regardless of their strengths, and 2) for rules with similar T_R, the bid should be proportional to the strength s_R.

This bidding logic is implemented in two steps. First, to establish whether two rules have similar truth values while reducing the sensitivity of the bids to small differences in truth values, we bin the rules in the match set according to their truth values. By default, 3 bins are used, corresponding to 1) T_R = 1, 2) 0.5 ≤ T_R < 1 and 3) T_R < 0.5. (Note: the exclusive T_R = 1 bin differentiates a fully matched rule from a partially matched one.) Rules in the same bin are viewed as having similar truth values. Second, the bins are considered in decreasing order of truth value. If a higher-valued bin has at least one valid rule, bids are not computed for rules in the lower-valued bins, i.e. those rules will not be selected for firing. For rules in the same bin, their bids are equal to their strengths s_R.

3.4.2 Conflict Resolution

The conflict resolution scheme in the ARE depends on the bids of the rules in the match set, i.e. the winner is the rule with the maximum bid value. In the case where several rules share the same maximum bid value, one of them is selected randomly. In practical applications, a confidence threshold can be used in the scheme to control the firing of the rules.

3.4.3 Sub-goal Module

Rules in the ARE are used to propose, select and apply actions.
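The binning, bidding and random tie-breaking of sections 3.4.1 and 3.4.2 can be sketched as follows (an illustrative assumption of ours, not the ARE code; function and variable names are invented):

```python
import random

def select_rule(match_set, threshold=0.0, rng=random):
    """match_set: list of (truth_value, strength, rule_id) triples.

    Returns the winning rule_id, or None if no rule clears the threshold.
    """
    candidates = [r for r in match_set if r[0] > threshold]
    if not candidates:
        return None

    def bin_of(t):
        # Default 3 bins: T = 1, 0.5 <= T < 1, T < 0.5.
        if t == 1.0:
            return 0
        return 1 if t >= 0.5 else 2

    # Only the highest-valued non-empty bin competes.
    best_bin = min(bin_of(t) for t, _, _ in candidates)
    in_bin = [r for r in candidates if bin_of(r[0]) == best_bin]
    # Within a bin, the bid equals the rule strength.
    top_bid = max(s for _, s, _ in in_bin)
    winners = [rid for t, s, rid in in_bin if s == top_bid]
    return rng.choice(winners)  # random tie-break among equal bids
```

For example, a fully matched rule of strength 0.3 still wins over a partially matched rule of strength 0.9, because the T = 1 bin is considered first.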
However, some actions are actually goals, not low-level actions that can be carried out immediately. For example, an action such as intercept an enemy is itself a goal, which can be dynamically decomposed further into rules proposing more primitive actions. The sub-goal module in the ARE identifies these sub-goals and proposes them to the working memory and matching mechanism for further inference.

4. Applications

We have applied the ARE in several applications, two of which are 1) an air-to-air combat constructive simulation system and 2) a ground robot swarm simulation system. The first system has been used for in-house experimentation and operations research studies. The ARE drives the virtual aircraft based on behavioural rules elicited from pilots. See Figure 3 for a screenshot of the system. The second system is used to test and demonstrate the feasibility of using a swarm of robots, each with a limited sensor field-of-view and driven by simple reactive rules, to search a bounded urban environment

for targets. The whole swarm system consists of 30-50 robots, and each virtual robot is controlled by its own copy of the ARE. Figure 4 shows a screenshot of the system.

Figure 3: Air-to-air combat constructive simulator with the ARE driving the virtual aircraft.

Figure 4: Robotic swarm simulator with the ARE driving the virtual robots.

5. Conclusion

In this paper, we have described an Advanced Rule Engine (ARE), which can be used to drive the behaviour models in CGF simulation systems. We also outlined two applications of the ARE, one to control virtual aircraft in a constructive simulation, and the other to control a virtual swarm of ground robots. The ARE combines inexact reasoning capability with conventional rule-based technology, and therefore possesses human-like reasoning capabilities to make decisions in the presence of incomplete situation awareness and uncertain information. These capabilities will become more important as simulation systems take into account the impact of information modelling on warfare.

6. Acknowledgements

This project is funded by DRD, DSTA Singapore. The authors would also like to thank their colleagues Chee-Kong Cheng, Yew-Hong Toh and Ching-Ching Ong.

7. References

Clark, P., Pongracic, H., & Chandran, A. (2000). Researching the Use of Agent-Based CGF in Human-in-the-Loop Simulation. In Proceedings of the 9th Conference on Computer Generated Forces, Orlando, May 16-18, 2000, pp. 3-12.

Forgy, C. L. (1982). Rete: A fast algorithm for the many pattern/many object pattern match problem. Artificial Intelligence, 19, Sept 1982, pp. 17-37.

Giarratano, J. (2002). CLIPS 6.20 User's Guide.

Laird, J. & Congdon, C. B. (2006). Soar User's Manual, version 8.6.3.

Orchard, B. (2004). FuzzyCLIPS 6.10d User's Manual.

Shafer, G. (1976). A Mathematical Theory of Evidence. Princeton University Press.

Author Biographies

QING SUI is a Senior Member of Technical Staff in the Cooperative Systems and Machine Intelligence Lab in DSO National Laboratories, Singapore. Dr. Sui has a Ph.D. in Robotics and Automation. His research interests are in the areas of robotics, autonomous agents and machine learning.

YE-CHUAN YEO is a Senior Member of Technical Staff in the Cooperative Systems and Machine Intelligence Lab in DSO National Laboratories, Singapore. Mr. Yeo has an M.Tech. degree in Knowledge Engineering. His research interests are in the areas of soft computing, intelligent agents and machine learning.

KHEE YIN HOW is the Director of the Information Division in DSO National Laboratories, Singapore. Dr. How has a Ph.D. in Artificial Intelligence from the University of Edinburgh. His research interests are in the areas of software agents and machine learning.

DARREN WEE-SZE ONG is a Senior Technology Manager in the Directorate of R&D, Defence Science and Technology Agency, Singapore. Mr. Ong has a B.Eng. degree in Electrical Engineering and an MSc degree in Digital Media Technology. His research interests are in the areas of systems modeling and simulation, distributed virtual environments and the use of game technologies for serious applications.