Potential Cases, Methodologies, and Strategies of Synthesis of Solutions in Distributed Expert Systems
Minjie Zhang

Potential Cases, Methodologies, and Strategies of Synthesis of Solutions in Distributed Expert Systems

Minjie Zhang
School of Computer and Information Sciences, Edith Cowan University, WA 6027, Australia

Chengqi Zhang
School of Mathematical and Computing Sciences, University of New England, NSW 2351, Australia

Abstract

In this paper, firstly, potential synthesis cases in distributed expert systems (DESs) and types of DESs are identified. Based on these results, necessary conditions on synthesis strategies in different synthesis cases are recognized. Secondly, two methodologies for designing synthesis strategies in distributed expert systems are investigated: analysis methods and inductive methods. Thirdly, the two methodologies are compared in terms of performance, complexity, and requirements.

Key words: distributed expert systems, methodologies, synthesis of solutions, synthesis strategies, analysis methods, inductive methods

1 Introduction

A distributed expert system (DES) is one of the special configurations of distributed problem solving. It consists of a number of different expert systems (ESs) connected by computer networks. In a DES, each expert system (ES) can either work individually to solve some specific problems, or cooperate with the others to deal with complex problems [7, 3, 4]. If more than one ES solves the same task, each ES could obtain a solution. It is thus important to synthesize such multiple solutions to the same task (called inputs) from different ESs in order to obtain the desired final solution (called the output) to the task.

Let us look at an example. Two ESs predict an earthquake in a particular area. ES1 believes that the possibility of the potential earthquake being class 5 is x1 = 0.8, while ES2 believes that the possibility of the potential earthquake being class 5 is x2 = 0.5. Consider the following two cases based on this example.
Case (1): The two ESs obtain the uncertainties of the solution (a class 5 earthquake in a particular area) x1 = 0.8 and x2 = 0.5, respectively, based on the same geochemical results. This case demonstrates a belief conflict between ES1 and ES2, because they obtained the same solution with different uncertainties given the same evidence. The final uncertainty S(x1, x2) should be between x1 and x2 (i.e., min{x1, x2} ≤ S(x1, x2) ≤ max{x1, x2}).

Case (2): ES1 predicts a class 5 earthquake in an area with an uncertainty of y1 = 0.8 based on the evidence from a geophysical experiment, and ES2 obtains the same solution with an uncertainty of y2 = 0.5 based on the geological structure of this area. The final uncertainty for the solution of a class 5 earthquake in this case should be bigger than either of y1 and y2 (i.e., S(y1, y2) ≥ max{y1, y2}), because the two ESs obtain the same solution from different evidence, so this solution is more reliable than a solution which comes from the same evidence.

The conclusion is that if two ESs obtain the same solution with two identical uncertainties, such as x1 = y1 and x2 = y2 (using the above example), the synthesis of x1 and x2, represented by S(x1, x2), may be different from S(y1, y2) if the solutions originate from different evidence.
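To make the two cases concrete, here is a small sketch in Python. The two combining functions are illustrative assumptions, not strategies prescribed by this paper: a weighted mean for Case (1), and an EMYCIN-style combination for Case (2).

```python
# Illustration of the two cases above. Both combining functions are
# assumptions chosen for illustration, not functions the paper prescribes.

def s_conflict(x1, x2, w1=0.5, w2=0.5):
    """Weighted mean: a candidate Case (1) strategy.
    Always lies within [min(x1, x2), max(x1, x2)] when w1 + w2 = 1."""
    return w1 * x1 + w2 * x2

def s_disjoint(x1, x2):
    """EMYCIN-style combination of two positive certainty factors:
    a candidate Case (2) strategy; exceeds max(x1, x2) for positive inputs."""
    return x1 + x2 * (1 - x1)

x1, x2 = 0.8, 0.5
print(s_conflict(x1, x2))   # 0.65, between 0.5 and 0.8
print(s_disjoint(x1, x2))   # 0.9, above 0.8
```

So identical uncertainty pairs can legitimately yield different synthesized values depending on the evidence relationship.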

The above two cases indicate that a right synthesis strategy is based not only on the uncertainties of the solutions, but also on the relationship between the evidence for those solutions. If an improper synthesis strategy is chosen in a situation, a wrong solution may result. Therefore, how to classify, design, and choose synthesis strategies is one of the critical research issues in the DES field. This paper concentrates on the classification of the potential synthesis cases and on investigations of methodologies for designing synthesis strategies. In Section 2, the synthesis problem is formally described. In Section 3, the potential synthesis cases in DESs are identified, and necessary conditions for developing synthesis strategies are proposed. In Section 4, the principles of methodologies for designing synthesis strategies are discussed and two methodologies are compared. Finally, in Section 5, this paper is concluded and further work is outlined.

2 Problem description

Suppose there are n ESs in a DES to evaluate the values of an attribute of an object (e.g., in a medical DES, the identity of an organism infecting a specific patient). The solution from ES i can be represented as

(<object> <attribute> (V1 CF_i1 A_i) (V2 CF_i2 A_i) ... (Vm CF_im A_i))        (2.1)

where V_j (1 ≤ j ≤ m) represents the jth possible value, CF_ij (1 ≤ i ≤ n, 1 ≤ j ≤ m) represents the uncertainty for the jth value from ES i, A_i (1 ≤ i ≤ n) represents the authority of ES i, and m indicates that there are m possible values for this attribute of the object. For example, there are 6 possible values for the face-up of a die. The authority A_i is the confidence level for the solution from ES i. The value range of the authority is [0, 1]. The higher the authority, the more reliable the solution. It can be assigned for each ES by human experts or generated based on the historical performance of the ESs. From the synthesis point of view, all ESs are concerned with the same attribute of an object.
So we will only keep the attribute values, uncertainties, and authorities in the representation. Here is the representation of m possible values with uncertainties from n ESs:

    | CF_11  CF_12  ...  CF_1m  A_1 |
    | CF_21  CF_22  ...  CF_2m  A_2 |
    | ...    ...    ...  ...    ... |
    | CF_n1  CF_n2  ...  CF_nm  A_n |        (2.2)

The synthesis strategy is responsible for obtaining the final uncertainties (CF*_1, CF*_2, ..., CF*_m) based on Matrix 2.2, where * indicates the synthesis result from the corresponding values with subscripts 1, 2, ..., n in the same column.

3 Synthesis of solutions

3.1 Potential synthesis cases in DESs

In this subsection, we will analyze the synthesis cases in DESs based on the relationship between the evidence sets of a solution from different ESs. Informally, there are four relationships between the evidence sets of a solution from different ESs: the evidence sets are (a) identical, (b) inclusion, (c) overlap, or (d) disjoint.
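The four relationships can be recognized with elementary set operations. A minimal sketch, assuming evidence sets are represented as Python sets of proposition names:

```python
# Classifying the relationship between two evidence sets into the four
# informal categories above; the function and set encoding are our own sketch.

def evidence_relation(e_i, e_j):
    """Return 'identical', 'inclusion', 'overlap', or 'disjoint'
    for two original evidence sets (sets of proposition names)."""
    if e_i == e_j:
        return "identical"
    if e_i <= e_j or e_j <= e_i:   # one is a subset of the other
        return "inclusion"
    if e_i & e_j:                  # non-empty intersection
        return "overlap"
    return "disjoint"

print(evidence_relation({"D", "F"}, {"D", "F"}))            # identical
print(evidence_relation({"A", "D"}, {"A", "D", "C", "E"}))  # inclusion
print(evidence_relation({"A", "D"}, {"A", "H"}))            # overlap
print(evidence_relation({"A", "X"}, {"F"}))                 # disjoint
```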

Before we formally define synthesis cases, some preparatory work should be done. We use propositions to represent evidence and conclusions to represent solutions. In order to simplify the explanation, we will first work on the assumption that there are two ESs in a DES; then we will extend to any number of ESs in a DES. Let P be a set of propositions, R be a set of rules, and CF represent the uncertainty of a proposition in an ES.

Definition 1: An inference network G in an ES is defined as a directed acyclic graph in which the nodes are propositions in P and the arcs are activated rules in R (suppose the rule format is A → B in this definition). The root of such a network is a proposition in P which is not the premise of any rule in R. In contrast, a leaf is a proposition in P which is not the conclusion of any rule in R.

Definition 2: A rule chain from one node A to another node B in an inference network G is defined as (1) a rule in which A is a premise and B is the conclusion; or (2) a sequence of rules in which A is a premise of the first rule and there exists a rule chain from a node in the conclusion of that rule to node B.

Definition 3: A general rule chain is defined as a rule chain from a leaf to the root of an inference network G.

Definition 4: The original evidence set of a proposition B is represented by E(B), where E(B) is the unique set of leaf propositions which satisfy the condition that there is a rule chain connecting such a leaf to the proposition B. For example, if there is an inference network G_i in ES i, where G_i is: D → C → A → B, F → E → B, T → A → C → B, then E_i(B) = {D, F, T}.

The next four definitions are our formal definitions of the different synthesis cases. In the following definitions, only the evidence in original evidence sets is considered, because it is objective.
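Definition 4 can also be sketched computationally. The encoding of an inference network as a set of (premise, conclusion) arcs is our own assumption:

```python
# A sketch of Definition 4: compute the original evidence set E(B) from the
# arcs of an inference network. The (premise, conclusion) encoding is ours.

def original_evidence_set(arcs, b):
    """arcs: set of (premise, conclusion) pairs for activated rules.
    Returns the leaves from which a rule chain reaches proposition b."""
    premises = {p for p, _ in arcs}
    conclusions = {c for _, c in arcs}
    leaves = premises - conclusions          # never the conclusion of any rule
    # walk backwards from b along the arcs
    reachable, frontier = set(), {b}
    while frontier:
        node = frontier.pop()
        for p, c in arcs:
            if c == node and p not in reachable:
                reachable.add(p)
                frontier.add(p)
    return reachable & leaves

# The network from Definition 4: D -> C -> A -> B, F -> E -> B, T -> A -> C -> B
arcs = {("D", "C"), ("C", "A"), ("A", "B"), ("F", "E"), ("E", "B"),
        ("T", "A"), ("A", "C"), ("C", "B")}
print(original_evidence_set(arcs, "B"))   # the set {'D', 'F', 'T'} (order may vary)
```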
Definition 5: A conflict synthesis case [11] occurs when the original evidence sets of a proposition from different ESs are equivalent, but the different ESs produce the same solution with different uncertainties. That is, for a proposition B, E_i(B) = E_j(B), where E_i(B) is in ES i and E_j(B) is in ES j, and CF_i ≠ CF_j, where CF_i is the uncertainty of the proposition B from ES i and CF_j is the uncertainty of the proposition B from ES j. For example, if there are inference networks G_i in ES i and G_j in ES j, where G_i: D → C → A → B, F → H → B and G_j: D → H → B, F → J → B, then E_i(B) = {D, F} and E_j(B) = {D, F} are equivalent.

Definition 6: An inclusion synthesis case occurs when the original evidence set of a proposition from one ES is a proper subset of the original evidence set of another ES. Formally, for a proposition B, E_i(B) ⊂ E_j(B), or vice versa, where E_i(B) is in ES i and E_j(B) is in ES j. For example, for a proposition B, there are inference networks G_i in ES i and G_j in ES j, where G_i: A → C → B, D → B and G_j: A → K → B, D → G → B, C → B, E → B. In this example, E_i(B) = {A, D} and E_j(B) = {A, D, C, E}, so E_i(B) ⊂ E_j(B). Note: for ES j, C is an original evidence, while for ES i, C is a derived evidence, so C is in E_j(B) but not in E_i(B).

Definition 7: An overlap synthesis case occurs when the original evidence sets of a proposition from different ESs are not equivalent, but the intersection of the original evidence sets is not empty. Formally, for a

proposition B, E_i(B) ∩ E_j(B) ≠ ∅, E_i(B) ∩ E_j(B) ≠ E_i(B), and E_i(B) ∩ E_j(B) ≠ E_j(B), where E_i(B) is in ES i and E_j(B) is in ES j. For example, for a proposition B, there are inference networks G_i in ES i and G_j in ES j, where G_i: A → C → B, D → B and G_j: A → E → F → B, H → G → B. In this example, E_i(B) = {A, D} and E_j(B) = {A, H}, so E_i(B) ∩ E_j(B) = {A} ≠ ∅, E_i(B) ∩ E_j(B) ≠ E_i(B), and E_i(B) ∩ E_j(B) ≠ E_j(B).

Definition 8: A disjoint synthesis case occurs when the intersection of the original evidence sets of a proposition from different ESs is empty. Formally, for a proposition B, E_i(B) ∩ E_j(B) = ∅, where E_i(B) is in ES i and E_j(B) is in ES j. For instance, for a proposition B, there are inference networks G_i in ES i and G_j in ES j, where G_i: A → C → B, X → H → B and G_j: F → D → B. In this example, E_i(B) = {A, X} and E_j(B) = {F}, so E_i(B) ∩ E_j(B) = ∅.

The analysis of the above four synthesis cases is based on only two ESs. Now we extend the above definitions to n ESs.

Definition 9: A conflict synthesis case occurs among n ESs when the original evidence sets of a proposition from the n ESs are equivalent, but at least two different ESs produce the same solution with different uncertainties. Formally, for a proposition B, E1(B) = E2(B) = ... = En(B), where E1(B) is in ES1, E2(B) is in ES2, ..., and En(B) is in ESn, and ∃ i, j (1 ≤ i, j ≤ n, i ≠ j) such that CF_i ≠ CF_j, where CF_i is the uncertainty of the proposition B from ES i and CF_j is the uncertainty of the proposition B from ES j.

Definition 10: An inclusion synthesis case occurs among n ESs if there exists an original evidence set E_i(B) of a proposition from ES i which strictly includes all the other E_j(B). Formally, for a proposition B, ∀ j (1 ≤ j ≤ n, j ≠ i), E_i(B) ⊃ E_j(B), where E1(B) is in ES1, E2(B) is in ES2, ..., and En(B) is in ESn.
Definition 11: A disjoint synthesis case occurs among n ESs when the intersection of any two original evidence sets of a proposition from the n ESs is empty. Formally, for a proposition B, ∀ i, j (1 ≤ i, j ≤ n, i ≠ j), E_i(B) ∩ E_j(B) = ∅, where E1(B) is in ES1, E2(B) is in ES2, ..., and En(B) is in ESn.

If the number of ESs in a DES is more than 2, we first identify whether the synthesis case belongs to a conflict, inclusion, or disjoint case. If the synthesis case does not satisfy the conditions of these three cases, then it is an overlap synthesis case.

3.2 Classification of types of DESs

In this subsection, we will discuss the classification of types of DESs based on the relationship between the sets of knowledge and the sets of available data of the ESs.

Definition 12: An available data set of an ES is the data set which the ES can access. It is represented by DATA.

From the above definition, we know that an available data set is a superset of an original evidence set (refer to Definition 4). First, every element in an original evidence set must come from an available data set. Second, based on the knowledge base of an ES, some elements in an available data set may not be used, either because they are not in the premise part of any rule or because the rule is not activated. For example, suppose there are two rules in an ES, A1 → B and A2 → B. It is possible that the ES obtains the solution B from A1 only. It might not need to use A2 even if A2 is available. In this case, the original evidence set is E(B) = {A1} while the available data set is DATA = {A1, A2}.

Definition 13: ES i = ES j if (1) both ES i and ES j have the same set of propositions, P_i = P_j, and (2) both ES i and ES j have the same set of rules, R_i = R_j, supposing both ES i and ES j use the same inference theory. Otherwise ES i ≠ ES j.

The relationship among the original evidence set E(B), the available data set DATA, and the proposition set P is E(B) ⊆ DATA ⊆ P. The proposition set P includes the available data set DATA and the other, derived propositions. According to the relationships between the knowledge and the available data of the ESs, DESs can be classified into four types.

Definition 14: A homogeneous DES is a DES in which all of the ESs have the same knowledge and can access the same available data set. This type of DES can be defined as:
∀ i, j: ES i = ES j, 1 ≤ i, j ≤ n
∀ i, j: DATA_i = DATA_j, 1 ≤ i, j ≤ n
In this type of DES, all of the ESs can perform the same type of job in parallel. For example, in a VLSI system, "input data and the design constraints are available to all nodes" and "knowledge (or rules of ESs) could be distributed among multiple nodes which would operate in parallel" [8].

Definition 15: A partially homogeneous DES is a DES in which all of the ESs have the same knowledge but at least one ES accesses a different data set. This type of DES can be defined as:
∀ i, j: ES i = ES j, 1 ≤ i, j ≤ n
∃ i, j: DATA_i ≠ DATA_j, 1 ≤ i, j ≤ n
For example, suppose n ESs cooperate to determine the dangerous area of earthquakes for a specific area. All of the ESs are identical knowledge-based systems. Some of them use data from geochemical experiments while others use geophysical test information.

Definition 16: A partially heterogeneous DES is a DES in which at least one ES has different knowledge from the other ESs, but all access the same available data set. This type of DES can be defined as:
∃ i, j: ES i ≠ ES j, 1 ≤ i, j ≤ n
∀ i, j: DATA_i = DATA_j, 1 ≤ i, j ≤ n
For instance, consider the situation when several medical ESs in one field diagnose a patient.
Each ES uses its own domain knowledge to make the decision for the patient, based on the same available evidence.

Definition 17: A heterogeneous DES is a DES in which at least one ES has different knowledge from the other ESs, and at least one ES accesses a different available data set from the other ESs. It can be defined as:
∃ i, j: ES i ≠ ES j, 1 ≤ i, j ≤ n
∃ i, j: DATA_i ≠ DATA_j, 1 ≤ i, j ≤ n
Suppose that two ESs cooperate to predict earthquakes in the same area. One ES is a geologist and the other is a geochemist. When they cooperate, they may use different knowledge and evidence to solve the problem.

3.3 Relationships between synthesis cases and DES types

We have identified synthesis cases and classified DES types in the above two subsections. Now we can briefly summarize the relationships between synthesis cases and DES types as follows.

Theorem 1: A conflict synthesis case (refer to Definition 5 in Subsection 3.1) does not exist in a homogeneous DES, nor in a partially homogeneous DES, but may exist in a partially heterogeneous DES and a heterogeneous DES.

Proof: (1) In both a homogeneous DES and a partially homogeneous DES, if two ESs have the same original evidence set, the solution is the same (no conflict) because ES i = ES j. (2) However, in both a partially heterogeneous DES and a heterogeneous DES, the knowledge bases of different ESs can be different. In this situation, for the same original evidence set, different ESs may use different knowledge to obtain the same solution with different uncertainties.

Theorem 2: An inclusion synthesis case (refer to Definition 6 in Subsection 3.1) does not exist in a homogeneous DES, but may exist in a partially homogeneous DES, a partially heterogeneous DES, and a heterogeneous DES.

Proof: (1) In a homogeneous DES, all ESs have the same knowledge and choose the same original data sets, so there is no inclusion synthesis case. (2) In a partially homogeneous DES and a heterogeneous DES, the available data sets can be different. In these cases, for the same solution, the original evidence set from one ES can be a subset of the original evidence set of another ES. For instance, in a partially homogeneous DES, the rules A → B, C → B, D → B, E → B exist in both ES i and ES j, and suppose A, C, D, E ∈ DATA_i while C, D ∈ DATA_j. An inclusion synthesis case may also exist in a partially heterogeneous DES, because in a partially heterogeneous DES the ESs can share the same available data set. For example, ES i has rules A → B, C → B, D → B and ES j has rules A → B, D → B. Then E_i(B) = {A, C, D}, E_j(B) = {A, D}, and E_j(B) ⊂ E_i(B).

Theorem 3: An overlap synthesis case (refer to Definition 7 in Subsection 3.1) does not exist in a homogeneous DES, but may exist in a partially homogeneous DES, a partially heterogeneous DES, and a heterogeneous DES.
Proof: (1) In a homogeneous DES, it is impossible to have an overlap synthesis case because all ESs have the same knowledge and choose the same original data sets. (2) However, in a partially homogeneous DES and a heterogeneous DES, the available data sets of different ESs can be different. In this situation, for the same solution, different ESs can use different original evidence sets. For example, in a partially homogeneous DES, the rules A → B, C → B, D → B exist in both ES i and ES j, and suppose A, C ∈ DATA_i while C, D ∈ DATA_j. This kind of situation may also exist in a heterogeneous DES. In a partially heterogeneous DES, the ESs can use different data to obtain the same solution by using different knowledge, even if the available data sets are the same. For example, ES i has rules A → B, C → B and ES j has rules A → B, D → B, and the available data set is {A, C, D}. From this example, we know that an overlap synthesis case exists in a partially heterogeneous DES.

Theorem 4: A disjoint synthesis case (refer to Definition 8 in Subsection 3.1) does not exist in a homogeneous DES, but may appear in a partially homogeneous DES, a partially heterogeneous DES, and a heterogeneous DES.

Proof: This proof is quite similar to the proof of Theorem 3. The only difference is that the original evidence sets of the different ESs have no overlap. For example, in a partially homogeneous DES or a heterogeneous DES, both rule A_i → B and rule A_j → B exist in both ES i and ES j, and suppose A_i ∈ DATA_i and A_j ∈ DATA_j (and vice versa). In a partially heterogeneous DES, ES i has the rules A → B, C → B and

ES j has the rules D → B, E → B. The available data set is {A, C, D, E}. This is the disjoint synthesis case.

3.4 Necessary conditions on synthesis strategies in DESs

In this subsection, we will describe both general necessary conditions for all synthesis strategies and specific necessary conditions on synthesis strategies for each synthesis case. The general conditions are on a more abstract level, while the specific conditions are on a more concrete level.

3.4.1 General necessary conditions for all synthesis strategies

Let S represent the synthesis function of CF_i and CF_j, written S(CF_i, CF_j), where CF_i and CF_j represent the uncertainties of a proposition B from ES i and ES j, respectively. S could be any of the synthesis functions appropriate to the different situations of conflict, inclusion, overlap, or disjoint. The following properties are the fundamental consistency conditions which an acceptable synthesis strategy in DESs must satisfy.

(a) Suppose that X is the set of uncertainties of propositions in an inexact reasoning model. If ∀ i, j, CF_i ∈ X and CF_j ∈ X, then S(CF_i, CF_j) ∈ X. The reason for this property is that the value of the uncertainty after synthesis should still be in the same uncertainty range; otherwise the synthesis result will be meaningless.

(b) The synthesis function S on X must satisfy the associative law: S(S(CF_i, CF_j), CF_k) = S(CF_i, S(CF_j, CF_k)). The reason for this property is that in the real world, the final solution of the problem is based only on the evidence which is used to obtain the solution, not on the order of the evidence.

(c) The synthesis function S on X must satisfy the commutative law: S(CF_i, CF_j) = S(CF_j, CF_i). The reason for this property is the same as for (b).
The general necessary conditions are valid only when the synthesis strategies synthesize the different uncertainties in an accumulative manner (i.e., synthesizing two uncertainties at a time).

3.4.2 Specific necessary conditions on synthesis strategies for each synthesis case

In the conflict synthesis case, the necessary condition on the synthesis function should be min{CF_i, CF_j} ≤ S_conflict(CF_i, CF_j) ≤ max{CF_i, CF_j}, where S_conflict is a synthesis function for a conflict synthesis case, because both uncertainties CF_i and CF_j come from the same original evidence set (E_i(B) = E_j(B)). This condition is not tied to any particular inexact reasoning model, even though we use min and max. In other words, the difference between CF_i and CF_j comes only from the different subjective interpretations by different ESs of the same objective evidence. Since there is no additional evidence for either of the ESs, the opinions of both should be considered, and they constrain each other.

In the disjoint synthesis case, there is no overlap between E_i(B) and E_j(B). Either E_i(B) or E_j(B) can contribute positively or negatively to the uncertainty of proposition B being true, independently. Therefore, if both ESs favor the proposition B being true (CF_i > 0 and CF_j > 0 under the EMYCIN [5] inexact reasoning model), the necessary condition on the synthesis function should be S_disjoint(CF_i, CF_j) > max{CF_i, CF_j}, where S_disjoint represents a synthesis function for the disjoint synthesis case. If both ESs are against the proposition B being true (CF_i < 0 and CF_j < 0), then S_disjoint(CF_i, CF_j) < min{CF_i, CF_j}. In all other cases, it should be min{CF_i, CF_j} ≤ S_disjoint(CF_i, CF_j) ≤ max{CF_i, CF_j}.
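These conditions can be checked numerically against a concrete combiner. The sketch below uses the EMYCIN certainty-factor combination as a candidate S_disjoint; the sampling grid and tolerances are our own choices:

```python
# Checking a concrete combiner against the necessary conditions above.
# Candidate: the EMYCIN certainty-factor combination; the checks are a sketch.
import itertools

def emycin_combine(x, y):
    """EMYCIN combination of two certainty factors in [-1, 1]."""
    if x >= 0 and y >= 0:
        return x + y * (1 - x)
    if x <= 0 and y <= 0:
        return x + y * (1 + x)
    return (x + y) / (1 - min(abs(x), abs(y)))

grid = [i / 10 for i in range(-9, 10)]       # sample points in (-1, 1)

for x, y in itertools.product(grid, repeat=2):
    s = emycin_combine(x, y)
    assert -1 <= s <= 1                                  # (a) closure
    assert abs(s - emycin_combine(y, x)) < 1e-9          # (c) commutativity
    if x > 0 and y > 0:
        assert s > max(x, y)                             # disjoint, both positive
    if x < 0 and y < 0:
        assert s < min(x, y)                             # disjoint, both negative

for x, y, z in itertools.product(grid, repeat=3):        # (b) associativity
    assert abs(emycin_combine(emycin_combine(x, y), z)
               - emycin_combine(x, emycin_combine(y, z))) < 1e-9
print("all necessary conditions hold on the sample grid")
```

A grid check of this kind is not a proof, but it quickly rejects candidate functions that violate the conditions.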

In the inclusion synthesis case, if E_i(B) is a subset of E_j(B), the necessary condition for the inclusion synthesis case should be S_inclusion(CF_i, CF_j) = CF_j, where S_inclusion is the synthesis function for the inclusion case. The idea behind this is that ES i derives the solution from less evidence than ES j. The evidence used by ES i is already used by ES j, so ES i makes no additional contribution to the final solution.

In the overlap synthesis case, there is some additional evidence between E_i(B) and E_j(B). The necessary condition for this kind of case should be S_overlap(CF_i, CF_j) ≥ S_conflict(CF_i, CF_j) (if CF_i ≥ 0 and CF_j ≥ 0), or S_overlap(CF_i, CF_j) < S_conflict(CF_i, CF_j) (if CF_i < 0 and CF_j < 0), where S_overlap is the synthesis function for the overlap synthesis case. That means S_overlap is stronger than S_conflict.

4 Methodologies for designing synthesis strategies

4.1 Measurements for synthesis strategies

Firstly, we would like to formally define synthesis strategies as follows.

Definition 18: A perfect synthesis strategy f can be defined as: ∀ X_i ∈ X, f(X_i) = Y_i, where X_i represents an n × (m + 1) matrix (refer to Matrix 2.2) of multiple solutions to a problem from different ESs (an input), Y_i represents a vector of length m of the desired final solution after synthesis from X_i (an output), and X is the set of all X_i.

As described above, we always know the set X (all input matrices). Y_i is our desired final solution after synthesis from X_i. The questions here are: (1) How do we know whether Y_i is our desired final solution for a given X_i? (2) For how many X_i do we know the corresponding Y_i? For the first question, it is reasonable to define Y_i as a synthesis of solutions for X_i given by human experts. For the second question, if we knew Y_i for every X_i, nothing would remain to be done about synthesis of solutions.
In fact, we may know the corresponding Y_i for only a limited number of X_i, and f is defined as a pseudo-function which maps this limited set of X_i to the corresponding Y_i perfectly. The goal of designing synthesis strategies is to find a mapping function (strategy) f' such that, for any X_i, f'(X_i) = Y'_i is very close to Y_i.

Definition 19: A better synthesis strategy is defined as a strategy which can map every X_i to the corresponding Y_i with fewer errors.

Definition 20: The specific error ε_f'(X_i) of a synthesis strategy f' for X_i is defined as ε_f'(X_i) = |Y_i − f'(X_i)| = |Y_i − Y'_i| = {|ε_1|, |ε_2|, ..., |ε_m|}, where m is the number of possible values for an attribute.

Definition 21: The specific mean error ε̄_f'(X_i) of a synthesis strategy f' for X_i is defined as ε̄_f'(X_i) = (|ε_1| + |ε_2| + ... + |ε_m|) / m.

Definition 22: The specific maximum error ε_max(X_i) of the synthesis strategy f' for X_i is defined as ε_max(X_i) = max{|ε_1|, |ε_2|, ..., |ε_m|}.
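The error measures in Definitions 20 to 22 can be sketched as follows; the function names and sample vectors are our own:

```python
# A sketch of the error measures in Definitions 20-22 (names are ours).

def specific_error(y, y_prime):
    """Component-wise absolute errors |Y_i - Y'_i| (Definition 20)."""
    return [abs(a - b) for a, b in zip(y, y_prime)]

def specific_mean_error(y, y_prime):
    """Mean of the component errors (Definition 21)."""
    e = specific_error(y, y_prime)
    return sum(e) / len(e)

def specific_max_error(y, y_prime):
    """Maximum component error (Definition 22)."""
    return max(specific_error(y, y_prime))

y       = [1.0, 0.5, 0.0]    # desired final solution (hypothetical)
y_prime = [0.75, 0.0, 0.0]   # strategy output (hypothetical)
print(specific_error(y, y_prime))       # [0.25, 0.5, 0.0]
print(specific_mean_error(y, y_prime))  # 0.25
print(specific_max_error(y, y_prime))   # 0.5
```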

Definition 23: The general mean error ε̄_f' of the synthesis strategy f' over all X_i is defined as follows. Supposing the number of elements in the set X is N, then ε̄_f' = (Σ_{i=1}^{N} ε̄_f'(X_i)) / N.

Definition 24: The general maximum error of the synthesis strategy f' over all X_i is defined as ε_max = max{ε_max(X_1), ε_max(X_2), ..., ε_max(X_N)}.

Thus, the strategy f' is said to be better than g' if ε̄_f' < ε̄_g'. The best synthesis strategy f is defined by ε̄_f = 0. Such a measurement of a synthesis strategy is known as being 'quantitatively measured'. In practice, it is nearly impossible to define the best f; the above definition is valid only when a certain number of the Y_i are known for their X_i. There are two main ways to judge a synthesis strategy: if the answers are known for some examples, these examples can serve as a benchmark to test the methods; otherwise, the necessary conditions can be used to judge the validity of the methods (refer to Subsection 3.4).

4.2 Principles of the two methodologies

In the literature, there are two methodologies for defining f'. One defines f' by analyzing the characteristics of the input X_i thoroughly (analysis methods); the other defines f' from a number of X_i and the corresponding Y_i (inductive methods). We analyze these methodologies below.

4.2.1 Analysis methods

An analysis method is a methodology which can be used to define a synthesis strategy from X to Y (refer to Definition 18) by analyzing the characteristics of X. These characteristics may include the relationships among the original evidence sets from the ESs which derive the input matrix, the factors which affect the desired final solution, and the weights for all factors.
For example, this method can be used to synthesize committee members' opinions to select the best movie in a movie festival; to synthesize the individual comments from assessors in order to decide whether a project should be funded; and to synthesize multiple opinions from different experts to decide whether a new product should be produced. This method is useful for areas in which the individual solution from each expert can be described by uncertainties, or numbers, and the relationships between X_i and Y_i are not too complicated. In particular, this method is useful for areas in which patterns are difficult to obtain. Normally, analysis methods require some preconditions. If synthesis cases satisfy these preconditions, this kind of strategy can work well [11]. Typical examples of analysis methods are: (1) uncertainty management in which the outputs are based not only on the mean value of the corresponding inputs but also on the uniformity of the corresponding inputs [2]; (2) a synthesis strategy for heterogeneous DESs which was developed based on both transformation functions among the different inexact reasoning models of heterogeneous ESs and the mean values of the inputs from the ESs [9]; and (3) a synthesis strategy based on the factors of the authorities of the ESs, the mean values of the inputs, the influence among ESs in decision making, and the uniformity of the inputs from the ESs. All of these strategies implemented analysis methods by means of mathematical theories.
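As a purely illustrative sketch (our own construction, not one of the cited strategies), an analysis-style combiner might weight the inputs by authority and then discount disagreement:

```python
# An illustrative analysis-style strategy (our own sketch, not one of the
# cited strategies [2, 9]): an authority-weighted mean of the inputs,
# discounted by a crude uniformity penalty. The 0.5 factor is an assumption.

def analysis_synthesize(cfs, authorities):
    """cfs: uncertainties of one value from n ESs; authorities: their A_i."""
    total = sum(authorities)
    mean = sum(a * cf for cf, a in zip(cfs, authorities)) / total
    spread = max(cfs) - min(cfs)          # crude (inverse) uniformity measure
    return mean * (1 - 0.5 * spread)      # discount disagreement

print(analysis_synthesize([0.8, 0.5], [1.0, 1.0]))   # roughly 0.55
```

The point is the shape of the method: the factors (mean, uniformity, authority) and their weights are chosen by analysis, not learned from patterns.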

Popular mathematical theories used to implement analysis methods include decision theory, evidential theory [6], probability theory [1], and so on. Generally, there are five levels to be explored in this kind of method: (1) analysis of the characteristics of the evidence deriving an input X_i (such as conflict cases and non-conflict cases); (2) analysis of the input X_i itself (such as whether all experts give consistently positive solutions, or negative solutions, or some experts give positive solutions while other experts give negative solutions); (3) analysis of the factors which affect Y_i (such as the average of the individual solutions, the consistency among the individual solutions, the confidence of each expert, and so on); (4) how to weight each factor; and (5) how to combine the different factors. We have developed a synthesis strategy [10] to demonstrate how an analysis method is used. For this method, patterns are not absolutely necessary. If there are some patterns, they can be used to test strategies and then give some clues for generating better strategies. If patterns are difficult to obtain, or too few patterns are available, we test whether the strategies satisfy the necessary conditions. For example, one typical necessary condition is that the final solution after synthesis should not be negative if all individual solutions are positive.

4.2.2 Inductive methods

The idea of inductive methods differs from the idea of analysis methods. An inductive method is a methodology which can be used to find the general relationship between X and Y based on sufficient patterns. The principle of inductive methods is to go from the specific to the general. Suppose ∀ i (1 ≤ i ≤ k), f(X_i) = Y_i, in which f is a synthesis strategy, X_i is an input matrix of multiple solutions, and Y_i is the known synthesis solution for X_i; we believe that f also works well for any X_j (j > k), although we cannot guarantee that this is always true.
Obviously, the first condition for using this method to define a strategy is that enough patterns must be known (this requirement differs from that of the analysis methods above); the more patterns, the better. The second condition is that the patterns are distributed fairly randomly. The third condition is that the general mean error of the strategy is close to 0, or the strategy 'converges' in neural network terms. For some synthesis problems (if the relationship between inputs and outputs is linear), we can use mathematical models such as the least squares method to find this relationship. However, this does not work well for complicated problems. Another tool used to implement inductive methods is the neural network. We have developed two synthesis strategies using the neural network technique [12]. These strategies use patterns of both inputs and corresponding outputs in order to find the best mapping functions from inputs to corresponding outputs. A neural network is a good mechanism for simulating certain complicated relationships [12]. The general structure of a neural network includes an input layer, a hidden layer, and an output layer. The three layers are connected by a certain method based on requirements. A neural network can be generated from a number of learning patterns. A pattern includes an input pattern and an output pattern. Generally, neural networks can be trained by supervised training; that is, the input and output patterns are known during training.
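For the linear case mentioned above, a least-squares fit of a simple synthesis strategy can be sketched as follows. The patterns are synthetic, and the underlying relation y = 0.6*x1 + 0.4*x2 is an assumption chosen so the fit is easy to verify:

```python
# A minimal inductive-method sketch: fit a linear synthesis strategy
# S(x1, x2) = w1*x1 + w2*x2 to known (input, output) patterns by least
# squares (normal equations). The patterns below are synthetic assumptions.

patterns = [((0.8, 0.5), 0.68), ((0.2, 0.9), 0.48),
            ((0.6, 0.1), 0.40), ((0.3, 0.7), 0.46)]
# outputs follow y = 0.6*x1 + 0.4*x2, the 'expert' relation to be recovered

# Build the 2x2 normal equations (X^T X) w = X^T y and solve directly.
a11 = sum(x1 * x1 for (x1, x2), _ in patterns)
a12 = sum(x1 * x2 for (x1, x2), _ in patterns)
a22 = sum(x2 * x2 for (x1, x2), _ in patterns)
b1 = sum(x1 * y for (x1, x2), y in patterns)
b2 = sum(x2 * y for (x1, x2), y in patterns)
det = a11 * a22 - a12 * a12
w1 = (b1 * a22 - b2 * a12) / det
w2 = (b2 * a11 - b1 * a12) / det
print(round(w1, 6), round(w2, 6))   # recovers approximately 0.6 and 0.4
```

A neural network generalizes this idea to non-linear relationships, at the cost of more patterns and more training time.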

The output patterns offer hints as to the correct output for a particular input pattern. During training, the hidden layer continuously adjusts its weights to reduce the difference between the desired outputs and the actual outputs. Once the relationship is found, actual outputs Y' come from the mapping function. In Section 6, a neural network strategy will be briefly discussed.

5 Conclusion

In this paper, we have identified the potential cases of synthesis in DESs and classified the types of DESs. Based on these results, necessary conditions on synthesis strategies in the different synthesis cases are recognized. Two methodologies (analysis methods and inductive methods) have been proposed for designing synthesis strategies in DESs. The two methodologies complement each other in the following ways:

(1) Performance. Analysis methods work well for some simple problems; in particular, they are ideal when the desired final solutions can be derived from formulas based on the inputs. Inductive methods can be used to solve complicated problems; for complicated cases, they are better than analysis methods because they can simulate complicated relationships quite closely, provided that inductive functions exist.

(2) Complexity. The complexity of analysis methods is lower than that of inductive methods: the relationship between inputs and outputs is concise, and the calculation is inexpensive. Inductive methods are more complex: the selection of suitable samples is a substantial task, and finding the final mapping function is time-consuming (for example, training a neural network).

(3) Requirements. Analysis methods need virtually no patterns; the only requirement is that the relationship between inputs and outputs can be summarized analytically. Inductive methods need many patterns.
Some of the additional requirements are: (1) the samples should be distributed randomly and cover most cases; and (2) a mapping function should exist (the neural network should converge for the problem). After comparison, our conclusion is that the two methodologies complement each other. Based on the kind of synthesis problem and the information available, different methodologies should be used to design synthesis strategies. This paper gives a clear picture of two potential directions for research in the synthesis of solutions, and also offers a guideline for developing and choosing synthesis strategies in DESs using the two methodologies. Further work will (1) investigate the analogical methodology and develop new synthesis strategies by case-based reasoning based on an analogical method, and (2) investigate how to combine the different methodologies.
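As a minimal illustration of the contrast drawn in point (3), an analysis-style strategy can be written down with no training patterns at all and checked directly against a necessary condition. The factor weights and the consistency measure below are hypothetical choices for illustration, not the strategy of [10]:

```python
def analysis_synthesis(solutions, confidences):
    """Illustrative analysis-style strategy: a confidence-weighted
    average of the individual solutions (all in [0, 1]), damped by
    their mutual consistency.  No training patterns are required."""
    total = sum(confidences)
    # Factor 1: confidence-weighted average of the individual solutions.
    avg = sum(s * c for s, c in zip(solutions, confidences)) / total
    # Factor 2: consistency, equal to 1.0 when all experts fully agree.
    consistency = 1.0 - (max(solutions) - min(solutions))
    # Combine the two factors with (hypothetical) fixed weights.
    return avg * (0.8 + 0.2 * consistency)

# The earthquake example from the introduction: x1 = 0.8, x2 = 0.5,
# with equal confidence in both expert systems.
y = analysis_synthesis([0.8, 0.5], [1.0, 1.0])
# Necessary condition from Section 4: positive individual solutions
# must not yield a negative final solution.
assert y > 0
```

Because no patterns are needed, such a strategy can only be validated against necessary conditions like the one asserted above, which is exactly the testing regime described for analysis methods.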

References

[1] R. Duda, P. Hart and N. Nilsson, Subjective Bayesian Methods for Rule-based Inference Systems, AFIPS, Vol. 45.

[2] N. A. Khan and R. Jain, Uncertainty Management in a Distributed Knowledge Based System, Proceedings of the 9th International Joint Conference on Artificial Intelligence, California.

[3] V. Lesser and D. Corkill, The Distributed Vehicle Monitoring Testbed, AI Magazine, Vol. 4.

[4] S. Matwin, S. Szpakowicz, Z. Koperczak, G. Kersten and W. Michalowski, Negoplan: An Expert System Shell for Negotiation Support, IEEE Expert, Vol. 4.

[5] W. Van Melle, A Domain-Independent System that Aids in Constructing Knowledge-Based Consultation Programs, Ph.D. Dissertation, Report STAN-CS, Computer Science Department, Stanford University, CA.

[6] G. Shafer, A Mathematical Theory of Evidence, Princeton University Press, Princeton and London.

[7] P. W. Thorndyke, D. McArthur and S. Cammarata, Autopilot: A Distributed Planner for Air Fleet Control, Proceedings of the 7th International Joint Conference on Artificial Intelligence, Vancouver, BC, Canada.

[8] J. D. Yang, M. N. Huhns and L. M. Stephens, An Architecture for Control and Communications in Distributed Artificial Intelligence Systems, in Readings in Distributed Artificial Intelligence, edited by A. Bond and L. Gasser, Morgan Kaufmann Publishers, California.

[9] C. Zhang, Cooperation under Uncertainty in Distributed Expert Systems, Artificial Intelligence, Vol. 56.

[10] M. Zhang and C. Zhang, A Comprehensive Strategy for Conflict Resolution in Distributed Expert Systems, Australian Journal of Intelligent Information Processing Systems, Vol. 1, No. 2.

[11] M. Zhang and C. Zhang, Synthesis of Solutions in Distributed Expert Systems, in Artificial Intelligence - Sowing the Seeds for the Future, edited by C. Zhang, J. Debenham and D. Lukose, World Scientific Publishers, Singapore.

[12] M. Zhang and C. Zhang, Neural Network Strategies for Solving Synthesis Problems in Non-conflict Cases in Distributed Expert Systems, in Distributed Artificial Intelligence: Architecture and Modeling, Lecture Notes in Artificial Intelligence, Vol. 1087, edited by C. Zhang and D. Lukose, Springer-Verlag.


Discrete Probability and State Estimation

Discrete Probability and State Estimation 6.01, Spring Semester, 2008 Week 12 Course Notes 1 MASSACHVSETTS INSTITVTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science 6.01 Introduction to EECS I Spring Semester, 2008 Week

More information

Lecture 10: Introduction to reasoning under uncertainty. Uncertainty

Lecture 10: Introduction to reasoning under uncertainty. Uncertainty Lecture 10: Introduction to reasoning under uncertainty Introduction to reasoning under uncertainty Review of probability Axioms and inference Conditional probability Probability distributions COMP-424,

More information

MARKOV CHAINS: STATIONARY DISTRIBUTIONS AND FUNCTIONS ON STATE SPACES. Contents

MARKOV CHAINS: STATIONARY DISTRIBUTIONS AND FUNCTIONS ON STATE SPACES. Contents MARKOV CHAINS: STATIONARY DISTRIBUTIONS AND FUNCTIONS ON STATE SPACES JAMES READY Abstract. In this paper, we rst introduce the concepts of Markov Chains and their stationary distributions. We then discuss

More information

same literal in a formula), and useful equivalence-preserving operations based on anti-links are

same literal in a formula), and useful equivalence-preserving operations based on anti-links are Fast Subsumption Checks Using Anti-Links Anavai Ramesh y Intel Corporation, MS CH6-418, 5000 W. Chandler Blvd., Chandler, AZ 85226. Email: agramesh@sedona.intel.com Bernhard Beckert and Reiner Hahnle University

More information

Lecture 3: Decision Trees

Lecture 3: Decision Trees Lecture 3: Decision Trees Cognitive Systems - Machine Learning Part I: Basic Approaches of Concept Learning ID3, Information Gain, Overfitting, Pruning last change November 26, 2014 Ute Schmid (CogSys,

More information

On the errors introduced by the naive Bayes independence assumption

On the errors introduced by the naive Bayes independence assumption On the errors introduced by the naive Bayes independence assumption Author Matthijs de Wachter 3671100 Utrecht University Master Thesis Artificial Intelligence Supervisor Dr. Silja Renooij Department of

More information

COMP538: Introduction to Bayesian Networks

COMP538: Introduction to Bayesian Networks COMP538: Introduction to Bayesian Networks Lecture 9: Optimal Structure Learning Nevin L. Zhang lzhang@cse.ust.hk Department of Computer Science and Engineering Hong Kong University of Science and Technology

More information

This second-order avor of default logic makes it especially useful in knowledge representation. An important question is, then, to characterize those

This second-order avor of default logic makes it especially useful in knowledge representation. An important question is, then, to characterize those Representation Theory for Default Logic V. Wiktor Marek 1 Jan Treur 2 and Miros law Truszczynski 3 Keywords: default logic, extensions, normal default logic, representability Abstract Default logic can

More information

7. F.Balarin and A.Sangiovanni-Vincentelli, A Verication Strategy for Timing-

7. F.Balarin and A.Sangiovanni-Vincentelli, A Verication Strategy for Timing- 7. F.Balarin and A.Sangiovanni-Vincentelli, A Verication Strategy for Timing- Constrained Systems, Proc. 4th Workshop Computer-Aided Verication, Lecture Notes in Computer Science 663, Springer-Verlag,

More information

Alvaro Rodrigues-Neto Research School of Economics, Australian National University. ANU Working Papers in Economics and Econometrics # 587

Alvaro Rodrigues-Neto Research School of Economics, Australian National University. ANU Working Papers in Economics and Econometrics # 587 Cycles of length two in monotonic models José Alvaro Rodrigues-Neto Research School of Economics, Australian National University ANU Working Papers in Economics and Econometrics # 587 October 20122 JEL:

More information

A generic framework for resolving the conict in the combination of belief structures E. Lefevre PSI, Universite/INSA de Rouen Place Emile Blondel, BP

A generic framework for resolving the conict in the combination of belief structures E. Lefevre PSI, Universite/INSA de Rouen Place Emile Blondel, BP A generic framework for resolving the conict in the combination of belief structures E. Lefevre PSI, Universite/INSA de Rouen Place Emile Blondel, BP 08 76131 Mont-Saint-Aignan Cedex, France Eric.Lefevre@insa-rouen.fr

More information

Secret-sharing with a class of ternary codes

Secret-sharing with a class of ternary codes Theoretical Computer Science 246 (2000) 285 298 www.elsevier.com/locate/tcs Note Secret-sharing with a class of ternary codes Cunsheng Ding a, David R Kohel b, San Ling c; a Department of Computer Science,

More information

An Algebraic View of the Relation between Largest Common Subtrees and Smallest Common Supertrees

An Algebraic View of the Relation between Largest Common Subtrees and Smallest Common Supertrees An Algebraic View of the Relation between Largest Common Subtrees and Smallest Common Supertrees Francesc Rosselló 1, Gabriel Valiente 2 1 Department of Mathematics and Computer Science, Research Institute

More information

Calculus and linear algebra for biomedical engineering Week 3: Matrices, linear systems of equations, and the Gauss algorithm

Calculus and linear algebra for biomedical engineering Week 3: Matrices, linear systems of equations, and the Gauss algorithm Calculus and linear algebra for biomedical engineering Week 3: Matrices, linear systems of equations, and the Gauss algorithm Hartmut Führ fuehr@matha.rwth-aachen.de Lehrstuhl A für Mathematik, RWTH Aachen

More information

Splitting a Default Theory. Hudson Turner. University of Texas at Austin.

Splitting a Default Theory. Hudson Turner. University of Texas at Austin. Splitting a Default Theory Hudson Turner Department of Computer Sciences University of Texas at Austin Austin, TX 7872-88, USA hudson@cs.utexas.edu Abstract This paper presents mathematical results that

More information

Computation of Floating Mode Delay in Combinational Circuits: Theory and Algorithms. Kurt Keutzer. Synopsys. Abstract

Computation of Floating Mode Delay in Combinational Circuits: Theory and Algorithms. Kurt Keutzer. Synopsys. Abstract Computation of Floating Mode Delay in Combinational Circuits: Theory and Algorithms Srinivas Devadas MIT Cambridge, MA Kurt Keutzer Synopsys Mountain View, CA Sharad Malik Princeton University Princeton,

More information

Design of abstract domains using first-order logic

Design of abstract domains using first-order logic Centrum voor Wiskunde en Informatica REPORTRAPPORT Design of abstract domains using first-order logic E. Marchiori Computer Science/Department of Interactive Systems CS-R9633 1996 Report CS-R9633 ISSN

More information

your eyes is more reliable than the information about the position of the object coming from your ears. But even reliable sources such as domain exper

your eyes is more reliable than the information about the position of the object coming from your ears. But even reliable sources such as domain exper A logic for reasoning with inconsistent knowledge Nico Roos Research Institute for Knowledge Systems Tongersestraat 6 P. O. Box 463, 6200 AL Maastricht The Netherlands This paper has been published in

More information

Rapid Introduction to Machine Learning/ Deep Learning

Rapid Introduction to Machine Learning/ Deep Learning Rapid Introduction to Machine Learning/ Deep Learning Hyeong In Choi Seoul National University 1/32 Lecture 5a Bayesian network April 14, 2016 2/32 Table of contents 1 1. Objectives of Lecture 5a 2 2.Bayesian

More information

An Alternative To The Iteration Operator Of. Propositional Dynamic Logic. Marcos Alexandre Castilho 1. IRIT - Universite Paul Sabatier and

An Alternative To The Iteration Operator Of. Propositional Dynamic Logic. Marcos Alexandre Castilho 1. IRIT - Universite Paul Sabatier and An Alternative To The Iteration Operator Of Propositional Dynamic Logic Marcos Alexandre Castilho 1 IRIT - Universite Paul abatier and UFPR - Universidade Federal do Parana (Brazil) Andreas Herzig IRIT

More information

Selection of Classifiers based on Multiple Classifier Behaviour

Selection of Classifiers based on Multiple Classifier Behaviour Selection of Classifiers based on Multiple Classifier Behaviour Giorgio Giacinto, Fabio Roli, and Giorgio Fumera Dept. of Electrical and Electronic Eng. - University of Cagliari Piazza d Armi, 09123 Cagliari,

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information

Axiomatic set theory. Chapter Why axiomatic set theory?

Axiomatic set theory. Chapter Why axiomatic set theory? Chapter 1 Axiomatic set theory 1.1 Why axiomatic set theory? Essentially all mathematical theories deal with sets in one way or another. In most cases, however, the use of set theory is limited to its

More information

A Proof-Theoretic Approach to Irrelevance: Richard E. Fikes. KSL, Stanford University. et al., 1994b].

A Proof-Theoretic Approach to Irrelevance: Richard E. Fikes. KSL, Stanford University. et al., 1994b]. A Proof-Theoretic Approach to Irrelevance: Foundations and Applications Alon Y. Levy AT&T Bell Laboratories Murray Hill, NJ, 7974 levy@research.att.com Richard E. Fikes KSL, Stanford University Palo Alto,

More information

relative accuracy measure in the rule induction algorithm CN2 [2]. The original version of CN2 uses classication accuracy as a rule evaluation measure

relative accuracy measure in the rule induction algorithm CN2 [2]. The original version of CN2 uses classication accuracy as a rule evaluation measure Predictive Performance of Weighted Relative Accuracy Ljupco Todorovski 1,Peter Flach 2, Nada Lavrac 1 1 Department ofintelligent Systems, Jozef Stefan Institute Jamova 39, 1000 Ljubljana, Slovenia Ljupco.Todorovski@ijs.si,

More information

Department of Mathematical Sciences, Norwegian University of Science and Technology, Trondheim

Department of Mathematical Sciences, Norwegian University of Science and Technology, Trondheim Tests for trend in more than one repairable system. Jan Terje Kvaly Department of Mathematical Sciences, Norwegian University of Science and Technology, Trondheim ABSTRACT: If failure time data from several

More information

A Simple Implementation of the Stochastic Discrimination for Pattern Recognition

A Simple Implementation of the Stochastic Discrimination for Pattern Recognition A Simple Implementation of the Stochastic Discrimination for Pattern Recognition Dechang Chen 1 and Xiuzhen Cheng 2 1 University of Wisconsin Green Bay, Green Bay, WI 54311, USA chend@uwgb.edu 2 University

More information

Solvability of Word Equations Modulo Finite Special And. Conuent String-Rewriting Systems Is Undecidable In General.

Solvability of Word Equations Modulo Finite Special And. Conuent String-Rewriting Systems Is Undecidable In General. Solvability of Word Equations Modulo Finite Special And Conuent String-Rewriting Systems Is Undecidable In General Friedrich Otto Fachbereich Mathematik/Informatik, Universitat GH Kassel 34109 Kassel,

More information

FORMALISING SITUATED LEARNING IN COMPUTER-AIDED DESIGN

FORMALISING SITUATED LEARNING IN COMPUTER-AIDED DESIGN FORMALISING SITUATED LEARNING IN COMPUTER-AIDED DESIGN JOHN.S.GERO AND GOURABMOY NATH Key Centre of Design Computing Department of Architectural and Design Science University of Sydney NS W 2006 Australia

More information