Fuzzy Rough Sets with GA-Based Attribute Division
HUGANG HAN, YOSHIO MORIOKA
School of Business, Hiroshima Prefectural University, 562 Nanatsuka-cho, Shobara-shi, Hiroshima, JAPAN

Abstract: Rough set theory is a powerful tool for extracting classification rules from a database, which is usually given in the form of an information system in which information is expressed by attributes and their values. In this paper, we first evaluate every example datum (tuple) using not a singleton but a fuzzy number (or suppose the original information system comes with such fuzzy numbers), because fuzzy numbers are easier to set than singletons. Then, based on the information system with fuzzy numbers, we give a new definition of fuzzy rough set. As a result, the traditional rough set proposed by Z. Pawlak becomes a special case of the fuzzy rough set. Consequently, the upper/lower approximation, which corresponds to the negative/positive rules, varies with the system uncertainties. At the same time, the possible rules can be reduced while the negative/positive rules are increased, which means that the approximation precision is improved. In addition, a genetic algorithm is adopted to divide attributes with continuous values into the most proper discrete values in order to improve the approximation of the system.

Key-Words: Rough sets, information system, rule extraction, fuzzy sets, fuzzy rough sets, genetic algorithm.

1 Introduction
The effective use of computers in various realms of human activity strongly depends on the efficiency of the algorithms implemented in them. Many theoretical foundations for such algorithms have been laid, among which rough set theory [1, 2] is a powerful tool for extracting classification rules from a database.
In general, such a database about the knowledge we are interested in is given in the form of an information system (IS), which in effect defines an approximation space. In the traditional IS, as pointed out by several researchers [3, 4, 5], rule extraction produces only fully correct or certain classification rules, without considering other factors such as uncertain class labeling or the importance of examples. These limitations severely reduce the applicability of the rough set approach to problems that are more probabilistic than deterministic in nature. To deal with these defects and make the IS more realistic, new concepts such as the variable precision rough set model (VP-model) [3] and the uncertain information system (UIS) [5] have been suggested. In a UIS, to account for the data's noise tolerance degrees, two classification factors, corresponding to the positive region and the negative region respectively, must first be set for the whole system; then the certainty and importance of each tuple must be given. However, extracting rules from a UIS involves some tough tasks, including the following. (1) It is difficult to assign singleton values to the classification factors and to each tuple's certainty and importance. For example, you may say the importance factor for a condition attribute set is 0.85, while I may say it is 0.86; such a small difference of 0.01 may lead to a completely different classification rule. A natural way to avoid this problem is therefore to adopt fuzzy numbers, say "about 0.85". What is more, once the classification factors and each example's factors, whether their values are singletons or fuzzy numbers, are involved in the IS, the traditional rough set [1] is no longer capable of giving the upper/lower approximation. (2) How should attribute values that are continuous be divided into discrete intervals?
Obviously, each discrete attribute value is one of the most important settings in a UIS or IS, because a different discrete attribute value may put the corresponding object into a different elementary class, which in turn leads to a different classification rule. For the attribute age, for example, the candidate sets of (discrete) attribute values could be {0, 1}, {0, 1, 2}, and so on, where 0 expresses young, 1 expresses old, and 2 expresses middle. Which one should be taken? Even if we can choose one, what age interval counts as middle? You may propose one interval, and I may argue for another. Who is correct? We have no clue. In this paper, a genetic algorithm (GA) is adopted to obtain the most proper division. In this paper, we first evaluate every example datum using not a singleton but a fuzzy number (or suppose the original information system comes with such fuzzy numbers), because fuzzy numbers are easier to set than singletons. Then, based on the information system with fuzzy numbers, we give a new definition of fuzzy rough set. As a result, the traditional rough set proposed by Z. Pawlak becomes a special case of the fuzzy rough set. Consequently, the upper/lower approximation, which corresponds to the negative/positive rules, varies with the system uncertainties. At the same time, the possible rules can be reduced while the negative/positive rules are increased, which means that the approximation precision is improved. In addition, a genetic algorithm is adopted to divide attributes with continuous values into the
most proper discrete values in order to improve the approximation of the system. The remainder of this paper is arranged as follows. Section 2 describes some basic definitions of rough sets and classification rule extraction in an IS. In Section 3, we focus our attention on a full explanation of the fuzzy rough set in a UIS; at the same time, the relation between the traditional rough set and the fuzzy rough set is made clear. Section 4 describes the GA division method. Finally, the conclusion is given in Section 5.

2 Rough Sets
The database holding the experts' know-how is generally given in the form of an information system. The definition of the traditional information system is given by Pawlak [2].

Definition 1. An information system (IS) is an ordered quadruple

IS = (U, Q, V, ρ)   (1)

where U is the universe, a non-empty finite set of objects x; Q is a finite set of attributes q; V = ⋃_{q∈Q} V_q, where V_q is the domain of attribute q; and ρ is a mapping function such that ρ(x, q) ∈ V_q for every q ∈ Q and x ∈ U. Q is composed of two parts, a set of condition attributes (C) and a decision attribute (D), i.e., Q = C ∪ D. ρ is also called a decision function. If we introduce the function ρ_x : Q → V such that ρ_x(q) = ρ(x, q) for every q ∈ Q and x ∈ U, then ρ_x is called a decision rule in the IS, and x is called a label of the decision rule ρ_x.

Let IS = (U, Q, V, ρ) be an information system, and let q ∈ Q and x, y ∈ U. If ρ_x(q) = ρ_y(q), then we say x and y are indistinguishable, in symbols x R_q y, where R_q is an equivalence relation. Likewise, objects x, y ∈ U are indistinguishable with respect to P ⊆ Q in the IS, in symbols x R_P y, if x R_p y for every p ∈ P. In particular, if P = Q, then x and y are indistinguishable in the IS, written x R y instead of x R_Q y. Therefore each information system IS = (U, Q, V, ρ) uniquely defines an approximation space A = (U, R), where R is the equivalence relation generated by the information system IS.
The equivalence relation R partitions U into a family of disjoint subsets called Q-elementary sets. Likewise, R_C leads to C-elementary sets, and R_D leads to D-elementary sets. Given an arbitrary set X ⊆ U, it may not in general be possible to describe X precisely in A. One may instead characterize X by a pair of lower and upper approximations.

Definition 2. Let R be an equivalence relation on a universe U. For any set X ⊆ U, the lower approximation apr(X) and the upper approximation apr̄(X) are defined as follows:

apr(X) = {x ∈ U | [x]_R ⊆ X}   (2)
apr̄(X) = {x ∈ U | [x]_R ∩ X ≠ ∅}   (3)

where

[x]_R = {y | x R y}   (4)

is the equivalence class containing x. The lower approximation apr(X) is the union of the elementary sets that are subsets of X, and the upper approximation apr̄(X) is the union of the elementary sets that have a non-empty intersection with X. The set bnd(X) = apr̄(X) − apr(X) is called the boundary of X in A. If bnd(X) is empty, then the subset X is exactly definable. Note that a rough set is a set (pair) of lower and upper approximations. An accuracy measure of a set X in the approximation space A = (U, R) is defined as

α(X) = |apr(X)| / |apr̄(X)|   (5)

where |·| denotes the cardinality of a set. Clearly, 0 ≤ α(X) ≤ 1. Moreover, X is called definable in A if α(X) = 1, and undefinable in A if α(X) < 1.

Now let us consider the issue of rule extraction from an information system. A natural way to extract rules, or represent experts' knowledge, is to construct a set of conditional productions, each of the form

IF {set of conditions} THEN {set of decisions}

Such a form can easily be induced by taking advantage of rough sets.
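The approximations of Definition 2 can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation; it assumes each object maps to a hashable tuple of attribute values (its description), and all function names are our own.

```python
from collections import defaultdict

def partition(universe, description):
    """Group objects into elementary sets (equivalence classes of R)."""
    classes = defaultdict(set)
    for x in universe:
        classes[description[x]].add(x)
    return list(classes.values())

def lower_upper(universe, description, X):
    """Return the lower approximation apr(X) and upper approximation apr̄(X)."""
    lower, upper = set(), set()
    for E in partition(universe, description):
        if E <= X:          # elementary set entirely inside X
            lower |= E
        if E & X:           # elementary set meets X
            upper |= E
    return lower, upper

def accuracy(universe, description, X):
    """Accuracy measure alpha(X) = |apr(X)| / |apr̄(X)|, as in eq. (5)."""
    lower, upper = lower_upper(universe, description, X)
    return len(lower) / len(upper)
```

For instance, with universe {1, 2, 3, 4}, descriptions {1: 'a', 2: 'a', 3: 'b', 4: 'c'}, and X = {1, 3}, the lower approximation is {3}, the upper approximation is {1, 2, 3}, and the accuracy is 1/3.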
In an approximation space A = (U, R), with respect to a subset X of U, the whole universe U is partitioned into three regions:

positive region: pos(X) = apr(X);
negative region: neg(X) = U − apr̄(X);
boundary region: bnd(X) = apr̄(X) − apr(X);

which lead to the following classification rules: describing pos(X) gives positive rules; describing neg(X) gives negative rules; describing bnd(X) gives possible rules. A simple illustration of such classification rule extraction follows.

[Example] Suppose there is an information system IS = (U, Q, V, ρ), a medical database about the diagnosis of influenza (Tab. 1). In this information system, U = {p1, p2, ..., p6}, in which each object (element) represents a patient; Q = C ∪ D = {temp, sneeze, headache, flu}; V_temp = {0, 1, 2}, in which 0 expresses normal, 1 expresses high, and 2 expresses very high; and V_sneeze = V_headache = V_flu = {0, 1}, in which 0 expresses no and 1 expresses yes. The mapping function ρ is given in the table. Clearly, such an approximation space yields the following elementary sets with respect to the attributes temp, sneeze, and headache:

E1 = {p1, p5}, E2 = {p2}, E3 = {p3}, E4 = {p4}, E5 = {p6}

i.e., the C-elementary sets are {E1, E2, ..., E5}.
Table 1: Influenza data

       C                         D
       temp  sneeze  headache | flu
  p1    2      0       0      |  1
  p2    1      1       0      |  1
  p3    1      0       1      |  0
  p4    1      1       1      |  1
  p5    2      0       0      |  0
  p6    0      1       1      |  0

Now let us approximate the subset X = {p1, p2, p4}, the set of patients who are catching the flu. Based on the concepts of the regular IS above, we have

apr(X) = {p2, p4}
apr̄(X) = {p1, p5, p2, p4}
pos(X) = {p2, p4}   (6)
neg(X) = {p3, p6}   (7)
bnd(X) = {p1, p5}   (8)

Therefore, pos(X) yields the positive rules below:
(1) IF temp=1 ∧ sneeze=1 ∧ headache=0 THEN flu=yes;
(2) IF temp=1 ∧ sneeze=1 ∧ headache=1 THEN flu=yes;
where ∧ denotes "and". bnd(X) yields the possible rule below:
(3) IF temp=2 ∧ sneeze=0 ∧ headache=0 THEN flu=possible.
We can see that in the boundary region bnd(X) = {p1, p5}, although the two patients have the same condition, temp=2 ∧ sneeze=0 ∧ headache=0, their decisions differ: one is yes and the other is no in the database. It means that under such a condition you are probably catching the flu, as shown in rule (3). Furthermore, the negative rules are obtained by describing neg(X) as follows:
(4) IF temp=1 ∧ sneeze=0 ∧ headache=1 THEN flu=no;
(5) IF temp=0 ∧ sneeze=1 ∧ headache=1 THEN flu=no.
By (5), the approximation accuracy is α(X) = 2/4 = 0.5.

3 Fuzzy Rough Sets
As mentioned previously, we can use rough set theory to extract (classification) rules from an information system, which leads to an approximation space. However, such a classification assumes noise-free data, identical importance for each example (tuple), and error-free final rules. Therefore, in this section we first propose an uncertain information system (UIS) in an attempt to relax the traditional information system.
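The influenza example can be worked out directly in code. In this sketch the condition tuples (temp, sneeze, headache) are taken from rules (1)-(5) and the elementary sets E1-E5 above; the variable names are our own.

```python
from collections import defaultdict

# Condition attribute values (temp, sneeze, headache) per patient.
cond = {
    'p1': (2, 0, 0), 'p2': (1, 1, 0), 'p3': (1, 0, 1),
    'p4': (1, 1, 1), 'p5': (2, 0, 0), 'p6': (0, 1, 1),
}
X = {'p1', 'p2', 'p4'}                  # patients with flu = yes

# C-elementary sets: patients sharing identical condition values.
classes = defaultdict(set)
for p, c in cond.items():
    classes[c].add(p)

lower, upper = set(), set()
for E in classes.values():
    if E <= X:
        lower |= E                      # E entirely inside X
    if E & X:
        upper |= E                      # E meets X

pos = lower                             # positive region, eq. (6)
neg = set(cond) - upper                 # negative region, eq. (7)
bnd = upper - lower                     # boundary region, eq. (8)
alpha = len(lower) / len(upper)         # accuracy, eq. (5)
```

Running this reproduces pos(X) = {p2, p4}, neg(X) = {p3, p6}, bnd(X) = {p1, p5}, and α(X) = 0.5, matching the text.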
Definition 3. An uncertain information system (UIS) is defined as follows:

UIS = (U, C, D, V, ρ, W)   (9)

where U is the universe, a non-empty finite set of objects x; C is a finite set of condition attributes; D is a finite set of decision attributes; V = ⋃_{q∈C∪D} V_q, where V_q is the domain of attribute q; ρ is a mapping function such that ρ(x, q) ∈ V_q for every q ∈ C ∪ D and x ∈ U; and W = {w_x | x ∈ U}, where w_x is a fuzzy number defined by a membership function µ_{w_x} ∈ [0, 1] that assigns each tuple an importance (weight) factor representing how important (weighty) the tuple is for the corresponding decision.

As in the definition above, we have added a weight factor to the traditional IS. The main purpose is to give an evaluation to each tuple. That is, for the same decision there may be several tuples with either the same or different conditions, but considering their respective situations, such as noise, confidence, and so on, they do have different weights. For example, in a medical database there are two patients p1 and p2 with different conditions, meaning different condition attribute values, but the same decision, say catching a cold. In such a diagnosis, which condition leads more readily to the decision? Naturally, a difference between them arises. Therefore, if there is a strong causal relationship between condition and decision in the case of p1, the weight will be large, say 1; likewise, if the causal relationship in the case of p2 is weak, the weight will be small, say 0.4. As a result, every object x appears with its own weight in the universe U. The images are depicted in Fig. 1 and Fig. 2: in the UIS (Fig. 2), each object has a different weight, depicted by the size of its circle, whereas in the traditional IS (Fig. 1) every object has the same size of circle.

Figure 1: Image of the traditional IS

Let E and X be a non-empty elementary set and a non-empty subset in the approximation space, respectively.
First, similarly to [3], we define a concept called the relative degree of classification of the set E with respect to the set X as follows (Fig. 3):

c(E, X) = (Σ_{x∈I} w_x) / (Σ_{x∈E} w_x),  I = E ∩ X   (10)

Considering the system situation, such as the admissible level of misclassification, noise, and approximation precision, one can set up two thresholds β_P and β_N,
Figure 2: Image of the UIS (positive region, boundary region, negative region)

Figure 3: E, X, and their intersection I

called the positive threshold and the negative threshold, respectively. We say that E is included in X if c(E, X) ≥ β_P, and that E has nothing to do with X if c(E, X) ≤ β_N. Based on the relative degree of classification (10), the lower approximation and upper approximation of a subset X with respect to the thresholds β_P and β_N, in symbols apr_βP(X) and apr̄_βN(X) respectively, are defined as

apr_βP(X) = pos_βP(X)   (11)
apr̄_βN(X) = U − neg_βN(X)   (12)

where

pos_βP(X) = ⋃ {E ∈ R*_C | c(E, X) ≥ β_P}   (13)
neg_βN(X) = ⋃ {E ∈ R*_C | c(E, X) ≤ β_N}   (14)

and R*_C = {E1, E2, ..., EN} is the family of C-elementary sets. Similarly, the boundary region bnd_βP,βN(X) of X is composed of those elementary sets that are in neither the positive region pos_βP(X) nor the negative region neg_βN(X) of X:

bnd_βP,βN(X) = ⋃ {E ∈ R*_C | β_N < c(E, X) < β_P}   (15)

In this way, the accuracy measure of a set X in the approximation space A = (U, R) is given by

α(X) = (Σ_{x∈apr_βP(X)} w_x) / (Σ_{x∈apr̄_βN(X)} w_x)   (16)

The difference between the two rough sets is shown in Figs. 4 and 5. Consider the boundary regions: compared with the one in Fig. 4, the corresponding part is greatly reduced in Fig. 5, where the elementary sets with arrows pointing outward from the subset X go to the negative region, while those with arrows pointing inward go to the positive region.

Figure 4: The three regions in the rough set
Figure 5: The three regions in the fuzzy rough set (positive region, boundary region, negative region)

Regarding the fuzzy rough set, we would like to give the following remarks. The traditional IS is a special case of the UIS, as the traditional rough set is of the fuzzy rough set. Namely,

IS = (U, C, D, V, ρ, W)   (17)

where W = {w_x | x ∈ U} and w_x = 1 for every x. In other words, in the traditional IS all data have equal evaluations (weights).
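Equations (10)-(15) can be sketched as follows. For brevity this sketch uses scalar weights in place of the fuzzy numbers w_x (ordering genuine fuzzy numbers would require something like the removal method cited later); the function names are our own.

```python
def c(E, X, w):
    """Relative degree of classification of elementary set E w.r.t. X, eq. (10)."""
    return sum(w[x] for x in E & X) / sum(w[x] for x in E)

def regions(elementary, X, w, beta_p, beta_n):
    """Split the universe into positive, negative, and boundary regions,
    eqs. (13)-(15), given the thresholds beta_p >= beta_n."""
    pos, neg, bnd = set(), set(), set()
    for E in elementary:
        d = c(E, X, w)
        if d >= beta_p:
            pos |= E        # E is included in X
        elif d <= beta_n:
            neg |= E        # E has nothing to do with X
        else:
            bnd |= E        # E is undecided
    return pos, neg, bnd
```

With β_P = 1 and β_N = 0 this reduces to the classical positive and negative regions of eqs. (18)-(19); loosening β_P below 1 moves borderline elementary sets out of the boundary region, which is exactly how the fuzzy rough set shrinks the set of possible rules.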
Moreover, the positive region pos(X) and the negative region neg(X) are special cases of pos_βP(X) and neg_βN(X), respectively:

pos(X) = pos_1(X) = ⋃ {E ∈ R*_C | c(E, X) = 1}   (18)
neg(X) = neg_0(X) = ⋃ {E ∈ R*_C | c(E, X) = 0}   (19)

where

c(E, X) = (Σ_{x∈I} w_x) / (Σ_{x∈E} w_x),  I = E ∩ X   (20)

Compared with the UIS proposed in [5], there are two main differences in this paper. (1) In [5], for each tuple two parameters (d, g) must be set, where d is an importance function U → [0, 1] corresponding to the condition attribute set C, and g is a certainty function U → [0, 1] corresponding to the decision attribute set D. How can these parameters be set exactly? Of course, you can say g = 0.67 and d = 0.70, but there is no clue for setting the parameters like that. In this paper, for every tuple
(example) datum, there is only one parameter to set, based on the situation (noise, decision confidence, and so on); and, (2) to improve flexibility, the fuzzy number w_x is adopted. In this way, the fuzzy membership function can cover inaccurate parameter settings to a large extent. For example, it is difficult to say that w is exactly 0.78, but it is easier to say that w is about 0.8. Even if the real value of w is exactly 0.78 and you set w as "about 0.8", such a mis-setting is surely covered by the membership function. For the fuzzy number setting, we can use triangular fuzzy numbers (TFNs) from 0 to 1, as shown in Fig. 6: fuzzy number 1 is employed if the weight is very high, whereas fuzzy number 0 is employed if the weight is very low. Also, according to the decision situation, the width l of the membership function varies flexibly. In other words, for a decision, if the noise is larger or the confidence weaker, l is set bigger, and vice versa. There is another reason for employing such TFNs: the four rules of arithmetic are easier to perform on them than on other kinds of fuzzy membership functions, such as Gaussian ones. In addition, to order two fuzzy sets, the so-called removal method [8] is available.

Figure 6: Triangular fuzzy membership function µ_{w_x} (center x0, width l)

4 GA-Based Attribute Division
In the example in Section 2, for the attribute values of temp we used V_temp = {0, 1, 2} = {normal, high, very high}. Actually, the temperatures of patients are continuous; for example, the temperatures of the six patients (temp(p1) to temp(p6)) are given in Tab. 2. To divide the continuous

Table 2: Real temperatures (°C) of patients p1 to p6

attribute values into discrete (or digital) attribute values such as (0, 1, 2), one of the most common methods is to choose appropriate intervals, each of which represents one discrete value such as 1 or 2.
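A triangular fuzzy number "about x0" as in Fig. 6 can be sketched directly. Representing a symmetric TFN as a (center, half-width) pair is an assumption made here for brevity, and the function names are our own; the addition rule illustrates why arithmetic on TFNs is easy compared with Gaussian membership functions.

```python
def tfn_membership(x, x0, l):
    """Membership degree of x in the TFN centered at x0 with width l:
    1 at the center, falling linearly to 0 at distance l."""
    return max(0.0, 1.0 - abs(x - x0) / l)

def tfn_add(a, b):
    """Sum of two symmetric TFNs given as (center, half-width) pairs:
    centers and widths simply add component-wise."""
    return (a[0] + b[0], a[1] + b[1])
```

For example, "about 0.8" with l = 0.1 gives membership 1 at 0.8, about 0.5 at 0.85, and 0 at 0.95; so a true value of 0.78, mis-set as "about 0.8", still receives a high membership degree.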
Here, one such case is shown in Fig. 7; the discrete attribute values of temp in Tab. 1 match the division shown there.

Figure 7: One case of division

Figure 8: Another case of division

Table 3: Influenza data with a different conversion of temp

However, if we change the division of Fig. 7 to that of Fig. 8, then Tab. 1 becomes Tab. 3, in which the (discrete) attribute values of temp differ from those of Tab. 1. In this case, for the same subset X = {p1, p2, p4},

apr(X) = {p1, p2, p4}
apr̄(X) = {p1, p2, p4}
α(X) = 1

Clearly, the approximation accuracy is improved. Therefore, even for the same original database containing continuous attribute values, different divisions yield different approximation accuracies, which directly influences the rules extracted from the database. Consequently, when we consider such a division of originally continuous attribute values, there are two problems to answer: (1) how should the continuous values be divided into intervals, each corresponding to a discrete number, and (2) how many intervals should be taken? To solve these problems, we employ a GA, which has been widely used on various problems as a robust search method, especially in optimum seeking. As an important branch of evolutionary computation, the GA is characterized by its effectiveness, strong robustness, and simple implementation; it also has the advantage of not being restrained by restrictive factors of the search space. A GA simulates
the evolutionary process of a set of genomes over time. Genome is a biological term corresponding to a set of genes, and a gene is the basic building block of any living entity. For our use here, a genome represents a two-digit hexadecimal number such as A8, which can finally be translated into division points such as 37.5 and 38.5 in Fig. 8, and a gene represents one binary digit in the binary-coded hexadecimal code (BCHC), such as 10101000, which is the BCHC of A8. A GA starts with a set of genomes, referred to as a generation, created randomly, and then the evolutionary process of the survival of the fittest genomes takes place: the unfit genomes are removed, and the remaining genomes reproduce a new set of genomes. Reproduction of the genomes is accomplished by simulating the two well-known genetic processes of mutation and crossover. This process is repeated, and each repetition creates a fitter generation. For our use in this paper, the GA is composed of the following steps.

Step 1. Find the largest value x_max and the smallest value x_min among the continuous attribute values x under consideration, and evenly divide [x_min, x_max] into seven intervals, so that the resulting 8 points correspond to a BCHC. The 8 points are candidate division points: the binary digit 1 in the BCHC means the point is a division point, and 0 means it is not. For example, the hexadecimal 4C leads to the division shown in Fig. 9.

Figure 9: Division of a BCHC over [x_min, x_max]

Step 2. Create a set of N random BCHCs (the first generation of genomes).

Step 3. Calculate the fitness of each BCHC. Each BCHC leads to an information system like Tab. 1 or Tab. 3, so for a subset X to be approximated there are different approximation accuracies α(X). Here we use α(X) as the fitness; naturally, a better accuracy represents a better fitness. In Fig. 9, the interval number is 3, which equals the number of binary 1s in the BCHC (4C).
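The BCHC encoding of Steps 1-3 can be sketched as follows. This is an illustrative sketch under our own naming, and the exact mapping of boundary points to intervals is an assumption; the GA loop itself (Steps 4-7) is omitted.

```python
def division_points(genome, x_min, x_max):
    """Decode an 8-bit BCHC genome (e.g. 0x4C) into division points:
    [x_min, x_max] is evenly split into seven intervals, giving eight
    candidate points; bit i (counting from the most significant) set to 1
    means candidate point i is a division point."""
    step = (x_max - x_min) / 7
    return [x_min + i * step
            for i in range(8) if (genome >> (7 - i)) & 1]

def discretize(x, cuts):
    """Map a continuous attribute value to the index of its interval."""
    return sum(x >= c for c in sorted(cuts))
```

For example, over [0, 7] the genome 4C = 01001100 marks candidate points 1, 4, and 5 as cuts, and its three 1-bits match the interval number 3 read off Fig. 9; the fitness of a genome would then be the accuracy α(X) of the information system produced by `discretize`.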
We may suppose that the maximum number of intervals, Itvl_max, is given. Then, when calculating the fitness of each BCHC, we assign the worst fitness, say 0, if the number of binary 1s in the BCHC exceeds Itvl_max. The GA terminates when the desired fitness is obtained. At the same time, problem (2) described in the previous section is resolved: the number of binary 1s in the BCHC with the best fitness is the interval number we should take.

Step 4. Sort the BCHCs by their fitness in descending order.

Step 5. Keep the M (M < N) fittest BCHCs and remove the rest.

Step 6. Create the next generation by making N − M new BCHCs out of the M kept ones using crossover and mutation operations.

Step 7. Go to Step 3.

5 Conclusion
In this paper, an uncertain information system (UIS), in which fuzzy sets are employed to cover the certainties, importance, and classification factors, has been proposed. A new fuzzy rough set model is then derived from the UIS. Consequently, in rule extraction from the UIS, the possible rules can be reduced while the negative/positive rules are increased; as a result, the approximation precision can be improved. In addition, a genetic algorithm is adopted to divide attributes with continuous values into the most proper discrete values in order to improve the approximation of the system.

References:
[1] Z. Pawlak, Rough sets, Int. J. of Computer and Information Sciences, vol. 11, no. 5, pp. 341-356, 1982.
[2] Z. Pawlak, Rough classification, Int. J. of Man-Machine Studies, vol. 20, pp. 469-483, 1984.
[3] W. Ziarko, Variable precision rough set model, Journal of Computer and System Sciences, vol. 46, pp. 39-59, 1993.
[4] W. Ziarko (ed.), Rough Sets, Fuzzy Sets and Knowledge Discovery, Springer-Verlag, 1994.
[5] J. Han, X. Hu, and N. Cercone, Supervised learning: a generalized rough set approach, Proceedings of the Second International Conference on Rough Sets and Current Trends in Computing (RSCTC 2000), Banff, Canada, October 2000.
[6] Z. Pawlak, S. K. M. Wong, and W.
Ziarko, Rough sets: probabilistic versus deterministic approach, Int. J. of Man-Machine Studies, vol. 29, pp. 81-95, 1988.
[7] H. Han, Y. Morioka, and K. Takano, Rule extraction using GA-based fuzzy modeling, Advances in Intelligent Systems, Fuzzy Systems, Evolutionary Computation, A. Grmela and N. E. Mastorakis (eds.), WSEAS.
[8] A. Kaufmann and M. M. Gupta, Fuzzy Mathematical Models in Engineering and Management Science, Elsevier Science Publishers B.V., 1988.
2012, TextRoad Publication ISSN 2090-4304 Journal of Basic and Applied Scientific Research www.textroad.com Some Properties of a Set-valued Homomorphism on Modules S.B. Hosseini 1, M. Saberifar 2 1 Department
More informationResearch Article Decision Analysis via Granulation Based on General Binary Relation
International Mathematics and Mathematical Sciences Volume 2007, Article ID 12714, 13 pages doi:10.1155/2007/12714 esearch Article Decision Analysis via Granulation Based on General Binary elation M. M.
More informationSection 1 What Is Physics? Chapter 1. The Branches of Physics. Houghton Mifflin Harcourt Publishing Company
Section 1 What Is Physics? The Branches of Physics Section 1 What Is Physics? Physics The goal of physics is to use a small number of basic concepts, equations, and assumptions to describe the physical
More informationApplications of Some Topological Near Open Sets to Knowledge Discovery
IJACS International Journal of Advanced Computer Science Applications Vol 7 No 1 216 Applications of Some Topological Near Open Sets to Knowledge Discovery A S Salama Tanta University; Shaqra University
More informationLecture 22. Introduction to Genetic Algorithms
Lecture 22 Introduction to Genetic Algorithms Thursday 14 November 2002 William H. Hsu, KSU http://www.kddresearch.org http://www.cis.ksu.edu/~bhsu Readings: Sections 9.1-9.4, Mitchell Chapter 1, Sections
More informationAn algorithm for induction of decision rules consistent with the dominance principle
An algorithm for induction of decision rules consistent with the dominance principle Salvatore Greco 1, Benedetto Matarazzo 1, Roman Slowinski 2, Jerzy Stefanowski 2 1 Faculty of Economics, University
More informationA Logical Formulation of the Granular Data Model
2008 IEEE International Conference on Data Mining Workshops A Logical Formulation of the Granular Data Model Tuan-Fang Fan Department of Computer Science and Information Engineering National Penghu University
More informationIntuitionistic Fuzzy Estimation of the Ant Methodology
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 9, No 2 Sofia 2009 Intuitionistic Fuzzy Estimation of the Ant Methodology S Fidanova, P Marinov Institute of Parallel Processing,
More informationFuzzy Systems. Possibility Theory.
Fuzzy Systems Possibility Theory Rudolf Kruse Christian Moewes {kruse,cmoewes}@iws.cs.uni-magdeburg.de Otto-von-Guericke University of Magdeburg Faculty of Computer Science Department of Knowledge Processing
More information(Refer Slide Time: 0:21)
Theory of Computation Prof. Somenath Biswas Department of Computer Science and Engineering Indian Institute of Technology Kanpur Lecture 7 A generalisation of pumping lemma, Non-deterministic finite automata
More informationPattern Recognition Approaches to Solving Combinatorial Problems in Free Groups
Contemporary Mathematics Pattern Recognition Approaches to Solving Combinatorial Problems in Free Groups Robert M. Haralick, Alex D. Miasnikov, and Alexei G. Myasnikov Abstract. We review some basic methodologies
More informationComparison of Rough-set and Interval-set Models for Uncertain Reasoning
Yao, Y.Y. and Li, X. Comparison of rough-set and interval-set models for uncertain reasoning Fundamenta Informaticae, Vol. 27, No. 2-3, pp. 289-298, 1996. Comparison of Rough-set and Interval-set Models
More informationA PRIMER ON ROUGH SETS:
A PRIMER ON ROUGH SETS: A NEW APPROACH TO DRAWING CONCLUSIONS FROM DATA Zdzisław Pawlak ABSTRACT Rough set theory is a new mathematical approach to vague and uncertain data analysis. This Article explains
More informationInduction of Decision Trees
Induction of Decision Trees Peter Waiganjo Wagacha This notes are for ICS320 Foundations of Learning and Adaptive Systems Institute of Computer Science University of Nairobi PO Box 30197, 00200 Nairobi.
More informationInterpreting Low and High Order Rules: A Granular Computing Approach
Interpreting Low and High Order Rules: A Granular Computing Approach Yiyu Yao, Bing Zhou and Yaohua Chen Department of Computer Science, University of Regina Regina, Saskatchewan, Canada S4S 0A2 E-mail:
More informationDETECTING THE FAULT FROM SPECTROGRAMS BY USING GENETIC ALGORITHM TECHNIQUES
DETECTING THE FAULT FROM SPECTROGRAMS BY USING GENETIC ALGORITHM TECHNIQUES Amin A. E. 1, El-Geheni A. S. 2, and El-Hawary I. A **. El-Beali R. A. 3 1 Mansoura University, Textile Department 2 Prof. Dr.
More informationDecision Trees. Nicholas Ruozzi University of Texas at Dallas. Based on the slides of Vibhav Gogate and David Sontag
Decision Trees Nicholas Ruozzi University of Texas at Dallas Based on the slides of Vibhav Gogate and David Sontag Supervised Learning Input: labelled training data i.e., data plus desired output Assumption:
More informationChapter 8: Introduction to Evolutionary Computation
Computational Intelligence: Second Edition Contents Some Theories about Evolution Evolution is an optimization process: the aim is to improve the ability of an organism to survive in dynamically changing
More informationParameters to find the cause of Global Terrorism using Rough Set Theory
Parameters to find the cause of Global Terrorism using Rough Set Theory Sujogya Mishra Research scholar Utkal University Bhubaneswar-751004, India Shakti Prasad Mohanty Department of Mathematics College
More informationBasic counting techniques. Periklis A. Papakonstantinou Rutgers Business School
Basic counting techniques Periklis A. Papakonstantinou Rutgers Business School i LECTURE NOTES IN Elementary counting methods Periklis A. Papakonstantinou MSIS, Rutgers Business School ALL RIGHTS RESERVED
More informationON SOME PROPERTIES OF ROUGH APPROXIMATIONS OF SUBRINGS VIA COSETS
ITALIAN JOURNAL OF PURE AND APPLIED MATHEMATICS N. 39 2018 (120 127) 120 ON SOME PROPERTIES OF ROUGH APPROXIMATIONS OF SUBRINGS VIA COSETS Madhavi Reddy Research Scholar, JNIAS Budhabhavan, Hyderabad-500085
More informationAPPLICATION FOR LOGICAL EXPRESSION PROCESSING
APPLICATION FOR LOGICAL EXPRESSION PROCESSING Marcin Michalak, Michał Dubiel, Jolanta Urbanek Institute of Informatics, Silesian University of Technology, Gliwice, Poland Marcin.Michalak@polsl.pl ABSTRACT
More informationRough Sets for Uncertainty Reasoning
Rough Sets for Uncertainty Reasoning S.K.M. Wong 1 and C.J. Butz 2 1 Department of Computer Science, University of Regina, Regina, Canada, S4S 0A2, wong@cs.uregina.ca 2 School of Information Technology
More informationROUGH SETS THEORY AND DATA REDUCTION IN INFORMATION SYSTEMS AND DATA MINING
ROUGH SETS THEORY AND DATA REDUCTION IN INFORMATION SYSTEMS AND DATA MINING Mofreh Hogo, Miroslav Šnorek CTU in Prague, Departement Of Computer Sciences And Engineering Karlovo Náměstí 13, 121 35 Prague
More informationOpleiding Informatica
Opleiding Informatica Tape-quantifying Turing machines in the arithmetical hierarchy Simon Heijungs Supervisors: H.J. Hoogeboom & R. van Vliet BACHELOR THESIS Leiden Institute of Advanced Computer Science
More informationthe tree till a class assignment is reached
Decision Trees Decision Tree for Playing Tennis Prediction is done by sending the example down Prediction is done by sending the example down the tree till a class assignment is reached Definitions Internal
More information1 Chapter 1: SETS. 1.1 Describing a set
1 Chapter 1: SETS set is a collection of objects The objects of the set are called elements or members Use capital letters :, B, C, S, X, Y to denote the sets Use lower case letters to denote the elements:
More informationGENETIC ALGORITHM FOR CELL DESIGN UNDER SINGLE AND MULTIPLE PERIODS
GENETIC ALGORITHM FOR CELL DESIGN UNDER SINGLE AND MULTIPLE PERIODS A genetic algorithm is a random search technique for global optimisation in a complex search space. It was originally inspired by an
More informationReasoning with Uncertainty
Reasoning with Uncertainty Representing Uncertainty Manfred Huber 2005 1 Reasoning with Uncertainty The goal of reasoning is usually to: Determine the state of the world Determine what actions to take
More informationAn Evolution Strategy for the Induction of Fuzzy Finite-state Automata
Journal of Mathematics and Statistics 2 (2): 386-390, 2006 ISSN 1549-3644 Science Publications, 2006 An Evolution Strategy for the Induction of Fuzzy Finite-state Automata 1,2 Mozhiwen and 1 Wanmin 1 College
More informationEvolutionary Computation
Evolutionary Computation - Computational procedures patterned after biological evolution. - Search procedure that probabilistically applies search operators to set of points in the search space. - Lamarck
More informationEnsembles of classifiers based on approximate reducts
Fundamenta Informaticae 34 (2014) 1 10 1 IOS Press Ensembles of classifiers based on approximate reducts Jakub Wróblewski Polish-Japanese Institute of Information Technology and Institute of Mathematics,
More information(2) Generalize De Morgan s laws for n sets and prove the laws by induction. 1
ARS DIGITA UNIVERSITY MONTH 2: DISCRETE MATHEMATICS PROFESSOR SHAI SIMONSON PROBLEM SET 2 SOLUTIONS SET, FUNCTIONS, BIG-O, RATES OF GROWTH (1) Prove by formal logic: (a) The complement of the union of
More informationFUZZY ASSOCIATION RULES: A TWO-SIDED APPROACH
FUZZY ASSOCIATION RULES: A TWO-SIDED APPROACH M. De Cock C. Cornelis E. E. Kerre Dept. of Applied Mathematics and Computer Science Ghent University, Krijgslaan 281 (S9), B-9000 Gent, Belgium phone: +32
More informationPart 01 - Notes: Identifying Significant Figures
Part 01 - Notes: Identifying Significant Figures Objectives: Identify the number of significant figures in a measurement. Compare relative uncertainties of different measurements. Relate measurement precision
More informationThe Fourth International Conference on Innovative Computing, Information and Control
The Fourth International Conference on Innovative Computing, Information and Control December 7-9, 2009, Kaohsiung, Taiwan http://bit.kuas.edu.tw/~icic09 Dear Prof. Yann-Chang Huang, Thank you for your
More informationFinancial Informatics IX: Fuzzy Sets
Financial Informatics IX: Fuzzy Sets Khurshid Ahmad, Professor of Computer Science, Department of Computer Science Trinity College, Dublin-2, IRELAND November 19th, 2008 https://www.cs.tcd.ie/khurshid.ahmad/teaching.html
More informationNew Similarity Measures for Intuitionistic Fuzzy Sets
Applied Mathematical Sciences, Vol. 8, 2014, no. 45, 2239-2250 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ams.2014.43171 New Similarity Measures for Intuitionistic Fuzzy Sets Peerasak Intarapaiboon
More informationRough Set Approaches for Discovery of Rules and Attribute Dependencies
Rough Set Approaches for Discovery of Rules and Attribute Dependencies Wojciech Ziarko Department of Computer Science University of Regina Regina, SK, S4S 0A2 Canada Abstract The article presents an elementary
More informationRough Approach to Fuzzification and Defuzzification in Probability Theory
Rough Approach to Fuzzification and Defuzzification in Probability Theory G. Cattaneo and D. Ciucci Dipartimento di Informatica, Sistemistica e Comunicazione Università di Milano Bicocca, Via Bicocca degli
More informationComputational Intelligence Lecture 3: Simple Neural Networks for Pattern Classification
Computational Intelligence Lecture 3: Simple Neural Networks for Pattern Classification Farzaneh Abdollahi Department of Electrical Engineering Amirkabir University of Technology Fall 2011 arzaneh Abdollahi
More informationTOPOLOGICAL ASPECTS OF YAO S ROUGH SET
Chapter 5 TOPOLOGICAL ASPECTS OF YAO S ROUGH SET In this chapter, we introduce the concept of transmissing right neighborhood via transmissing expression of a relation R on domain U, and then we study
More informationReading 11 : Relations and Functions
CS/Math 240: Introduction to Discrete Mathematics Fall 2015 Reading 11 : Relations and Functions Instructor: Beck Hasti and Gautam Prakriya In reading 3, we described a correspondence between predicates
More informationBanacha Warszawa Poland s:
Chapter 12 Rough Sets and Rough Logic: A KDD Perspective Zdzis law Pawlak 1, Lech Polkowski 2, and Andrzej Skowron 3 1 Institute of Theoretical and Applied Informatics Polish Academy of Sciences Ba ltycka
More informationTwo Semantic Issues in a Probabilistic Rough Set Model
Fundamenta Informaticae 108 (2011) 249 265 249 IOS Press Two Semantic Issues in a Probabilistic Rough Set Model Yiyu Yao Department of Computer Science University of Regina Regina, Canada yyao@cs.uregina.ca
More informationROUGH SET THEORY FOR INTELLIGENT INDUSTRIAL APPLICATIONS
ROUGH SET THEORY FOR INTELLIGENT INDUSTRIAL APPLICATIONS Zdzisław Pawlak Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, Poland, e-mail: zpw@ii.pw.edu.pl ABSTRACT Application
More informationarxiv: v1 [cs.lo] 16 Jul 2017
SOME IMPROVEMENTS IN FUZZY TURING MACHINES HADI FARAHANI arxiv:1707.05311v1 [cs.lo] 16 Jul 2017 Department of Computer Science, Shahid Beheshti University, G.C, Tehran, Iran h farahani@sbu.ac.ir Abstract.
More informationMath 105A HW 1 Solutions
Sect. 1.1.3: # 2, 3 (Page 7-8 Math 105A HW 1 Solutions 2(a ( Statement: Each positive integers has a unique prime factorization. n N: n = 1 or ( R N, p 1,..., p R P such that n = p 1 p R and ( n, R, S
More informationDiscovery of Pseudo-Independent Models from Data
Discovery of Pseudo-Independent Models from Data Yang Xiang, University of Guelph, Canada June 4, 2004 INTRODUCTION Graphical models such as Bayesian networks (BNs) and decomposable Markov networks (DMNs)
More informationData Warehousing & Data Mining
13. Meta-Algorithms for Classification Data Warehousing & Data Mining Wolf-Tilo Balke Silviu Homoceanu Institut für Informationssysteme Technische Universität Braunschweig http://www.ifis.cs.tu-bs.de 13.
More informationLearning Classification with Auxiliary Probabilistic Information Quang Nguyen Hamed Valizadegan Milos Hauskrecht
Learning Classification with Auxiliary Probabilistic Information Quang Nguyen Hamed Valizadegan Milos Hauskrecht Computer Science Department University of Pittsburgh Outline Introduction Learning with
More informationFeature Selection with Fuzzy Decision Reducts
Feature Selection with Fuzzy Decision Reducts Chris Cornelis 1, Germán Hurtado Martín 1,2, Richard Jensen 3, and Dominik Ślȩzak4 1 Dept. of Mathematics and Computer Science, Ghent University, Gent, Belgium
More informationOutlier Detection Using Rough Set Theory
Outlier Detection Using Rough Set Theory Feng Jiang 1,2, Yuefei Sui 1, and Cungen Cao 1 1 Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences,
More informationNon-impeding Noisy-AND Tree Causal Models Over Multi-valued Variables
Non-impeding Noisy-AND Tree Causal Models Over Multi-valued Variables Yang Xiang School of Computer Science, University of Guelph, Canada Abstract To specify a Bayesian network (BN), a conditional probability
More informationTutorial 6. By:Aashmeet Kalra
Tutorial 6 By:Aashmeet Kalra AGENDA Candidate Elimination Algorithm Example Demo of Candidate Elimination Algorithm Decision Trees Example Demo of Decision Trees Concept and Concept Learning A Concept
More informationInternational Journal of Approximate Reasoning
International Journal of Approximate Reasoning 53 (2012) 988 1002 Contents lists available at SciVerse ScienceDirect International Journal of Approximate Reasoning journal homepage:www.elsevier.com/locate/ijar
More informationUncertain Logic with Multiple Predicates
Uncertain Logic with Multiple Predicates Kai Yao, Zixiong Peng Uncertainty Theory Laboratory, Department of Mathematical Sciences Tsinghua University, Beijing 100084, China yaok09@mails.tsinghua.edu.cn,
More informationOn Improving the k-means Algorithm to Classify Unclassified Patterns
On Improving the k-means Algorithm to Classify Unclassified Patterns Mohamed M. Rizk 1, Safar Mohamed Safar Alghamdi 2 1 Mathematics & Statistics Department, Faculty of Science, Taif University, Taif,
More informationRough Sets and Conflict Analysis
Rough Sets and Conflict Analysis Zdzis law Pawlak and Andrzej Skowron 1 Institute of Mathematics, Warsaw University Banacha 2, 02-097 Warsaw, Poland skowron@mimuw.edu.pl Commemorating the life and work
More informationROUGH set methodology has been witnessed great success
IEEE TRANSACTIONS ON FUZZY SYSTEMS, VOL. 14, NO. 2, APRIL 2006 191 Fuzzy Probabilistic Approximation Spaces and Their Information Measures Qinghua Hu, Daren Yu, Zongxia Xie, and Jinfu Liu Abstract Rough
More informationDecision Tree Learning and Inductive Inference
Decision Tree Learning and Inductive Inference 1 Widely used method for inductive inference Inductive Inference Hypothesis: Any hypothesis found to approximate the target function well over a sufficiently
More informationClassification of Voice Signals through Mining Unique Episodes in Temporal Information Systems: A Rough Set Approach
Classification of Voice Signals through Mining Unique Episodes in Temporal Information Systems: A Rough Set Approach Krzysztof Pancerz, Wies law Paja, Mariusz Wrzesień, and Jan Warcho l 1 University of
More informationGroup Decision Making Using Comparative Linguistic Expression Based on Hesitant Intuitionistic Fuzzy Sets
Available at http://pvamu.edu/aam Appl. Appl. Math. ISSN: 932-9466 Vol. 0, Issue 2 December 205), pp. 082 092 Applications and Applied Mathematics: An International Journal AAM) Group Decision Making Using
More informationA Generalized Quantum-Inspired Evolutionary Algorithm for Combinatorial Optimization Problems
A Generalized Quantum-Inspired Evolutionary Algorithm for Combinatorial Optimization Problems Julio M. Alegría 1 julio.alegria@ucsp.edu.pe Yván J. Túpac 1 ytupac@ucsp.edu.pe 1 School of Computer Science
More informationDecision Tree Learning
Decision Tree Learning Berlin Chen Department of Computer Science & Information Engineering National Taiwan Normal University References: 1. Machine Learning, Chapter 3 2. Data Mining: Concepts, Models,
More informationCSC Linear Programming and Combinatorial Optimization Lecture 8: Ellipsoid Algorithm
CSC2411 - Linear Programming and Combinatorial Optimization Lecture 8: Ellipsoid Algorithm Notes taken by Shizhong Li March 15, 2005 Summary: In the spring of 1979, the Soviet mathematician L.G.Khachian
More information3. DIFFERENT MODEL TYPES
3-1 3. DIFFERENT MODEL TYPES It is important for us to fully understand a physical problem before we can select a solution strategy for it. Models are convenient tools that enhance our understanding and
More information