Granular Computing: Granular Classifiers and Missing Values
Lech Polkowski 1,2 and Piotr Artiemjew 2
1 Polish-Japanese Institute of Information Technology, Koszykowa str. 86, Warsaw, Poland
2 Department of Mathematics and Computer Science, University of Warmia and Mazury, Zolnierska 14, Olsztyn, Poland
polkow@pjwstk.edu.pl; artem@matman.uwm.edu.pl

Abstract: Granular Computing is a paradigm devoted to the study of how to compute with granules of knowledge, i.e., collective objects formed from individual objects by means of a similarity measure. The idea of granulation was put forth by Lotfi Zadeh: granulation is inculcated in fuzzy set theory by the very definition of a fuzzy set, and inverse values of fuzzy membership functions are elementary forms of granules. Similarly, rough sets admit granules defined naturally as classes of indiscernibility relations; the search for more flexible granules has led to granules based on blocks (Grzymala-Busse), templates (H. S. Nguyen), rough inclusions (Polkowski, Skowron), and tolerance or similarity relations and, more generally, binary relations (T. Y. Lin, Y. Y. Yao). Rough inclusions establish a form of similarity relations that are reflexive but not necessarily symmetric; in the applications presented in this work, we restrict ourselves to symmetric rough inclusions based on the set DIS(u, v) = {a ∈ A : a(u) ≠ a(v)} of attributes discerning between given objects u, v, without any additional parameters. Our rough inclusions are induced in their basic forms in a unified framework of continuous t-norms; in this work we apply the rough inclusion µ_L induced from the Łukasiewicz t-norm L(x, y) = max{0, x + y − 1} by means of the formula g(|DIS(u, v)|/|A|) = |IND(u, v)|/|A|, where g is the function that occurs in the functional representation of L and IND(u, v) = A \ DIS(u, v). Granules of knowledge induced by rough inclusions are formed as neighborhoods of given radii of objects by means of the class operator of mereology (see below).
L. Polkowski, in his feature talks at the 2005 and 2006 IEEE GrC conferences, put forth the hypothesis that similarity of objects in a granule should lead to closeness of sufficiently many attribute values on objects in the granule, and thus averaging, in a sense, the values of attributes on objects in a granule should lead to a new data set, the granular one, which should preserve the information encoded in the original data set to a satisfactory degree. This hypothesis is borne out in this work by tests on real data sets. We also address the problem of missing values in data sets; this problem has been addressed within rough set theory by many authors, e.g., Grzymala-Busse, Kryszkiewicz, and Rybiński. We propose a novel approach to this problem: an object with missing values is absorbed in a granule and takes part in determining a granular object; then, at the classification stage, objects with missing values are matched against the closest granular objects. We present details of this approach along with tests on real data sets. This paper is a companion to [19], where theoretical principles of granule formation are emphasized.

Index Terms: Granulation of knowledge, Rough sets, Rough inclusions, Granular decision systems, Missing values.

I. INTRODUCTION: ROUGH SETS

Knowledge is represented in this work along the lines of rough set theory [13], i.e., the basic object is an information system (U, A), where U is a set of objects and A is a set of attributes; each attribute a ∈ A is a mapping a : U → V_a from U into the value set V_a of a. Knowledge is encoded in this setting in the family of indiscernibility relations IND = {ind(a) : a ∈ A}, where ind(a) = {(u, v) : a(u) = a(v)}. For any set B ⊆ A, the B-indiscernibility relation is ind(B) = ∩{ind(a) : a ∈ B}; classes [u]_B = {v ∈ U : (u, v) ∈ ind(B)} of B-indiscernibility form B-elementary granules of knowledge; their unions are B-granules of knowledge.
A decision system is a triple (U, A, d), where the decision d : U → V_d is not in A; reasoning about d is carried out by means of descriptors; a descriptor is a formula of the form (a = v), where v ∈ V_a. From descriptors, formulas are formed by means of the sentential connectives ∨, ∧, ¬, ⇒. The meaning of a descriptor (a = v) is [a = v] = {u ∈ U : a(u) = v}, and the meaning is extended by recursion: [α ∨ β] = [α] ∪ [β], [α ∧ β] = [α] ∩ [β], [¬α] = U \ [α].
A decision rule is a descriptor formula of the form ⋀_{a∈B}(a = v_a) ⇒ (d = v); it is true when [⋀_{a∈B}(a = v_a)] ⊆ [d = v]; otherwise it is partially true; see, e.g., [14] for a deeper discussion. A set of decision rules is a decision algorithm; when applied to the classification of new objects, it is also called a classifier. Inducing classifiers of a satisfactory quality is a problem studied intensively in rough set theory; see, e.g., [31], where three main kinds of classifiers are distinguished: minimal, i.e., consisting of the minimum possible number of descriptors describing decision classes in the universe; exhaustive, i.e., consisting of all possible rules; and satisfactory, i.e., containing rules tailored to a specific use. Classifiers are evaluated globally with respect to their ability to properly classify objects, usually by error, i.e., the ratio of the number of incorrectly classified objects to the number of test objects; by total accuracy, the ratio of the number of correctly classified cases to the number of recognized cases; and by total coverage, i.e., the ratio of the number of recognized test cases to the number of test cases. Minimum-size algorithms include the LEM2 algorithm by Grzymala-Busse, see, e.g., [5], [3], and the covering algorithm in the RSES package [28]; exhaustive algorithms include, e.g., the LERS system due to Grzymala-Busse [4] and systems based on discernibility matrices and Boolean reasoning according to Skowron, see, e.g., [26], [27], [1], [25], implemented in the RSES package [28]. Minimal consistent sets of rules were introduced in [29]. Further developments include dynamic rules, approximate rules, local rules, and relevant rules [1]. Rough set based classification algorithms, especially those implemented in the RSES system [28], were discussed extensively in [2].
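As a minimal sketch of the three global evaluation measures just described (the function and parameter names are illustrative, not taken from the paper or the RSES system):

```python
def total_accuracy(n_correct, n_recognized):
    """Ratio of correctly classified cases to recognized cases."""
    return n_correct / n_recognized

def total_coverage(n_recognized, n_test):
    """Ratio of recognized test cases to all test cases."""
    return n_recognized / n_test

def error(n_correct, n_test):
    """Fraction of test objects not classified correctly."""
    return 1 - n_correct / n_test
```

Note that accuracy is taken relative to recognized cases only, while error and coverage are relative to the whole test set.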
An important class of methods for classifier induction are those based on similarity or analogy reasoning; most generally, this method of reasoning assigns to an object u the value of an attribute a from the knowledge of values of a on a set N(u) of objects whose elements are selected on the basis of a similarity relation, usually, but not always, based on an appropriate metric. An extensive and deep study of algorithms based on similarity relations is [25]. A realization of the analogy-based reasoning idea is, e.g., the k-nearest neighbors (k-nn) method, see, e.g., [7], in which, for a fixed number k and a given test object u, the value a(u) is assigned from the values of a at the k objects nearest to u in the training set. Finding nearest objects is based on some similarity measure among objects, which in practice is a metric. An extensive study of this topic is given in [34].

II. GRANULES OF KNOWLEDGE

In addition to traditional granules based on indiscernibility of objects, new forms of granules have been searched for; in many works, see, e.g., [11], [24], [30], [35], [36], granulation based on similarity relations and, in general, on binary relations was studied along with applications to concept approximation. Our approach is based on the method proposed and studied in [21], [15], [16], [17], [18], which employs ideas of mereology and uses rough inclusions as the main tool. We briefly indicate the main ideas and facts relevant to this approach.

A. Rough inclusions

A rough inclusion is a relation µ ⊆ U × U × [0, 1] which satisfies the following requirements, relative to a given irreflexive and transitive part relation π on a set U:
1. µ(x, y, 1) ⟺ x π y or x = y;
2. µ(x, y, 1) ⟹ [µ(z, x, r) ⟹ µ(z, y, r)];
3. µ(x, y, r) and s < r ⟹ µ(x, y, s).
Condition 1 states that on U an exact decomposition into parts π is given and that µ extends this exact scheme into an approximate one; the exact scheme is a skeleton along which approximate reasoning is carried out.
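The k-nn idea can be sketched under an attribute-disagreement metric; the dict-based object encoding and the function names are assumptions for illustration, not the construction of [34]:

```python
from collections import Counter

def knn_classify(u, training, k, attributes, decision):
    """Assign a decision to u by majority vote among its k nearest
    training objects, with distance measured as the fraction of
    attributes on which two objects disagree."""
    def dist(v):
        return sum(u[a] != v[a] for a in attributes) / len(attributes)
    nearest = sorted(training, key=dist)[:k]
    return Counter(v[decision] for v in nearest).most_common(1)[0][0]
```

Any metric on objects could replace `dist`; the disagreement fraction is chosen here because it matches the similarity measure used throughout this paper.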
We apply here the rough inclusion µ_L induced by the t-norm

L(x, y) = max{0, x + y − 1} (1)

due to Jan Łukasiewicz; see, e.g., [6] for a discussion of t-norms. This t-norm admits a functional characterization,

L(x, y) = g_L(f_L(x) + f_L(y)), (2)

where f_L(x) = 1 − x = g_L(x); see, e.g., [14]. In order to define µ_L, we let

µ_L(u, v, r) ⟺ g_L(|DIS(u, v)|/|A|) ≥ r, (3)

where DIS(u, v) = {a ∈ A : a(u) ≠ a(v)} and its complement is IND(u, v) = A \ DIS(u, v). For the Łukasiewicz t-norm, µ_L is of the form

µ_L(u, v, r) ⟺ |DIS(u, v)|/|A| ≤ 1 − r, (4)

or, dually,

µ_L(u, v, r) ⟺ |IND(u, v)|/|A| ≥ r. (5)
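The rough inclusion µ_L of formulas (3)-(5) can be sketched as follows, assuming objects are encoded as dicts from attribute names to values (an illustrative encoding, not prescribed by the paper):

```python
def dis(u, v, attributes):
    """DIS(u, v): the set of attributes on which u and v differ."""
    return {a for a in attributes if u[a] != v[a]}

def mu_L(u, v, r, attributes):
    """mu_L(u, v, r) holds iff |IND(u, v)| / |A| >= r,
    i.e., at least the fraction r of attributes agree on u and v."""
    ind_size = len(attributes) - len(dis(u, v, attributes))
    return ind_size / len(attributes) >= r
```

By formula (4), the same test could equivalently be written as `len(dis(u, v, attributes)) / len(attributes) <= 1 - r`.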
B. Granules based on rough inclusions

The class operator Cls, informally, forms, for a property F of objects, the collection of all objects in U that have the property F; a formal description is given in the companion paper [19]. For an object u and a real number r ∈ [0, 1], we define the granule g_µ(u, r) about u of the radius r, relative to µ, by letting

g_µ(u, r) = Cls F(u, r), (6)

where the property F(u, r) is satisfied by an object v if and only if µ(v, u, r) holds. It was shown, see, e.g., [15], Theorem 4, that in the case of µ_L,

v ∈ g_{µ_L}(u, r) ⟺ µ_L(v, u, r). (7)

Property (7) allows for representing the granule g_{µ_L}(u, r) as the list of those v for which µ_L(v, u, r) holds. Granules in this work are defined with respect to the rough inclusion µ_L, and thus the granule g_{µ_L}(u, r) consists of objects v for which |IND(u, v)|/|A| ≥ r, i.e., at least the fraction r of attributes agree on u and v; one may regard this measure as the reduced Hamming distance on objects in U.

III. GRANULAR DECISION SYSTEMS

The idea of a granular decision system was posed in [17]; for a given information system (U, A), a rough inclusion µ, and r ∈ [0, 1], the new universe U^G_{r,µ} is given. We apply a strategy G to choose a covering Cov^G_{r,µ} of the universe U by granules from U^G_{r,µ}. We apply a strategy S in order to assign the value a*(g) of each attribute a ∈ A to each granule g ∈ Cov^G_{r,µ}: a*(g) = S({a(u) : u ∈ g}). The granular counterpart to the information system (U, A) is a tuple (U^G_{r,µ}, G, S, {a* : a ∈ A}); analogously, we define granular counterparts to decision systems by adding the factored decision d*. The heuristic principle that objects similar with respect to conditional attributes in the set A should also reveal similar (i.e., close) decision values, and that, therefore, granular counterparts to decision systems should lead to classifiers satisfactorily close in quality to those induced from the original decision systems, was stated in [17].
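The granule of property (7) and the granular counterpart just described (random selection of granules as strategy G, majority voting as strategy S) admit a direct sketch; the dict encoding of objects and the helper names are assumptions for illustration:

```python
import random
from collections import Counter

def granule(u, universe, r, attributes):
    """g_{mu_L}(u, r): all v agreeing with u on at least the fraction r
    of attributes."""
    def ind_fraction(v):
        return sum(u[a] == v[a] for a in attributes) / len(attributes)
    return [v for v in universe if ind_fraction(v) >= r]

def granular_system(universe, r, attributes, seed=0):
    """Build the granular counterpart: choose granule centers at random
    until the chosen granules cover the universe (strategy G), and factor
    each attribute over a granule by majority voting (strategy S)."""
    rng = random.Random(seed)
    remaining = list(universe)
    granular_objects = []
    while remaining:
        center = rng.choice(remaining)
        g = granule(center, universe, r, attributes)
        # Factored object: the most frequent value of each attribute on g.
        granular_objects.append(
            {a: Counter(v[a] for v in g).most_common(1)[0][0]
             for a in attributes})
        remaining = [v for v in remaining if v not in g]
    return granular_objects
```

Since every center belongs to its own granule, the loop terminates; ties in the majority vote are here broken by first occurrence rather than at random, a simplification of the strategy used in the experiments below.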
IV. SELECTED TEST RESULTS

For space considerations, only a few tests can be included here. We have chosen the simplest possible rough inclusion µ_L, without any enhancements. For any granule g and any attribute b in the set A ∪ {d} of attributes, the value of b at the granule g has been estimated by means of the majority voting strategy, and ties have been resolved at random; we select coverings by random choice of granules. As well-established algorithms for classifier induction, we select the RSES exhaustive algorithm, see [28], and the LEM2 algorithm, with p = .5, see [5], [28].

[TABLE I. Heart data set. Columns: r (granule radius), tst (test sample size), trn (training sample size), rulex (number of rules with exhaustive algorithm), rullem (number of rules with LEM2), aex (total accuracy with exhaustive algorithm), cex (total coverage with exhaustive algorithm), alem (total accuracy with LEM2), clem (total coverage with LEM2); the radius nil denotes the non-granulated case.]

A. Train and test 1:1 with Heart Disease data set (Cleveland)

B. Heart disease data set

Table I gives results of experiments with the Heart Disease data set (Cleveland). The procedure applied has been train-and-test in the ratio 1:1, i.e., the rules have been trained on 50 percent of the data and tested on the remaining 50 percent. The training set has been granulated at all distinct radii, and granular systems have been formed by means of random choice of coverings. Rules induced by either the exhaustive or the LEM2 algorithm on the granulated training set have been tested on the test set, and the results have been compared with classification results given on the test set by rule sets induced from the non-granulated training set. The strategy S applied in determining attribute values on granules has been majority voting with random resolution of ties.
1) Conclusions for Heart data sets: In the case of the exhaustive algorithm, accuracy falls within 0.27 (27 percent) of the value with the original data set, and coverage within of the values for the original data set, at the radius of , where the object set size reduction is 97.8 percent and the rule set size reduction is 99.5 percent. Accuracy falls within an error of 11.5 percent of the original value from the radius of on, where the reduction in object set size is 51.2 percent and the reduction in rule set size is 58.3 percent; the accuracy error is less than 3 percent
from r = on, with maximal coverage of 1.0, when the reduction in object number is 20.4 percent and in rule set size 18 percent. The LEM2 algorithm achieves with granular systems an error in accuracy of less than (11.5 percent) and an error in coverage of less than 0.02 (0.4 percent) from the radius of on, with a reduction in object set size of 51 percent and a reduction in rule set size of 78 percent.

[TABLE II. 10-fold CV; Pima; exhaustive algorithm. Columns: r (radius), macc (mean accuracy), mcov (mean coverage), mrules (mean rule number), mtrn (mean size of training set); the radius nil denotes the non-granulated case.]

[TABLE III. 10-fold CV; Pima; LEM2 algorithm. Columns as in Table II.]

C. 10-fold cross-validation with Pima Indians Diabetes data set

A parallel study has been performed on the Pima Indians Diabetes data set [33], and the test has been carried out with 10-fold cross-validation [7]. Results are reported in Tables II and III.

1) Conclusions for CV-10 on Pima Indians Diabetes data set: For the exhaustive algorithm, accuracy in the granular case is 95.4 percent of accuracy in the non-granular case from the radius of .25, with a reduction in size of the training set of 48 percent, and from the radius of .5 on, the difference is less than 3 percent. The difference in coverage is less than .4 percent from r = .25 on, where the reduction in training set size is 82.5 percent. For LEM2, accuracy in both cases differs by less than 1 percent from r = .25 on, and it is better in the granular case from r = .125 on; coverage is better in the granular case from r = .375 on.

V. MISSING VALUES

An information/decision system is incomplete in case some values of conditional attributes from A are not known. Analysis of systems with missing values requires a decision on how to treat missing values; Grzymala-Busse in his work [5] analyzes nine such methods, among them: 4. assigning all possible values to the missing location; 9. treating the unknown value as a new valid value.
Results in [5] indicate that methods 4 and 9 perform very well among all nine methods. In this work we consider and adopt these two methods, i.e., 4 and 9. We will use the symbol ∗, commonly used for denoting the missing value; under method 4, ∗ is a "don't care" symbol, meaning that any value of the respective attribute can be substituted for ∗, thus ∗ = v for each value v of the attribute; under method 9, ∗ is a new value in its own right, i.e., if ∗ = v then v can only be ∗. Our procedure for treating missing values is based on the granular structure (U^G_{r,µ}, G, S, {a* : a ∈ A}); the strategy S is majority voting, i.e., for each attribute a, the value a*(g) is the most frequent of the values in {a(u) : u ∈ g}. The strategy G consists in random selection of granules for a covering. For an object u with the value ∗ at an attribute a, and a granule g = g(v, r) ∈ U^G_{r,µ}, the question whether u is included in g is resolved according to the adopted strategy of treating ∗: in case ∗ = don't care, the value ∗ is regarded as identical with any value of a, hence |IND(u, v)| is automatically increased by 1, which increases the granule; in case ∗ = ∗, the granule size is decreased. Assuming that ∗ is sparse in the data, majority voting on g would produce values of a distinct from ∗ in most cases; nevertheless, the value ∗ may appear in new objects g*, and then, in the process of classification, such a value is repaired by means of the granule closest to g* with respect to the rough inclusion µ_L, in accordance with the chosen method for treating ∗. In plain words, objects with missing values are, in a sense, absorbed by granules close to them, and missing values are replaced with the most frequent values in objects collected in the granule; in this way, method 4 or 9 of [5] is combined with the idea of a frequent value in a novel way.
We have thus four possible strategies:
Strategy A: in building granules, ∗ = don't care; in repairing values of ∗, ∗ = don't care;
Strategy B: in building granules, ∗ = don't care; in repairing values of ∗, ∗ = ∗;
Strategy C: in building granules, ∗ = ∗; in repairing values of ∗, ∗ = don't care;
Strategy D: in building granules, ∗ = ∗; in repairing values of ∗, ∗ = ∗.
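The effect of the two treatments of ∗ on the count |IND(u, v)| that drives granule membership can be sketched as follows (illustrative dict encoding; `'*'` stands for the missing-value symbol ∗, and the function name is an assumption):

```python
MISSING = '*'

def ind_size(u, v, attributes, star_is_dont_care):
    """|IND(u, v)| when a missing value is treated either as 'don't care'
    (it matches any value, enlarging granules) or as a value in its own
    right (it matches only another missing entry, shrinking granules)."""
    count = 0
    for a in attributes:
        if MISSING in (u[a], v[a]):
            # A missing entry on either side: agreement depends on strategy.
            count += 1 if (star_is_dont_care or u[a] == v[a]) else 0
        elif u[a] == v[a]:
            count += 1
    return count
```

Strategies A and B would call this with `star_is_dont_care=True` when building granules, and strategies C and D with `star_is_dont_care=False`; the second flag choice governs the repairing step analogously.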
[TABLE IV. Strategy A for missing values. 10-fold CV; Pima; exhaustive algorithm. Columns: r (radius), macc (mean accuracy), mcov (mean coverage), mrules (mean rule number), mtrn (mean size of granular training set as fraction of the original training set); the radius nil denotes the non-granulated case.]

[TABLE V. Strategy B for missing values. 10-fold CV; Pima; exhaustive algorithm. Columns as in Table IV.]

[TABLE VI. Strategy C for missing values. 10-fold CV; Pima; exhaustive algorithm. Columns as in Table IV.]

[TABLE VII. Strategy D for missing values. 10-fold CV; Pima; exhaustive algorithm. Columns as in Table IV.]

[TABLE VIII. Average number of ∗ values in granular systems. 10-fold CV; Pima; exhaustive algorithm. Columns: r (radius), ma (mean value for A), mb (mean value for B), mc (mean value for C), md (mean value for D).]

A. Results of tests with perturbed data set

We record in Tables IV-VII the test results with the Pima Indians Diabetes data set [33] in which 10 percent of attribute values, chosen at random, have been replaced with the value ∗. The exhaustive algorithm of the RSES system [28] has been used as the rule-inducing algorithm; 10-fold cross-validation (CV-10), see, e.g., [7], has been applied in testing.

1) Conclusions on test results: In the case of the perturbed Pima Indians Diabetes data set, Strategy A attains an accuracy value better than 97 percent of, and a coverage value greater than or equal to, the values in the non-perturbed case from the radius of .625 on. With Strategy B, accuracy is within 94 percent of, and coverage not smaller than, the values in the non-perturbed case from the radius of .625 on. Strategy C yields accuracy within 96.3 percent of accuracy in the non-perturbed case from the radius of .625, and within 95 percent from the radius of .250; coverage is within percent from the radius of .250. Strategy D gives results slightly better than C at the same radii. Results for C and D are better than results for A or B. We conclude that what is essential for the results of classification is the strategy of treating the missing value ∗ as ∗ = ∗ in building granules, common to strategies C and D; the repairing strategy has almost no effect: C and D differ with respect to this strategy, but results for accuracy and coverage in cases C and D differ very slightly. Let us notice that strategies C and D cope with a larger number of ∗ values to be repaired, as Table VIII shows.

B. Results of tests with the real data set Hepatitis with missing values

We record here results of tests with the Hepatitis data set [33], with 155 objects, 20 attributes, and 167 missing values. We apply the exhaustive algorithm of the RSES system [28] and 5-fold cross-validation (CV-5). Below we give averaged results for strategies A, B, C, and D. As before, the radius nil indicates the non-granulated case. First, we record the number of missing values that have fallen into the training and test sets, respectively, in Table IX. Next, we record in Table X the average number of ∗ values that fall into the granulated data set (i.e., the number of ∗ to be repaired) depending on the strategy applied. Now, we record in Tables XI-XIV the results of classification for Hepatitis with the exhaustive algorithm and CV-5 cross-validation for strategies A, B, C, and D.
[TABLE IX. Average number of ∗ values in the training as well as the test set. CV-5; Hepatitis; exhaustive algorithm. Columns: fn (fold no.), tst-nil (no. of ∗ in test set), trn-nil (no. of ∗ in training set).]

[TABLE X. Average number of ∗ values in granular systems. CV-5; Hepatitis; exhaustive algorithm. Columns: r (radius), ma (mean value for A), mb (mean value for B), mc (mean value for C), md (mean value for D).]

[TABLE XI. Strategy A. CV-5; Hepatitis; exhaustive algorithm. Columns: r (radius), macc (mean accuracy), mcov (mean coverage), mrul (mean number of rules), mtrn (mean training granular sample size).]

[TABLE XII. Strategy B. CV-5; Hepatitis; exhaustive algorithm. Columns as in Table XI.]

[TABLE XIII. Strategy C. CV-5; Hepatitis; exhaustive algorithm. Columns as in Table XI.]

[TABLE XIV. Strategy D. CV-5; Hepatitis; exhaustive algorithm. Columns as in Table XI.]
1) Conclusions for Hepatitis data set: Results for particular strategies, compared radius by radius, show that the ranking of strategies is C > D > B > A; thus, strategy C is the most effective, with D giving slightly worse results. As with the perturbed Pima Indians Diabetes set, strategies C and D cope with a larger number of ∗ values in the test set. In [3], the Hepatitis data set was studied with the naive LERS algorithm and with the new LERS algorithm augmented with parameters like strength and specificity of rules; the results in the case of method 9 were an accuracy of for new LERS and for naive LERS; the best result obtained by our approach with strategy C implementing method 9 is , i.e., it falls almost in the middle between the two results for LERS.

VI. CONCLUSION

The results of tests reported in this work bear out the hypothesis that granulated data sets preserve information allowing for satisfactory classification. Also, the novel approach to the problem of data with missing values has proved to be very effective. Further studies will lead to novel algorithms for rule induction based on granules of knowledge.

VII. ACKNOWLEDGEMENT

The authors acknowledge the service rendered to the rough set community by Professors Skowron and Grzymala-Busse by sharing their algorithms to be used in this work.

REFERENCES

[1] J. G. Bazan, A comparison of dynamic and non-dynamic rough set methods for extracting laws from decision tables, in: Rough Sets in Knowledge Discovery 1, L. Polkowski and A. Skowron, Eds., Physica Verlag: Heidelberg, 1998.
[2] J. G. Bazan, Hung Son Nguyen, Sinh Hoa Nguyen, P. Synak and J. Wróblewski, Rough set algorithms in classification problems, in: Rough Set Methods and Applications, L. Polkowski, S. Tsumoto and T. Y. Lin, Eds., Physica Verlag: Heidelberg, 2000.
[3] J. W. Grzymala-Busse and Ming Hu, A comparison of several approaches to missing attribute values in Data Mining, in: Proceedings RSCTC 2000, LNAI 2005, Springer Verlag: Berlin, 2000.
[4] J. W.
Grzymala-Busse, LERS, a system for learning from examples based on rough sets, in: Intelligent Decision Support: Handbook of Advances and Applications of the Rough Sets Theory, R. Słowiński, Ed., Kluwer: Dordrecht, 1992.
[5] J. W. Grzymala-Busse, Data with missing attribute values: Generalization of indiscernibility relation and rule induction, Transactions on Rough Sets I, Springer Verlag: Berlin, 2004.
[6] P. Hájek, Metamathematics of Fuzzy Logic, Kluwer: Dordrecht.
[7] T. Hastie, R. Tibshirani and J. Friedman, The Elements of Statistical Learning, Springer Verlag: New York.
[8] M. Kryszkiewicz, Rules in incomplete information systems, Information Sciences 113, 1999.
[9] M. Kryszkiewicz and H. Rybiński, Data mining in incomplete information systems from rough set perspective, in: Rough Set Methods and Applications, L. Polkowski, S. Tsumoto and T. Y. Lin, Eds., Physica Verlag: Heidelberg, 2000.
[10] S. Leśniewski, On the foundations of set theory, Topoi 2, 1982.
[11] T. Y. Lin, Granular computing: Examples, intuitions, and modeling, in: [22].
[12] Rough Neural Computing: Techniques for Computing with Words, S. K. Pal, L. Polkowski and A. Skowron, Eds., Springer Verlag: Berlin.
[13] Z. Pawlak, Rough Sets: Theoretical Aspects of Reasoning about Data, Kluwer: Dordrecht.
[14] L. Polkowski, Rough Sets: Mathematical Foundations, Physica Verlag: Heidelberg.
[15] L. Polkowski, Toward rough set foundations. Mereological approach (a plenary lecture), in: Proceedings RSCTC 2004, Uppsala, Sweden, LNAI 3066, Springer Verlag: Berlin, 2004.
[16] L. Polkowski, Rough-fuzzy-neurocomputing based on rough mereological calculus of granules, International Journal of Hybrid Intelligent Systems 2, 2005.
[17] L. Polkowski, Formal granular calculi based on rough inclusions (a feature talk), in: [22].
[18] L. Polkowski, A model of granular computing with applications (a feature talk), in: [23].
[19] L. Polkowski, The paradigm of granular computing: Foundations and applications, in these Proceedings.
[20] L. Polkowski and A. Skowron, Rough mereology: a new paradigm for approximate reasoning, International Journal of Approximate Reasoning 15(4), 1997.
[21] L. Polkowski and A. Skowron, Rough mereology: a new paradigm for approximate reasoning, International Journal of Approximate Reasoning 15(4), 1997.
[22] Proceedings of the IEEE 2005 Conference on Granular Computing, GrC 2005, Beijing, China, July 2005, IEEE Press.
[23] Proceedings of the IEEE 2006 Conference on Granular Computing, GrC 2006, Atlanta, USA, May 2006, IEEE Press.
[24] Qing Liu and Hui Sun, Theoretical study of granular computing, in: Proceedings RSKT 2006, Chongqing, China, LNAI 4062, Springer Verlag: Berlin, 2006.
[25] Sinh Hoa Nguyen, Regularity analysis and its applications in Data Mining, in: Rough Set Methods and Applications, L. Polkowski, S. Tsumoto and T. Y. Lin, Eds., Physica Verlag: Heidelberg, 2000.
[26] A. Skowron, Boolean reasoning for decision rules generation, in: Methodologies for Intelligent Systems,
J. Komorowski and Z. Ras, Eds., LNAI 689, Springer Verlag: Berlin, 1993.
[27] A. Skowron, Extracting laws from decision tables, Computational Intelligence: An International Journal 11(2), 1995.
[28] A. Skowron et al., RSES: A system for data analysis; available at http://logic.mimuw.edu.pl/~rses/
[29] A. Skowron and C. Rauszer, The discernibility matrices and functions in decision systems, in: Intelligent Decision Support: Handbook of Applications and Advances of the Rough Sets Theory, R. Słowiński, Ed., Kluwer: Dordrecht, 1992.
[30] A. Skowron and J. Stepaniuk, Information granules and rough neural computing, in: [12].
[31] J. Stefanowski, On rough set based approaches to induction of decision rules, in: Rough Sets in Knowledge Discovery 1, L. Polkowski and A. Skowron, Eds., Physica Verlag: Heidelberg, 1998.
[32] J. Stefanowski and A. Tsoukias, Incomplete information tables and rough classification, Computational Intelligence 17, 2001.
[33] mlearn/databases/
[34] A. Wojna, Analogy-based reasoning in classifier construction, Transactions on Rough Sets IV, LNCS 3700, Springer Verlag: Berlin, 2005.
[35] Y. Y. Yao, Information granulation and approximation in a decision-theoretic model of rough sets, in: [12].
[36] Y. Y. Yao, Perspectives of granular computing, in: [22].
More informationComputational Intelligence, Volume, Number, VAGUENES AND UNCERTAINTY: A ROUGH SET PERSPECTIVE. Zdzislaw Pawlak
Computational Intelligence, Volume, Number, VAGUENES AND UNCERTAINTY: A ROUGH SET PERSPECTIVE Zdzislaw Pawlak Institute of Computer Science, Warsaw Technical University, ul. Nowowiejska 15/19,00 665 Warsaw,
More informationROUGH set methodology has been witnessed great success
IEEE TRANSACTIONS ON FUZZY SYSTEMS, VOL. 14, NO. 2, APRIL 2006 191 Fuzzy Probabilistic Approximation Spaces and Their Information Measures Qinghua Hu, Daren Yu, Zongxia Xie, and Jinfu Liu Abstract Rough
More informationA version of rough mereology suitable for rough sets
A version of rough mereology suitable for rough sets Lech T. Polkowski Polish-Japanese Academy IT Koszykowa str. 86, 02-008 Warszawa, Poland email: lech.polkowski@pja.edu.pl;polkow@pjwstk.edu.pl Abstract.
More informationHigh Frequency Rough Set Model based on Database Systems
High Frequency Rough Set Model based on Database Systems Kartik Vaithyanathan kvaithya@gmail.com T.Y.Lin Department of Computer Science San Jose State University San Jose, CA 94403, USA tylin@cs.sjsu.edu
More informationRough Sets and Conflict Analysis
Rough Sets and Conflict Analysis Zdzis law Pawlak and Andrzej Skowron 1 Institute of Mathematics, Warsaw University Banacha 2, 02-097 Warsaw, Poland skowron@mimuw.edu.pl Commemorating the life and work
More informationGranularity, Multi-valued Logic, Bayes Theorem and Rough Sets
Granularity, Multi-valued Logic, Bayes Theorem and Rough Sets Zdzis law Pawlak Institute for Theoretical and Applied Informatics Polish Academy of Sciences ul. Ba ltycka 5, 44 000 Gliwice, Poland e-mail:zpw@ii.pw.edu.pl
More informationOn Probability of Matching in Probability Based Rough Set Definitions
2013 IEEE International Conference on Systems, Man, and Cybernetics On Probability of Matching in Probability Based Rough Set Definitions Do Van Nguyen, Koichi Yamada, and Muneyuki Unehara Department of
More informationRough Sets, Rough Relations and Rough Functions. Zdzislaw Pawlak. Warsaw University of Technology. ul. Nowowiejska 15/19, Warsaw, Poland.
Rough Sets, Rough Relations and Rough Functions Zdzislaw Pawlak Institute of Computer Science Warsaw University of Technology ul. Nowowiejska 15/19, 00 665 Warsaw, Poland and Institute of Theoretical and
More informationComputers and Mathematics with Applications
Computers and Mathematics with Applications 59 (2010) 431 436 Contents lists available at ScienceDirect Computers and Mathematics with Applications journal homepage: www.elsevier.com/locate/camwa A short
More informationRough Set Model Selection for Practical Decision Making
Rough Set Model Selection for Practical Decision Making Joseph P. Herbert JingTao Yao Department of Computer Science University of Regina Regina, Saskatchewan, Canada, S4S 0A2 {herbertj, jtyao}@cs.uregina.ca
More informationDrawing Conclusions from Data The Rough Set Way
Drawing Conclusions from Data The Rough et Way Zdzisław Pawlak Institute of Theoretical and Applied Informatics, Polish Academy of ciences, ul Bałtycka 5, 44 000 Gliwice, Poland In the rough set theory
More informationIndex. C, system, 8 Cech distance, 549
Index PF(A), 391 α-lower approximation, 340 α-lower bound, 339 α-reduct, 109 α-upper approximation, 340 α-upper bound, 339 δ-neighborhood consistent, 291 ε-approach nearness, 558 C, 443-2 system, 8 Cech
More informationOn Improving the k-means Algorithm to Classify Unclassified Patterns
On Improving the k-means Algorithm to Classify Unclassified Patterns Mohamed M. Rizk 1, Safar Mohamed Safar Alghamdi 2 1 Mathematics & Statistics Department, Faculty of Science, Taif University, Taif,
More informationENSEMBLES OF DECISION RULES
ENSEMBLES OF DECISION RULES Jerzy BŁASZCZYŃSKI, Krzysztof DEMBCZYŃSKI, Wojciech KOTŁOWSKI, Roman SŁOWIŃSKI, Marcin SZELĄG Abstract. In most approaches to ensemble methods, base classifiers are decision
More informationA Simple Implementation of the Stochastic Discrimination for Pattern Recognition
A Simple Implementation of the Stochastic Discrimination for Pattern Recognition Dechang Chen 1 and Xiuzhen Cheng 2 1 University of Wisconsin Green Bay, Green Bay, WI 54311, USA chend@uwgb.edu 2 University
More informationA Generalized Decision Logic in Interval-set-valued Information Tables
A Generalized Decision Logic in Interval-set-valued Information Tables Y.Y. Yao 1 and Qing Liu 2 1 Department of Computer Science, University of Regina Regina, Saskatchewan, Canada S4S 0A2 E-mail: yyao@cs.uregina.ca
More informationEasy Categorization of Attributes in Decision Tables Based on Basic Binary Discernibility Matrix
Easy Categorization of Attributes in Decision Tables Based on Basic Binary Discernibility Matrix Manuel S. Lazo-Cortés 1, José Francisco Martínez-Trinidad 1, Jesús Ariel Carrasco-Ochoa 1, and Guillermo
More informationRough sets: Some extensions
Information Sciences 177 (2007) 28 40 www.elsevier.com/locate/ins Rough sets: Some extensions Zdzisław Pawlak, Andrzej Skowron * Institute of Mathematics, Warsaw University, Banacha 2, 02-097 Warsaw, Poland
More informationOn the Structure of Rough Approximations
On the Structure of Rough Approximations (Extended Abstract) Jouni Järvinen Turku Centre for Computer Science (TUCS) Lemminkäisenkatu 14 A, FIN-20520 Turku, Finland jjarvine@cs.utu.fi Abstract. We study
More informationThree Discretization Methods for Rule Induction
Three Discretization Methods for Rule Induction Jerzy W. Grzymala-Busse, 1, Jerzy Stefanowski 2 1 Department of Electrical Engineering and Computer Science, University of Kansas, Lawrence, Kansas 66045
More informationIN the areas of machine learning, artificial intelligence, as
INTL JOURNAL OF ELECTRONICS AND TELECOMMUNICATIONS, 22, VOL. 58, NO., PP. 7 76 Manuscript received December 3, 2; revised March 22. DOI:.2478/v77-2--x Features Reduction Using Logic Minimization Techniques
More informationA PRIMER ON ROUGH SETS:
A PRIMER ON ROUGH SETS: A NEW APPROACH TO DRAWING CONCLUSIONS FROM DATA Zdzisław Pawlak ABSTRACT Rough set theory is a new mathematical approach to vague and uncertain data analysis. This Article explains
More informationSets with Partial Memberships A Rough Set View of Fuzzy Sets
Sets with Partial Memberships A Rough Set View of Fuzzy Sets T. Y. Lin Department of Mathematics and Computer Science San Jose State University San Jose, California 95192-0103 E-mail: tylin@cs.sjsu.edu
More informationHierarchical Structures on Multigranulation Spaces
Yang XB, Qian YH, Yang JY. Hierarchical structures on multigranulation spaces. JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY 27(6): 1169 1183 Nov. 2012. DOI 10.1007/s11390-012-1294-0 Hierarchical Structures
More informationRough Set Approaches for Discovery of Rules and Attribute Dependencies
Rough Set Approaches for Discovery of Rules and Attribute Dependencies Wojciech Ziarko Department of Computer Science University of Regina Regina, SK, S4S 0A2 Canada Abstract The article presents an elementary
More informationSome remarks on conflict analysis
European Journal of Operational Research 166 (2005) 649 654 www.elsevier.com/locate/dsw Some remarks on conflict analysis Zdzisław Pawlak Warsaw School of Information Technology, ul. Newelska 6, 01 447
More informationInternational Journal of Approximate Reasoning
International Journal of Approximate Reasoning 52 (2011) 231 239 Contents lists available at ScienceDirect International Journal of Approximate Reasoning journal homepage: www.elsevier.com/locate/ijar
More informationData Analysis - the Rough Sets Perspective
Data Analysis - the Rough ets Perspective Zdzisław Pawlak Institute of Computer cience Warsaw University of Technology 00-665 Warsaw, Nowowiejska 15/19 Abstract: Rough set theory is a new mathematical
More informationMining Approximative Descriptions of Sets Using Rough Sets
Mining Approximative Descriptions of Sets Using Rough Sets Dan A. Simovici University of Massachusetts Boston, Dept. of Computer Science, 100 Morrissey Blvd. Boston, Massachusetts, 02125 USA dsim@cs.umb.edu
More informationNaive Bayesian Rough Sets
Naive Bayesian Rough Sets Yiyu Yao and Bing Zhou Department of Computer Science, University of Regina Regina, Saskatchewan, Canada S4S 0A2 {yyao,zhou200b}@cs.uregina.ca Abstract. A naive Bayesian classifier
More informationARPN Journal of Science and Technology All rights reserved.
Rule Induction Based On Boundary Region Partition Reduction with Stards Comparisons Du Weifeng Min Xiao School of Mathematics Physics Information Engineering Jiaxing University Jiaxing 34 China ABSTRACT
More informationComparison of Rough-set and Interval-set Models for Uncertain Reasoning
Yao, Y.Y. and Li, X. Comparison of rough-set and interval-set models for uncertain reasoning Fundamenta Informaticae, Vol. 27, No. 2-3, pp. 289-298, 1996. Comparison of Rough-set and Interval-set Models
More informationResearch on Complete Algorithms for Minimal Attribute Reduction
Research on Complete Algorithms for Minimal Attribute Reduction Jie Zhou, Duoqian Miao, Qinrong Feng, and Lijun Sun Department of Computer Science and Technology, Tongji University Shanghai, P.R. China,
More informationTHE LOCALIZATION OF MINDSTORMS NXT IN THE MAGNETIC UNSTABLE ENVIRONMENT BASED ON HISTOGRAM FILTERING
THE LOCALIZATION OF MINDSTORMS NXT IN THE MAGNETIC UNSTABLE ENVIRONMENT BASED ON HISTOGRAM FILTERING Piotr Artiemjew Department of Mathematics and Computer Sciences, University of Warmia and Mazury, Olsztyn,
More informationROUGH SET THEORY FOR INTELLIGENT INDUSTRIAL APPLICATIONS
ROUGH SET THEORY FOR INTELLIGENT INDUSTRIAL APPLICATIONS Zdzisław Pawlak Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, Poland, e-mail: zpw@ii.pw.edu.pl ABSTRACT Application
More informationConcept Lattices in Rough Set Theory
Concept Lattices in Rough Set Theory Y.Y. Yao Department of Computer Science, University of Regina Regina, Saskatchewan, Canada S4S 0A2 E-mail: yyao@cs.uregina.ca URL: http://www.cs.uregina/ yyao Abstract
More informationRough operations on Boolean algebras
Rough operations on Boolean algebras Guilin Qi and Weiru Liu School of Computer Science, Queen s University Belfast Belfast, BT7 1NN, UK Abstract In this paper, we introduce two pairs of rough operations
More informationModeling the Real World for Data Mining: Granular Computing Approach
Modeling the Real World for Data Mining: Granular Computing Approach T. Y. Lin Department of Mathematics and Computer Science San Jose State University, San Jose, California 95192-0103 and Berkeley Initiative
More informationDialectics of Approximation of Semantics of Rough Sets
Dialectics of of Rough Sets and Deptartment of Pure Mathematics University of Calcutta 9/1B, Jatin Bagchi Road Kolkata-700029, India E-Mail: a.mani.cms@gmail.com Web: www.logicamani.in CLC/SLC 30th Oct
More informationFUZZY PARTITIONS II: BELIEF FUNCTIONS A Probabilistic View T. Y. Lin
FUZZY PARTITIONS II: BELIEF FUNCTIONS A Probabilistic View T. Y. Lin Department of Mathematics and Computer Science, San Jose State University, San Jose, California 95192, USA tylin@cs.sjsu.edu 1 Introduction
More informationPUBLICATIONS OF CECYLIA RAUSZER
PUBLICATIONS OF CECYLIA RAUSZER [CR1] Representation theorem for semi-boolean algebras I, Bull. Acad. Polon. Sci., Sér. Sci. Math. Astronom. Phys. 19(1971), 881 887. [CR2] Representation theorem for semi-boolean
More informationOn rule acquisition in incomplete multi-scale decision tables
*Manuscript (including abstract) Click here to view linked References On rule acquisition in incomplete multi-scale decision tables Wei-Zhi Wu a,b,, Yuhua Qian c, Tong-Jun Li a,b, Shen-Ming Gu a,b a School
More informationThe Fourth International Conference on Innovative Computing, Information and Control
The Fourth International Conference on Innovative Computing, Information and Control December 7-9, 2009, Kaohsiung, Taiwan http://bit.kuas.edu.tw/~icic09 Dear Prof. Yann-Chang Huang, Thank you for your
More informationRoman Słowiński. Rough or/and Fuzzy Handling of Uncertainty?
Roman Słowiński Rough or/and Fuzzy Handling of Uncertainty? 1 Rough sets and fuzzy sets (Dubois & Prade, 1991) Rough sets have often been compared to fuzzy sets, sometimes with a view to introduce them
More informationOn Proofs and Rule of Multiplication in Fuzzy Attribute Logic
On Proofs and Rule of Multiplication in Fuzzy Attribute Logic Radim Belohlavek 1,2 and Vilem Vychodil 2 1 Dept. Systems Science and Industrial Engineering, Binghamton University SUNY Binghamton, NY 13902,
More informationDiscovery of Concurrent Data Models from Experimental Tables: A Rough Set Approach
From: KDD-95 Proceedings. Copyright 1995, AAAI (www.aaai.org). All rights reserved. Discovery of Concurrent Data Models from Experimental Tables: A Rough Set Approach Andrzej Skowronl* and Zbigniew Suraj2*
More informationIterative Laplacian Score for Feature Selection
Iterative Laplacian Score for Feature Selection Linling Zhu, Linsong Miao, and Daoqiang Zhang College of Computer Science and echnology, Nanjing University of Aeronautics and Astronautics, Nanjing 2006,
More informationMathematical Approach to Vagueness
International Mathematical Forum, 2, 2007, no. 33, 1617-1623 Mathematical Approach to Vagueness Angel Garrido Departamento de Matematicas Fundamentales Facultad de Ciencias de la UNED Senda del Rey, 9,
More information2 WANG Jue, CUI Jia et al. Vol.16 no", the discernibility matrix is only a new kind of learning method. Otherwise, we have to provide the specificatio
Vol.16 No.1 J. Comput. Sci. & Technol. Jan. 2001 Investigation on AQ11, ID3 and the Principle of Discernibility Matrix WANG Jue (Ξ ±), CUI Jia ( ) and ZHAO Kai (Π Λ) Institute of Automation, The Chinese
More informationON SOME PROPERTIES OF ROUGH APPROXIMATIONS OF SUBRINGS VIA COSETS
ITALIAN JOURNAL OF PURE AND APPLIED MATHEMATICS N. 39 2018 (120 127) 120 ON SOME PROPERTIES OF ROUGH APPROXIMATIONS OF SUBRINGS VIA COSETS Madhavi Reddy Research Scholar, JNIAS Budhabhavan, Hyderabad-500085
More informationLearning Sunspot Classification
Fundamenta Informaticae XX (2006) 1 15 1 IOS Press Learning Sunspot Classification Trung Thanh Nguyen, Claire P. Willis, Derek J. Paddon Department of Computer Science, University of Bath Bath BA2 7AY,
More informationKnowledge Discovery. Zbigniew W. Ras. Polish Academy of Sciences, Dept. of Comp. Science, Warsaw, Poland
Handling Queries in Incomplete CKBS through Knowledge Discovery Zbigniew W. Ras University of orth Carolina, Dept. of Comp. Science, Charlotte,.C. 28223, USA Polish Academy of Sciences, Dept. of Comp.
More informationISSN Article. Discretization Based on Entropy and Multiple Scanning
Entropy 2013, 15, 1486-1502; doi:10.3390/e15051486 OPEN ACCESS entropy ISSN 1099-4300 www.mdpi.com/journal/entropy Article Discretization Based on Entropy and Multiple Scanning Jerzy W. Grzymala-Busse
More informationStatistical Model for Rough Set Approach to Multicriteria Classification
Statistical Model for Rough Set Approach to Multicriteria Classification Krzysztof Dembczyński 1, Salvatore Greco 2, Wojciech Kotłowski 1 and Roman Słowiński 1,3 1 Institute of Computing Science, Poznań
More informationAdditive Preference Model with Piecewise Linear Components Resulting from Dominance-based Rough Set Approximations
Additive Preference Model with Piecewise Linear Components Resulting from Dominance-based Rough Set Approximations Krzysztof Dembczyński, Wojciech Kotłowski, and Roman Słowiński,2 Institute of Computing
More informationResearch Article Special Approach to Near Set Theory
Mathematical Problems in Engineering Volume 2011, Article ID 168501, 10 pages doi:10.1155/2011/168501 Research Article Special Approach to Near Set Theory M. E. Abd El-Monsef, 1 H. M. Abu-Donia, 2 and
More informationRelationship between Loss Functions and Confirmation Measures
Relationship between Loss Functions and Confirmation Measures Krzysztof Dembczyński 1 and Salvatore Greco 2 and Wojciech Kotłowski 1 and Roman Słowiński 1,3 1 Institute of Computing Science, Poznań University
More informationNotes on Rough Set Approximations and Associated Measures
Notes on Rough Set Approximations and Associated Measures Yiyu Yao Department of Computer Science, University of Regina Regina, Saskatchewan, Canada S4S 0A2 E-mail: yyao@cs.uregina.ca URL: http://www.cs.uregina.ca/
More informationarxiv: v1 [cs.lo] 16 Jul 2017
SOME IMPROVEMENTS IN FUZZY TURING MACHINES HADI FARAHANI arxiv:1707.05311v1 [cs.lo] 16 Jul 2017 Department of Computer Science, Shahid Beheshti University, G.C, Tehran, Iran h farahani@sbu.ac.ir Abstract.
More informationThis article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and
This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution
More informationFoundations of Classification
Foundations of Classification J. T. Yao Y. Y. Yao and Y. Zhao Department of Computer Science, University of Regina Regina, Saskatchewan, Canada S4S 0A2 {jtyao, yyao, yanzhao}@cs.uregina.ca Summary. Classification
More informationNEAR GROUPS ON NEARNESS APPROXIMATION SPACES
Hacettepe Journal of Mathematics and Statistics Volume 414) 2012), 545 558 NEAR GROUPS ON NEARNESS APPROXIMATION SPACES Ebubekir İnan and Mehmet Ali Öztürk Received 26:08:2011 : Accepted 29:11:2011 Abstract
More informationOn the Relation of Probability, Fuzziness, Rough and Evidence Theory
On the Relation of Probability, Fuzziness, Rough and Evidence Theory Rolly Intan Petra Christian University Department of Informatics Engineering Surabaya, Indonesia rintan@petra.ac.id Abstract. Since
More informationENTROPIES OF FUZZY INDISCERNIBILITY RELATION AND ITS OPERATIONS
International Journal of Uncertainty Fuzziness and Knowledge-Based Systems World Scientific ublishing Company ENTOIES OF FUZZY INDISCENIBILITY ELATION AND ITS OEATIONS QINGUA U and DAEN YU arbin Institute
More informationCompenzational Vagueness
Compenzational Vagueness Milan Mareš Institute of information Theory and Automation Academy of Sciences of the Czech Republic P. O. Box 18, 182 08 Praha 8, Czech Republic mares@utia.cas.cz Abstract Some
More informationData Mining und Maschinelles Lernen
Data Mining und Maschinelles Lernen Ensemble Methods Bias-Variance Trade-off Basic Idea of Ensembles Bagging Basic Algorithm Bagging with Costs Randomization Random Forests Boosting Stacking Error-Correcting
More informationTolerance Approximation Spaces. Andrzej Skowron. Institute of Mathematics. Warsaw University. Banacha 2, Warsaw, Poland
Tolerance Approximation Spaces Andrzej Skowron Institute of Mathematics Warsaw University Banacha 2, 02-097 Warsaw, Poland e-mail: skowron@mimuw.edu.pl Jaroslaw Stepaniuk Institute of Computer Science
More informationEffect of Rule Weights in Fuzzy Rule-Based Classification Systems
506 IEEE TRANSACTIONS ON FUZZY SYSTEMS, VOL. 9, NO. 4, AUGUST 2001 Effect of Rule Weights in Fuzzy Rule-Based Classification Systems Hisao Ishibuchi, Member, IEEE, and Tomoharu Nakashima, Member, IEEE
More informationSemantic Rendering of Data Tables: Multivalued Information Systems Revisited
Semantic Rendering of Data Tables: Multivalued Information Systems Revisited Marcin Wolski 1 and Anna Gomolińska 2 1 Maria Curie-Skłodowska University, Department of Logic and Cognitive Science, Pl. Marii
More informationarxiv: v1 [cs.ai] 25 Sep 2012
Condition for neighborhoods in covering based rough sets to form a partition arxiv:1209.5480v1 [cs.ai] 25 Sep 2012 Abstract Hua Yao, William Zhu Lab of Granular Computing, Zhangzhou Normal University,
More informationData mining using Rough Sets
Data mining using Rough Sets Alber Sánchez 1 alber.ipia@inpe.br 1 Instituto Nacional de Pesquisas Espaciais, São José dos Campos, SP, Brazil Referata Geoinformatica, 2015 1 / 44 Table of Contents Rough
More informationRough Sets for Uncertainty Reasoning
Rough Sets for Uncertainty Reasoning S.K.M. Wong 1 and C.J. Butz 2 1 Department of Computer Science, University of Regina, Regina, Canada, S4S 0A2, wong@cs.uregina.ca 2 School of Information Technology
More informationAction Rule Extraction From A Decision Table : ARED
Action Rule Extraction From A Decision Table : ARED Seunghyun Im 1 and Zbigniew Ras 2,3 1 University of Pittsburgh at Johnstown, Department of Computer Science Johnstown, PA. 15904, USA 2 University of
More informationCRITERIA REDUCTION OF SET-VALUED ORDERED DECISION SYSTEM BASED ON APPROXIMATION QUALITY
International Journal of Innovative omputing, Information and ontrol II International c 2013 ISSN 1349-4198 Volume 9, Number 6, June 2013 pp. 2393 2404 RITERI REDUTION OF SET-VLUED ORDERED DEISION SYSTEM
More informationCMSC 422 Introduction to Machine Learning Lecture 4 Geometry and Nearest Neighbors. Furong Huang /
CMSC 422 Introduction to Machine Learning Lecture 4 Geometry and Nearest Neighbors Furong Huang / furongh@cs.umd.edu What we know so far Decision Trees What is a decision tree, and how to induce it from
More informationRough Set Approach for Generation of Classification Rules for Jaundice
Rough Set Approach for Generation of Classification Rules for Jaundice Sujogya Mishra 1, Shakti Prasad Mohanty 2, Sateesh Kumar Pradhan 3 1 Research scholar, Utkal University Bhubaneswar-751004, India
More informationHandbook of Logic and Proof Techniques for Computer Science
Steven G. Krantz Handbook of Logic and Proof Techniques for Computer Science With 16 Figures BIRKHAUSER SPRINGER BOSTON * NEW YORK Preface xvii 1 Notation and First-Order Logic 1 1.1 The Use of Connectives
More informationBeyond Sequential Covering Boosted Decision Rules
Beyond Sequential Covering Boosted Decision Rules Krzysztof Dembczyński 1, Wojciech Kotłowski 1,andRomanSłowiński 2 1 Poznań University of Technology, 60-965 Poznań, Poland kdembczynski@cs.put.poznan.pl,
More informationFriedman s test with missing observations
Friedman s test with missing observations Edyta Mrówka and Przemys law Grzegorzewski Systems Research Institute, Polish Academy of Sciences Newelska 6, 01-447 Warsaw, Poland e-mail: mrowka@ibspan.waw.pl,
More informationA novel k-nn approach for data with uncertain attribute values
A novel -NN approach for data with uncertain attribute values Asma Trabelsi 1,2, Zied Elouedi 1, and Eric Lefevre 2 1 Université de Tunis, Institut Supérieur de Gestion de Tunis, LARODEC, Tunisia trabelsyasma@gmail.com,zied.elouedi@gmx.fr
More information