Using boxes and proximity to classify data into several categories


R u t c o r Research R e p o r t Using boxes and proximity to classify data into several categories Martin Anthony a Joel Ratsaby b RRR 7-01, February 01 RUTCOR Rutgers Center for Operations Research Rutgers University 640 Bartholomew Road Piscataway, New Jersey 08854-8003 Telephone: 73-445-3804 Telefax: 73-445-547 Email: rrr@rutcor.rutgers.edu http://rutcor.rutgers.edu/ rrr a Department of Mathematics, The London School of Economics and Political Science, Houghton Street, London WCA AE, UK. m.anthony@lse.ac.uk b Electrical and Electronics Engineering Department, Ariel University Center of Samaria, Ariel 40700, Israel. ratsaby@ariel.ac.il

RUTCOR Research Report RRR 7-2012, February 2012

Using boxes and proximity to classify data into several categories

Martin Anthony, Joel Ratsaby

Abstract. The use of boxes for pattern classification has been widespread and is a fairly natural way in which to partition data into different classes or categories. In this paper we consider multi-category classifiers which are based on unions of boxes. The classification method studied may be described as follows: find boxes such that all points in the region enclosed by each box are assumed to belong to the same category, and then classify remaining points by considering their distances to these boxes, assigning to a point the category of the nearest box. This extends the simple method of classifying by unions of boxes by incorporating a natural way, based on proximity, of classifying points outside the boxes. We analyse the generalization accuracy of such classifiers and obtain generalization error bounds that depend on a measure of how definitive the classification of the training points is.

Acknowledgements: This work was supported in part by the IST Programme of the European Community, under the PASCAL2 Network of Excellence, IST-2007-216886. Part of the work was carried out while Martin Anthony was a visitor at Rutgers University Center for Operations Research.

1 Box-based multi-category classifiers

Classification in which each category or class is a union of boxes is a long-studied and natural method for pattern classification. It is central, for instance, to the methods used for the logical analysis of data (see, for example, [9, 10, 15, 6]) and has been more widely studied as a geometrical classifier (see [11], for instance). More recently, unions of boxes have been used in combination with a nearest-neighbor or proximity paradigm for binary classification [5] and multi-category classification [13], enabling meaningful classification for points of the domain that lie outside any of the boxes.

In this paper, we analyse multi-category classifiers of the type described in [13]. In that paper, they describe a set of classifiers based on boxes and nearest neighbors, where the metric used for the nearest-neighbor measure is the Manhattan or taxicab metric. (We give explicit details shortly.) They use an agglomerative box-clustering method to produce a set of candidate classifiers of this type. They then select from these one that is, in a sense they define, optimal. First they focus on the classifiers which are, with respect to the two dimensions of error on the sample, $E$, and complexity (number of boxes), $B$, Pareto-optimal. Among these they then select a classifier that minimizes an objective function of the form $E/E_0 + B/B_0$, effecting a tradeoff between error and complexity; and, if there is more than one such classifier, they choose the one that minimizes $E$. They provide some experimental evidence that this approach works.

Here, we obtain generalization error bounds for the box-based classifiers of the type considered in [13], within the standard PAC model of probabilistic learning. The bounds we obtain depend on the error and the complexity, and they improve (that is, they decrease) the more definitive the classification of the sample points is.

Suppose points of $[0,1]^n$ are to be classified into $C$ classes, which we will assume are labeled $1, 2, \ldots, C$. We let $[C]$ denote the set $\{1, 2, \ldots, C\}$. A box (or, more exactly, an axis-parallel box) in $\mathbb{R}^n$ is a set of the form
$$I(u,v) = \{x \in \mathbb{R}^n : u_i \le x_i \le v_i,\ 1 \le i \le n\},$$
where $u, v \in [0,1]^n$ and $u \le v$, meaning that $u_i \le v_i$ for each $i$. We consider multi-category classifiers which are based on $C$ unions of boxes, as we now describe. For $k = 1, \ldots, C$, suppose that $S_k$ is a union of some number, $B_k$, of boxes:
$$S_k = \bigcup_{j=1}^{B_k} I(u(k,j), v(k,j)).$$
Here, the $j$th box is defined by $u(k,j), v(k,j) \in [0,1]^n$ with $u(k,j) \le v(k,j)$ (so, for each $i$ between $1$ and $n$, $u(k,j)_i \le v(k,j)_i$). We assume, further, that for

$k \ne l$, $S_k \cap S_l = \emptyset$. We think of $S_k$ as being a region of the domain all of whose points we assume to belong to class $k$. So, as in [13] and [15], for instance, the boxes in $S_k$ might be constructed by agglomerative box-clustering methods.

To define our classifiers, we will make use of a metric on $[0,1]^n$. To be specific, as in [13], $d$ will be the $d_1$, or Manhattan (taxicab), metric: for $x, y \in [0,1]^n$,
$$d(x,y) = \sum_{i=1}^{n} |x_i - y_i|.$$
We could equally well (as in [5], where the two-class case is the focus) use the supremum or $d_\infty$ metric, defined by $d_\infty(x,y) = \max\{|x_i - y_i| : 1 \le i \le n\}$, and similar results would be obtained. For $x \in [0,1]^n$ and $S \subseteq [0,1]^n$, the distance from $x$ to $S$ is
$$d(x, S) = \inf_{y \in S} d(x, y).$$

Let $S = (S_1, S_2, \ldots, S_C)$ and denote by $h_S$ the classifier from $[0,1]^n$ into $[C]$ defined as follows: for $x \in [0,1]^n$,
$$h_S(x) = \operatorname{argmin}_{1 \le k \le C} d(x, S_k),$$
where, if $d(x, S_k)$ is minimized for more than one value of $k$, one of these is chosen randomly as the value of $h_S(x)$. So, in other words, the class label assigned to $x$ is $k$ where $S_k$ is the closest to $x$ of the regions $S_1, S_2, \ldots, S_C$. We refer to $B = B_1 + B_2 + \cdots + B_C$ as the number of boxes in $S$ and in $h_S$. We will denote by $H_B$ the set of all such classifiers where the number of boxes is $B$. The set of all possible classifiers we consider is then $H = \bigcup_{B=1}^{\infty} H_B$.

These classifiers, therefore, are based, as a starting point, on regions assumed to be of particular categories. These regions are each unions of boxes, and the regions do not overlap. In practice, these boxes and the corresponding regions will likely have been constructed directly from a training sample by finding boxes containing sample points of a particular class, and merging, or agglomerating, these; see [13]. See, for example, Figure 1. The three types of boxes are indicated, and the pale gray region is the region not covered by any of the boxes.

Figure 1: Boxes of three categories.

Then, for all other points of the domain, the classification of a point is given by the class of the region to which it is closest in the $d_1$ metric. For the initial configuration of boxes indicated in Figure 1, the final classification of the whole domain is as indicated in Figure 2. Grid lines have been inserted in these figures to make it easier to see the correspondence between them.

Figure 2: Classification of the whole domain resulting from the boxes of Figure 1.
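To make the preceding definitions concrete, here is a minimal illustrative sketch (ours, not part of the original report) of the classification rule $h_S$ in Python. It assumes each region $S_k$ is supplied as a list of corner pairs $(u, v)$; the function names `dist_to_box`, `dist_to_union` and `classify` are our own.

```python
# Sketch of the box-and-proximity classifier h_S described above.  Each class is
# represented by a list of axis-parallel boxes, a box being a pair (u, v) of
# corner points with u <= v coordinate-wise.  Distances use the Manhattan (d_1)
# metric, as in the text.

def dist_to_box(x, box):
    """d_1 distance from the point x to the axis-parallel box I(u, v)."""
    u, v = box
    return sum(max(u_i - x_i, 0.0, x_i - v_i) for x_i, u_i, v_i in zip(x, u, v))

def dist_to_union(x, boxes):
    """d_1 distance from x to a union of boxes (infinite if the union is empty)."""
    return min((dist_to_box(x, b) for b in boxes), default=float("inf"))

def classify(x, regions):
    """Index (0-based) of the region S_k closest to x.  Ties are broken by the
    smallest index here; the classifier h_S in the text breaks them randomly."""
    dists = [dist_to_union(x, boxes) for boxes in regions]
    return min(range(len(regions)), key=lambda k: dists[k])

# Toy example in [0,1]^2 with two classes and one box each.
regions = [
    [((0.0, 0.0), (0.3, 0.3))],   # S_1
    [((0.6, 0.6), (1.0, 1.0))],   # S_2
]
print(classify((0.4, 0.4), regions))  # 0: the point is closer to S_1
print(classify((0.7, 0.5), regions))  # 1: the point is closer to S_2
```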

These classifiers seem quite natural from a geometrical point of view and, unlike black-box classifiers such as neural networks, can be described and understood: there are box-shaped regions where we assert a known classification, and the classification elsewhere is determined by an arguably fairly sensible nearest-neighbor approach. It is also potentially useful that the classification is explicitly based on distance, and so there is a real-valued function $f$ underlying the classifier: if the classification of $x$ is $k$, then we can consider how much further $x$ is from the next-nearest category of box. This real number,
$$f(x) = \min_{l \ne k} d(x, S_l) - d(x, S_k),$$
quantifies, in a sense, how sure or definitive the classification is. In particular, if $f(x)$ is relatively large, it means that $x$ is quite far from boxes of the other categories. We could interpret the value of the function $f$ as an indicator of how confident we might be about the classification of a point. A point in the domain with a large value of $f$ will be classified more definitely than one with a small value of $f$, and we might think that the classification of the first point is more reliable than that of the second, because the larger value of $f$ indicates that the first point is further from boxes of other categories than is the second point. Furthermore, by considering the value of $f$ on the sample points, we have some measure of how robust the classifier is on the training sample. This measure of robustness plays a role in our generalization error bounds.

2 Generalization error bounds for definitive classification

Guermeur [14] describes a fairly general framework for the PAC-analysis of multi-category classifiers, and we will show how we can use his results to bound the generalization error for our classifiers. This involves defining a certain function class and then bounding the covering numbers of that class. First, however, we obtain a tighter bound for the case in which the margin error is zero. What this means is that we bound the error of the classifier in terms of how definitive the classification of the training sample is.

2.1 Classifiers with definitive classification on all sample points

The following definition describes what we mean by definitive classification on the sample.

Definition 2.1 Let $Z = X \times Y$ where $X = [0,1]^n$ and $Y = [C] = \{1, 2, \ldots, C\}$. For $\gamma \in (0,1)$, and for $\mathbf{z} \in Z^m$, we say that $h_S \in H$ achieves margin $\gamma$ on the sample point $z = (x, y)$ in $\mathbf{z}$ if, for all $l \ne y$, $d(x, S_l) > d(x, S_y) + \gamma$. We say that $h_S$ achieves margin $\gamma$ on the sample $\mathbf{z}$ if it achieves margin $\gamma$ on each of the $(x_i, y_i)$ in $\mathbf{z}$.
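The confidence function $f$ and the margin condition of Definition 2.1 can be computed directly from the class distances. The following is a small self-contained sketch (ours, with made-up numbers), written in terms of the vector $(d(x, S_1), \ldots, d(x, S_C))$ so that it does not depend on how those distances were obtained.

```python
# Confidence value f(x) and the margin test of Definition 2.1, given the vector
# of distances (d(x, S_1), ..., d(x, S_C)) for a point x (0-based indices).

def confidence(dists):
    """f(x) = min_{l != k} d(x, S_l) - d(x, S_k), where k is the predicted class."""
    k = min(range(len(dists)), key=lambda i: dists[i])
    nearest_other = min(d for i, d in enumerate(dists) if i != k)
    return nearest_other - dists[k]

def achieves_margin(dists, y, gamma):
    """Definition 2.1: d(x, S_l) > d(x, S_y) + gamma for every l != y."""
    return all(d > dists[y] + gamma for l, d in enumerate(dists) if l != y)

dists = [0.20, 0.45, 0.60]              # made-up distances to S_1, S_2, S_3
print(confidence(dists))                # 0.25
print(achieves_margin(dists, 0, 0.2))   # True: 0.45 > 0.40 and 0.60 > 0.40
print(achieves_margin(dists, 1, 0.1))   # False: the true class is not the closest
```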

So, $h_S$ achieves margin $\gamma$ on a sample point if $S_y$ is the closest of the $S_k$ to $x$ (so that the class label assigned to $x$ will be $y$) and, also, every other $S_l$ has distance from $x$ that is at least $\gamma$ greater than the distance from $x$ to $S_y$.

To quantify the performance of a classifier after training, we use a form of the PAC model of computational learning theory. See, for instance, [8, 4, 19]. This assumes that we have training examples $z_i = (x_i, y_i) \in Z = [0,1]^n \times [C]$, each of which has been generated randomly according to some fixed probability measure $P$ on $Z$. Then, we can regard a training sample of length $m$, which is an element of $Z^m$, as being randomly generated according to the product probability measure $P^m$. The error of a classifier $h_S$ is the probability that it does not definitely assign the correct class to a subsequent randomly-drawn instance, and we include in this the cases in which there is more than one equally close $S_k$, for the random choice then made might be incorrect. So, the error is the probability that it is not true that, for $(x, y) \in Z$, we have $d(x, S_y) < d(x, S_k)$ for all $k \ne y$; that is,
$$\mathrm{er}_P(h_S) = P\left\{(x, y) : d(x, S_y) \ge \min_{k \ne y} d(x, S_k)\right\}.$$
What we would hope is that, if a classifier performs well on a large enough sample, then its error is likely to be low. The following result is of this type, where good performance on the sample means correct, and definitively correct, classification on the sample.

Theorem 2.2 Let $\delta \in (0,1)$ and suppose $P$ is a probability measure on $Z = [0,1]^n \times [C]$. With $P^m$-probability at least $1 - \delta$, $\mathbf{z} \in Z^m$ will be such that we have the following: for all $B$ and for all $\gamma \in (0,1)$, for all $h_S \in H_B$, if $h_S$ achieves margin $\gamma$ on $\mathbf{z}$, then
$$\mathrm{er}_P(h_S) < \frac{2}{m}\left(2nB \log_2\left(\frac{12n}{\gamma}\right) + B \log_2(2C) + \log_2\left(\frac{4}{\gamma\delta}\right)\right).$$

In particular, Theorem 2.2 provides a high-probability guarantee on the real error of a classifier once training has taken place, based on the observed margin that has been obtained (this being the largest value of $\gamma$ such that a margin of $\gamma$ has been achieved).

2.2 Proof of Theorem 2.2

We first derive a result in which $B$ and $\gamma$ are fixed in advance. We then remove the requirement that these parameters be fixed, to obtain Theorem 2.2.
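Before turning to the proof, the bound of Theorem 2.2 is easy to evaluate numerically. The small sketch below is ours; it uses the constants exactly as reconstructed in the statement above, so the precise values should be treated as indicative, and the parameter settings are purely illustrative.

```python
from math import log2

def thm22_bound(m, n, B, C, gamma, delta):
    """Right-hand side of Theorem 2.2 (as reconstructed above): an upper bound on
    er_P(h_S) when h_S in H_B achieves margin gamma on a sample of size m."""
    return (2.0 / m) * (2 * n * B * log2(12 * n / gamma)
                        + B * log2(2 * C)
                        + log2(4 / (gamma * delta)))

# Illustrative only: dimension 5, 10 boxes, 3 classes, margin 0.1, delta 0.05.
# The bound is vacuous (greater than 1) for small m and shrinks like 1/m.
for m in (1_000, 10_000, 100_000):
    print(m, round(thm22_bound(m, n=5, B=10, C=3, gamma=0.1, delta=0.05), 4))
```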

For $\gamma \in (0,1)$, let $A_\gamma \subseteq [0,1]$ be the set of all integer multiples of $\gamma/4n$ belonging to $[0,1]$, together with $1$. So,
$$A_\gamma = \left\{0, \frac{\gamma}{4n}, \frac{2\gamma}{4n}, \frac{3\gamma}{4n}, \ldots, \left\lfloor\frac{4n}{\gamma}\right\rfloor\frac{\gamma}{4n}, 1\right\}.$$
We have $|A_\gamma| \le \frac{4n}{\gamma} + 2 \le \frac{6n}{\gamma}$. Let $\hat{H}_B \subseteq H_B$ be the set of all classifiers of the form $h_S$ where, for each $k$, $S_k$ is a union of boxes of the form $I(u,v)$ with $u, v \in A_\gamma^n$.

Lemma 2.3 With the above notation, we have $|\hat{H}_B| \le \left(\frac{6n}{\gamma}\right)^{2nB} C^B$.

Proof: The number of possible boxes $I(u,v)$ with $u, v \in A_\gamma^n$ is at most $|A_\gamma^n|^2 = |A_\gamma|^{2n} \le \left(\frac{6n}{\gamma}\right)^{2n}$. A classifier in $\hat{H}_B$ is obtained by choosing $B$ such boxes and labeling each with a category between $1$ and $C$. So,
$$|\hat{H}_B| \le \left(\left(\frac{6n}{\gamma}\right)^{2n}\right)^{B} C^B = \left(\frac{6n}{\gamma}\right)^{2nB} C^B,$$
as required.

For $\gamma \ge 0$, we define the $\gamma$-margin error of $h_S$ on a sample $\mathbf{z}$. Denoted by $E_{\mathbf{z}}^{\gamma}(h_S)$, this is simply the proportion of $(x_i, y_i)$ in $\mathbf{z}$ on which $h_S$ does not achieve a margin of $\gamma$. In other words,
$$E_{\mathbf{z}}^{\gamma}(h_S) = \frac{1}{m}\sum_{i=1}^{m} \mathbb{I}\left\{\exists\, l \ne y_i : d(x_i, S_l) \le d(x_i, S_{y_i}) + \gamma\right\}.$$
Here, $\mathbb{I}(A)$ denotes the indicator function of a set or event $A$. Evidently, to say that $h_S$ achieves margin $\gamma$ on $\mathbf{z}$ is to say that $E_{\mathbf{z}}^{\gamma}(h_S) = 0$.
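The empirical quantity $E_{\mathbf{z}}^{\gamma}(h_S)$ is straightforward to compute once the class distances of the sample points are known. A brief sketch (ours, with made-up distances and labels) taking as input the matrix of values $d(x_i, S_k)$:

```python
def margin_error(dist_matrix, labels, gamma):
    """E_z^gamma(h_S): the fraction of sample points (x_i, y_i) on which margin gamma
    is NOT achieved, i.e. some l != y_i has d(x_i, S_l) <= d(x_i, S_{y_i}) + gamma."""
    bad = 0
    for dists, y in zip(dist_matrix, labels):
        if any(d <= dists[y] + gamma for l, d in enumerate(dists) if l != y):
            bad += 1
    return bad / len(labels)

# Three sample points, three classes; distances and labels are made up.
D = [[0.10, 0.50, 0.70],
     [0.40, 0.20, 0.30],
     [0.30, 0.60, 0.25]]
y = [0, 1, 0]
print(margin_error(D, y, gamma=0.0))    # 1/3: the third point is misclassified
print(margin_error(D, y, gamma=0.15))   # 2/3: the second point's margin is only 0.1
```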

The next part of the proof uses a symmetrization technique similar to those first used in [20, 18, 12, 17] and in subsequent work extending those techniques to learning with real-valued functions, such as [16, 1, 2, 7].

Lemma 2.4 If
$$Q = \{\mathbf{s} \in Z^m : \exists h_S \in H_B \text{ with } E_{\mathbf{s}}^{\gamma}(h_S) = 0,\ \mathrm{er}_P(h_S) \ge \epsilon\}$$
and
$$T = \{(\mathbf{s}, \mathbf{s}') \in Z^m \times Z^m : \exists h_S \in H_B \text{ with } E_{\mathbf{s}}^{\gamma}(h_S) = 0,\ E_{\mathbf{s}'}^{0}(h_S) \ge \epsilon/2\},$$
then, for $m \ge 8/\epsilon$, $P^m(Q) \le 2\,P^{2m}(T)$.

Proof: We have
$$P^{2m}(T) \ge P^{2m}\left\{(\mathbf{s}, \mathbf{s}') : \exists h_S \in H_B,\ E_{\mathbf{s}}^{\gamma}(h_S) = 0,\ \mathrm{er}_P(h_S) \ge \epsilon \text{ and } E_{\mathbf{s}'}^{0}(h_S) \ge \epsilon/2\right\} = \int_Q P^m\left\{\mathbf{s}' : \exists h_S \in H_B,\ E_{\mathbf{s}}^{\gamma}(h_S) = 0,\ \mathrm{er}_P(h_S) \ge \epsilon \text{ and } E_{\mathbf{s}'}^{0}(h_S) \ge \epsilon/2\right\} dP^m(\mathbf{s}) \ge \frac{1}{2}\,P^m(Q),$$
for $m \ge 8/\epsilon$. The final inequality follows from the fact that if $\mathrm{er}_P(h_S) \ge \epsilon$, then, for $m \ge 8/\epsilon$, $P^m\left(E_{\mathbf{s}'}^{0}(h_S) \ge \epsilon/2\right) \ge 1/2$ for any $h_S \in H$, something that follows by a Chernoff bound.

We next bound the probability of $T$ and use this to obtain a generalization error bound for fixed $\gamma$ and $B$.

Proposition 2.5 Let $B \in \mathbb{N}$, $\gamma \in (0,1)$ and $\delta \in (0,1)$. Then, with probability at least $1 - \delta$, if $h_S \in H_B$ and $E_{\mathbf{z}}^{\gamma}(h_S) = 0$, then
$$\mathrm{er}_P(h_S) < \frac{2}{m}\left(2nB \log_2\left(\frac{6n}{\gamma}\right) + B \log_2 C + \log_2\left(\frac{2}{\delta}\right)\right).$$

Proof: Let $G$ be the permutation group (the 'swapping group') on the set $\{1, 2, \ldots, 2m\}$ generated by the transpositions $(i, m+i)$ for $i = 1, 2, \ldots, m$. Then $G$ acts on $Z^{2m}$ by

permuting the coordinates: for $\sigma \in G$, $\sigma(z_1, z_2, \ldots, z_{2m}) = (z_{\sigma(1)}, \ldots, z_{\sigma(2m)})$. By invariance of $P^{2m}$ under the action of $G$,
$$P^{2m}(T) \le \max\{\Pr(\sigma \mathbf{z} \in T) : \mathbf{z} \in Z^{2m}\},$$
where $\Pr$ denotes the probability over uniform choice of $\sigma$ from $G$. See, for instance, [17, 3].

Fix $\mathbf{z} \in Z^{2m}$. Suppose $\tau \mathbf{z} = (\mathbf{s}, \mathbf{s}') \in T$ and that $h_S \in H_B$ is such that $E_{\mathbf{s}}^{\gamma}(h_S) = 0$ and $E_{\mathbf{s}'}^{0}(h_S) \ge \epsilon/2$. Suppose that $S = (S_1, S_2, \ldots, S_C)$ where, for each $k$,
$$S_k = \bigcup_{j=1}^{B_k} I(u(k,j), v(k,j)).$$
Let $\hat{u}(k,j), \hat{v}(k,j) \in A_\gamma^n$ be such that, for all $r$,
$$|\hat{u}(k,j)_r - u(k,j)_r| \le \frac{\gamma}{4n}, \qquad |\hat{v}(k,j)_r - v(k,j)_r| \le \frac{\gamma}{4n}.$$
These exist by definition of $A_\gamma$. Let
$$\hat{S}_k = \bigcup_{j=1}^{B_k} I(\hat{u}(k,j), \hat{v}(k,j))$$
and $\hat{S} = (\hat{S}_1, \ldots, \hat{S}_C)$. Let $\hat{h}_S$ be the corresponding classifier in $\hat{H}_B$; that is, $\hat{h}_S = h_{\hat{S}}$.

The following (easily seen) geometrical fact will be useful: for any $k$, for any $a \in S_k$, there exists $\hat{a} \in \hat{S}_k$ such that $d(a, \hat{a}) \le \gamma/4$; and, conversely, for any $\hat{a} \in \hat{S}_k$, there exists $a \in S_k$ such that $d(a, \hat{a}) \le \gamma/4$.

For any $k$, there is some $a \in S_k$ such that $d(x, S_k) = d(x, a)$. If $\hat{a} \in \hat{S}_k$ is such that $|a_r - \hat{a}_r| \le \gamma/4n$ for all $r$, then it follows that $d(a, \hat{a}) = \sum_{r=1}^{n} |a_r - \hat{a}_r| \le \gamma/4$. So,
$$d(x, \hat{S}_k) \le d(x, \hat{a}) \le d(x, a) + d(a, \hat{a}) \le d(x, S_k) + \gamma/4.$$
A similar argument shows that, for each $k$, $d(x, S_k) \le d(x, \hat{S}_k) + \gamma/4$.

Suppose that, for all $l \ne y$, $d(x, S_l) > d(x, S_y) + \gamma$. Then, if $l \ne y$, we have
$$d(x, \hat{S}_l) \ge d(x, S_l) - \frac{\gamma}{4} > d(x, S_y) + \gamma - \frac{\gamma}{4} \ge d(x, \hat{S}_y) - \frac{\gamma}{4} + \gamma - \frac{\gamma}{4} = d(x, \hat{S}_y) + \frac{\gamma}{2}.$$
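The grid approximation at the heart of this step can also be checked empirically. The sketch below (ours, and purely illustrative; it repeats the small distance helpers so as to be self-contained) snaps box corners to the grid $A_\gamma$ and verifies on random points that the distance to a union of boxes changes by at most $\gamma/4$.

```python
import random

def dist_to_box(x, box):
    u, v = box
    return sum(max(u_i - x_i, 0.0, x_i - v_i) for x_i, u_i, v_i in zip(x, u, v))

def dist_to_union(x, boxes):
    return min(dist_to_box(x, b) for b in boxes)

def snap(t, gamma, n):
    """Nearest point of the grid A_gamma (multiples of gamma/4n, capped at 1) to t."""
    step = gamma / (4 * n)
    return min(round(t / step) * step, 1.0)

def snap_boxes(boxes, gamma, n):
    return [(tuple(snap(t, gamma, n) for t in u), tuple(snap(t, gamma, n) for t in v))
            for u, v in boxes]

n, gamma = 3, 0.2
boxes = [((0.12, 0.33, 0.05), (0.41, 0.77, 0.60)),
         ((0.50, 0.10, 0.70), (0.90, 0.20, 0.95))]
snapped = snap_boxes(boxes, gamma, n)
worst = max(abs(dist_to_union(x, boxes) - dist_to_union(x, snapped))
            for x in (tuple(random.random() for _ in range(n)) for _ in range(10_000)))
print(worst, "<=", gamma / 4)   # the observed gap stays below gamma/4
```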

So, if $h_S$ achieves margin $\gamma$ on $(x, y)$, then $\hat{h}_S$ achieves margin $\gamma/2$. It follows that if $E_{\mathbf{s}}^{\gamma}(h_S) = 0$ then $E_{\mathbf{s}}^{\gamma/2}(\hat{h}_S) = 0$.

Now suppose that there is $l \ne y$ such that $d(x, S_l) \le d(x, S_y)$. Then
$$d(x, \hat{S}_l) \le d(x, S_l) + \frac{\gamma}{4} \le d(x, S_y) + \frac{\gamma}{4} \le d(x, \hat{S}_y) + \frac{\gamma}{4} + \frac{\gamma}{4} = d(x, \hat{S}_y) + \frac{\gamma}{2}.$$
This argument shows that if $E_{\mathbf{s}'}^{0}(h_S) \ge \epsilon/2$, then $E_{\mathbf{s}'}^{\gamma/2}(\hat{h}_S) \ge \epsilon/2$. It now follows that if $\tau \mathbf{z} \in T$, then, for some $\hat{h}_S \in \hat{H}_B$, $\tau \mathbf{z} \in R_{\hat{h}_S}$, where
$$R_{\hat{h}_S} = \{(\mathbf{s}, \mathbf{s}') \in Z^m \times Z^m : E_{\mathbf{s}}^{\gamma/2}(\hat{h}_S) = 0,\ E_{\mathbf{s}'}^{\gamma/2}(\hat{h}_S) \ge \epsilon/2\}.$$
By symmetry, $\Pr(\sigma \mathbf{z} \in R_{\hat{h}_S}) = \Pr(\sigma \tau \mathbf{z} \in R_{\hat{h}_S})$. Suppose that $E_{\mathbf{s}'}^{\gamma/2}(\hat{h}_S) = r/m$, where $r \ge \epsilon m/2$ is the number of $(x_i, y_i)$ in $\mathbf{s}'$ on which $\hat{h}_S$ does not achieve margin $\gamma/2$. Then those permutations $\sigma$ such that $\sigma \tau \mathbf{z} \in R_{\hat{h}_S}$ are precisely those that do not transpose these $r$ coordinates, and there are $2^{m-r} \le 2^{m - \epsilon m/2}$ such $\sigma$. It follows that, for each fixed $\hat{h}_S \in \hat{H}_B$,
$$\Pr(\sigma \mathbf{z} \in R_{\hat{h}_S}) \le \frac{2^{m(1 - \epsilon/2)}}{|G|} = 2^{-\epsilon m/2}.$$
We therefore have
$$\Pr(\sigma \mathbf{z} \in T) \le \Pr\left(\sigma \mathbf{z} \in \bigcup_{\hat{h}_S \in \hat{H}_B} R_{\hat{h}_S}\right) \le \sum_{\hat{h}_S \in \hat{H}_B} \Pr(\sigma \mathbf{z} \in R_{\hat{h}_S}) \le |\hat{H}_B|\, 2^{-\epsilon m/2}.$$
So,
$$P^m(Q) \le 2\,P^{2m}(T) \le 2\,|\hat{H}_B|\, 2^{-\epsilon m/2} \le 2\left(\frac{6n}{\gamma}\right)^{2nB} C^B\, 2^{-\epsilon m/2}.$$
This is at most $\delta$ when
$$\epsilon = \frac{2}{m}\left(2nB \log_2\left(\frac{6n}{\gamma}\right) + B \log_2 C + \log_2\left(\frac{2}{\delta}\right)\right),$$
as stated.

Next, we use this to obtain a result in which $\gamma$ and $B$ are not prescribed in advance. For $\alpha_1, \alpha_2, \delta \in (0,1)$, let $E(\alpha_1, \alpha_2, \delta)$ be the set of $\mathbf{z} \in Z^m$ for which there exists some $h_S \in H_B$ which achieves margin $\alpha_2$ on $\mathbf{z}$ and which has $\mathrm{er}_P(h_S) \ge \epsilon_1(m, \alpha_1, \delta, B)$, where
$$\epsilon_1(m, \alpha_1, \delta, B) = \frac{2}{m}\left(2nB \log_2\left(\frac{6n}{\alpha_1}\right) + B \log_2 C + \log_2\left(\frac{2}{\delta}\right)\right).$$

Proposition 2.5 tells us that $P^m(E(\alpha, \alpha, \delta)) \le \delta$. It is also clear that if $\alpha_1 \le \alpha \le \alpha_2$ and $\delta_1 \le \delta$, then $E(\alpha_1, \alpha_2, \delta_1) \subseteq E(\alpha, \alpha, \delta)$. It follows, from [7], that
$$P^m\left(\bigcup_{\gamma \in (0,1]} E(\gamma/2, \gamma, \delta\gamma/2)\right) \le \delta.$$
In other words, for fixed $B$, with probability at least $1 - \delta$, for all $\gamma \in (0,1]$, we have that if $h_S \in H_B$ achieves margin $\gamma$ on the sample, then $\mathrm{er}_P(h_S) < \epsilon_2(m, \gamma, \delta, B)$, where
$$\epsilon_2(m, \gamma, \delta, B) = \frac{2}{m}\left(2nB \log_2\left(\frac{12n}{\gamma}\right) + B \log_2 C + \log_2\left(\frac{4}{\gamma\delta}\right)\right).$$
Note that $\gamma$ now need not be prescribed in advance.

It now follows that, with probability at least $1 - \delta/2^B$, for any $\gamma \in (0,1)$, if $h_S \in H_B$ achieves margin $\gamma$, then $\mathrm{er}_P(h_S) \le \epsilon_2(m, \gamma, \delta/2^B, B)$. So, with probability at least $1 - \sum_{B=1}^{\infty} \delta/2^B = 1 - \delta$, we have: for all $B$, for all $\gamma \in (0,1)$, if $h_S \in H_B$ achieves margin $\gamma$, then
$$\mathrm{er}_P(h_S) \le \epsilon_2(m, \gamma, \delta/2^B, B) = \frac{2}{m}\left(2nB \log_2\left(\frac{12n}{\gamma}\right) + B \log_2 C + \log_2\left(\frac{4}{\gamma\delta}\right) + B\right).$$
Theorem 2.2 now follows on noting that $\log_2(2C) = \log_2 C + 1$.

2.3 Allowing less definitive classification on some sample points

As mentioned, Guermeur [14] has developed a fairly general framework in which to analyse multi-category classification, and we can apply one of his results to obtain a generalization error bound applicable to the case in which a margin of $\gamma$ is not obtained on all the training examples. We describe his result and explain how to formulate our problem in his framework. This requires us to define a certain real-valued class of functions, the values of which may themselves impart some useful information as to how confident one might be about classification. To use his result to obtain a generalization error bound, we then bound the covering numbers of this class of functions. What we obtain is a high-probability bound that takes the following form: for all $B$, for all $\gamma \in (0,1)$, if $h_S \in H_B$, then
$$\mathrm{er}_P(h_S) \le E_{\mathbf{z}}^{\gamma}(h_S) + \epsilon(m, \gamma, \delta, B),$$
where $\epsilon$ tends to $0$ as $m \to \infty$ and $\epsilon$ decreases as $\gamma$ increases. Recall that $E_{\mathbf{z}}^{\gamma}(h_S)$ is the $\gamma$-margin error of $h_S$ on the sample. The rationale for seeking such a bound is that there is likely to be a trade-off between margin error on the sample and the value of $\epsilon$: taking $\gamma$ small, so that the margin error term is zero, might entail a large value of $\epsilon$; and, conversely, choosing $\gamma$ large will make $\epsilon$ relatively small, but lead to a large margin error term. So, in principle, since the value $\gamma$ is free to be chosen, one could optimize the choice of $\gamma$ on the right-hand side of the bound to minimize it.
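Since $\gamma$ is free to be chosen, this trade-off can be explored numerically. The sketch below is ours: the bound term `eps` merely mimics the general shape of the bounds in this section (decreasing in $\gamma$ and shrinking with $m$), with constants that are indicative only, and the per-point margins $\min_{l \ne y_i} d(x_i, S_l) - d(x_i, S_{y_i})$ are randomly generated stand-ins for real data.

```python
import random
from math import log, sqrt

def emp_margin_error(margins, gamma):
    """Fraction of sample points whose margin min_{l != y_i} d(x_i, S_l) - d(x_i, S_{y_i})
    is at most gamma (a non-positive margin means the point is not definitively correct)."""
    return sum(1 for f in margins if f <= gamma) / len(margins)

def eps(m, n, B, C, gamma, delta):
    """A bound term of the general shape used in this section; constants indicative only."""
    return sqrt((2.0 / m) * (2 * n * B * log(6 * n / gamma)
                             + 2 * B * log(C) + log(2 / delta))) + 1.0 / m

def best_gamma(margins, m, n, B, C, delta, grid):
    """Grid value of gamma minimising empirical margin error plus bound term."""
    return min(grid, key=lambda g: emp_margin_error(margins, g) + eps(m, n, B, C, g, delta))

random.seed(0)
m, n, B, C, delta = 100_000, 5, 10, 3, 0.05
margins = [random.uniform(-0.1, 0.6) for _ in range(m)]   # made-up per-point margins
grid = [0.05 * k for k in range(1, 20)]
g = best_gamma(margins, m, n, B, C, delta, grid)
print(g, round(emp_margin_error(margins, g) + eps(m, n, B, C, g, delta), 3))
```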

We now describe Guermeur's framework, with some adjustments to the notation to make it consistent with the notation used here. There is a set $\mathcal{G}$ of functions from $X = [0,1]^n$ into $\mathbb{R}^C$, and a typical $g \in \mathcal{G}$ is represented by its component functions $g = (g_k)_{k=1}^{C}$. Each $g \in \mathcal{G}$ satisfies the constraint
$$\sum_{k=1}^{C} g_k(x) = 0, \quad \text{for all } x \in X.$$
A function of this type acts as a classifier as follows: it assigns category $l \in [C]$ to $x \in X$ if and only if $g_l(x) > \max_{k \ne l} g_k(x)$. If more than one value of $k$ maximizes $g_k(x)$, then the classification is left undefined (assigned some value not in $[C]$). The risk of $g \in \mathcal{G}$, when the underlying probability measure on $X \times Y$ is $P$, is defined to be
$$R(g) = P\left\{(x, y) \in X \times [C] : g_y(x) \le \max_{k \ne y} g_k(x)\right\}.$$
For $(v, k) \in \mathbb{R}^C \times [C]$, let $M(v, k) = \frac{1}{2}\left(v_k - \max_{l \ne k} v_l\right)$ and, for $g \in \mathcal{G}$, let $\Delta g$ be the function $X \to \mathbb{R}^C$ given by
$$\Delta g(x) = (\Delta g_k(x))_{k=1}^{C} = (M(g(x), k))_{k=1}^{C}.$$
Given a sample $\mathbf{z} \in (X \times [C])^m$, let
$$R_{\gamma, \mathbf{z}}(g) = \frac{1}{m}\sum_{i=1}^{m} \mathbb{I}\left\{\Delta g_{y_i}(x_i) < \gamma\right\}.$$

To describe Guermeur's result, we need covering numbers. Suppose $\mathcal{H}$ is a set of functions from $X$ to $\mathbb{R}^C$. For $\mathbf{x} \in X^m$, define the metric $d_{\mathbf{x}}$ on $\mathcal{H}$ by
$$d_{\mathbf{x}}(h, h') = \max_{1 \le i \le m} \|h(x_i) - h'(x_i)\|_\infty = \max_{1 \le i \le m} \max_{1 \le k \le C} |h_k(x_i) - h'_k(x_i)|.$$
For $\alpha > 0$, a finite subset $\hat{\mathcal{H}}$ of $\mathcal{H}$ is said to be a proper $\alpha$-cover of $\mathcal{H}$ with respect to the metric $d_{\mathbf{x}}$ if, for each $h \in \mathcal{H}$, there exists $\hat{h} \in \hat{\mathcal{H}}$ such that $d_{\mathbf{x}}(h, \hat{h}) \le \alpha$. The class $\mathcal{H}$ is totally bounded if, for each $\alpha > 0$, for each $m$, and for each $\mathbf{x} \in X^m$, there is a finite $\alpha$-cover of $\mathcal{H}$ with respect to $d_{\mathbf{x}}$. The smallest cardinality of an $\alpha$-cover of $\mathcal{H}$ with respect to $d_{\mathbf{x}}$ is denoted $\mathcal{N}(\alpha, \mathcal{H}, d_{\mathbf{x}})$. The covering number $\mathcal{N}(\alpha, \mathcal{H}, m)$ is defined by $\mathcal{N}(\alpha, \mathcal{H}, m) = \max_{\mathbf{x} \in X^m} \mathcal{N}(\alpha, \mathcal{H}, d_{\mathbf{x}})$. A result following from [14] is, in the above notation, as follows:

Theorem 2.6 Let $\delta \in (0,1)$ and suppose $P$ is a probability measure on $Z = [0,1]^n \times [C]$. With $P^m$-probability at least $1 - \delta$, $\mathbf{z} \in Z^m$ will be such that we have the following: for all $\gamma \in (0,1)$ and for all $g \in \mathcal{G}$,
$$R(g) \le R_{\gamma, \mathbf{z}}(g) + \sqrt{\frac{2}{m}\left(\ln \mathcal{N}(\gamma/4, \Delta\mathcal{G}, 2m) + \ln\left(\frac{2}{\delta}\right)\right)} + \frac{1}{m}.$$

We now use Theorem 2.6, with an appropriate choice of function space $\mathcal{G}$. Let us fix $B \in \mathbb{N}$ and let $S = (S_1, \ldots, S_C)$ be, as before, $C$ unions of boxes, with $B$ boxes in total. Let $g^S$ be the function $X = [0,1]^n \to \mathbb{R}^C$ defined by $g^S = (g^S_k)_{k=1}^{C}$, where
$$g^S_k(x) = \frac{1}{C}\sum_{i=1}^{C} d(x, S_i) - d(x, S_k).$$
Let $G_B = \{g^S : h_S \in H_B\}$. Then these functions satisfy the constraint that their coordinate functions sum to the zero function, since
$$\sum_{k=1}^{C} g^S_k(x) = \sum_{k=1}^{C}\left(\frac{1}{C}\sum_{i=1}^{C} d(x, S_i) - d(x, S_k)\right) = \sum_{i=1}^{C} d(x, S_i) - \sum_{k=1}^{C} d(x, S_k) = 0.$$
For each $k$,
$$\Delta g^S_k(x) = M(g^S(x), k) = \frac{1}{2}\left(g^S_k(x) - \max_{l \ne k} g^S_l(x)\right) = \frac{1}{2}\left(\frac{1}{C}\sum_{i=1}^{C} d(x, S_i) - d(x, S_k) - \max_{l \ne k}\left(\frac{1}{C}\sum_{i=1}^{C} d(x, S_i) - d(x, S_l)\right)\right) = \frac{1}{2}\left(\min_{l \ne k} d(x, S_l) - d(x, S_k)\right).$$
From the definition of $g^S$,
$$g^S_y(x) \le \max_{k \ne y} g^S_k(x) \iff \frac{1}{C}\sum_{i=1}^{C} d(x, S_i) - d(x, S_y) \le \max_{k \ne y}\left(\frac{1}{C}\sum_{i=1}^{C} d(x, S_i) - d(x, S_k)\right) \iff d(x, S_y) \ge \min_{k \ne y} d(x, S_k).$$
So it follows that $R(g^S) = \mathrm{er}_P(h_S)$. Similarly, $R_{\gamma, \mathbf{z}}(g^S) = E_{\mathbf{z}}^{2\gamma}(h_S)$.

By bounding the covering numbers of our class $G_B$ of functions, and then by removing the restriction that $B$ be specified in advance, we obtain the following result.

Theorem 2.7 Let $\delta \in (0,1)$ and suppose $P$ is a probability measure on $Z = [0,1]^n \times [C]$. With $P^m$-probability at least $1 - \delta$, $\mathbf{z} \in Z^m$ will be such that we have the following: for all $B$ and for all $\gamma \in (0,1)$, for all $h_S \in H_B$,
$$\mathrm{er}_P(h_S) \le E_{\mathbf{z}}^{2\gamma}(h_S) + \sqrt{\frac{2}{m}\left(2nB \ln\left(\frac{6n}{\gamma}\right) + 2B \ln C + \ln\left(\frac{2}{\delta}\right)\right)} + \frac{1}{m}.$$

2.4 Proof of Theorem 2.7

As the preceding discussion makes clear, the functions in $\Delta G_B$ are all of the form $(\Delta g^S_1, \ldots, \Delta g^S_C)$, where
$$\Delta g^S_k(x) = \frac{1}{2}\left(\min_{l \ne k} d(x, S_l) - d(x, S_k)\right).$$
Here, $S$, as before, involves $B$ boxes. To simplify, we will bound the covering numbers of $2\Delta G_B$, noting that an $\alpha$-covering of $2\Delta G_B$ is an $\alpha/2$-covering of $\Delta G_B$. So, consider the class $F = F_B = \{f^S : h_S \in H_B\}$, where $f^S = 2\Delta g^S$; that is, $f^S : [0,1]^n \to \mathbb{R}^C$ is given by
$$f^S_k(x) = \min_{l \ne k} d(x, S_l) - d(x, S_k).$$
We use some of the same notation and ideas as developed in the proof of Theorem 2.2. We will define $\hat{F}$ to be the subset of $F = F_B$ in which each $S_k$ is of the form $S_k = \bigcup_{j=1}^{B_k} I(\hat{u}(k,j), \hat{v}(k,j))$ where $\hat{u}(k,j), \hat{v}(k,j) \in A_\gamma^n$. We will show that $\hat{F}$ is a $\gamma/2$-cover for $F_B$, and hence a $\gamma/4$-cover for $\Delta G_B$.

Suppose $f^S \in F_B$, where $S = (S_1, S_2, \ldots, S_C)$. Suppose $\hat{u}(k,j), \hat{v}(k,j) \in A_\gamma^n$ are chosen as in the proof of Theorem 2.2, and that $\hat{S}_k = \bigcup_{j=1}^{B_k} I(\hat{u}(k,j), \hat{v}(k,j))$ and $\hat{S} = (\hat{S}_1, \ldots, \hat{S}_C)$. Let $\hat{f}^S$ be the corresponding function in $\hat{F}_B$: $\hat{f}^S = f^{\hat{S}}$. As argued in the proof of Theorem 2.2, for all $k$ and all $x$, $|d(x, S_k) - d(x, \hat{S}_k)| \le \gamma/4$. For any $k$, and for any $x$,
$$|f^S_k(x) - \hat{f}^S_k(x)| = \left|\left(\min_{l \ne k} d(x, S_l) - d(x, S_k)\right) - \left(\min_{l \ne k} d(x, \hat{S}_l) - d(x, \hat{S}_k)\right)\right| \le \left|\min_{l \ne k} d(x, S_l) - \min_{l \ne k} d(x, \hat{S}_l)\right| + \left|d(x, \hat{S}_k) - d(x, S_k)\right|.$$
The second term in this last line is bounded by $\gamma/4$, by the observation just made. Consider the first term. Suppose $\min_{l \ne k} d(x, S_l) = d(x, S_r)$. Then, since $d(x, S_r) \ge d(x, \hat{S}_r) - \gamma/4$, it

follows that
$$\min_{l \ne k} d(x, \hat{S}_l) \le d(x, \hat{S}_r) \le d(x, S_r) + \gamma/4 = \min_{l \ne k} d(x, S_l) + \gamma/4.$$
Similarly, $\min_{l \ne k} d(x, S_l) \le \min_{l \ne k} d(x, \hat{S}_l) + \gamma/4$. So,
$$\left|\min_{l \ne k} d(x, S_l) - \min_{l \ne k} d(x, \hat{S}_l)\right| \le \frac{\gamma}{4}.$$
Therefore, $|f^S_k(x) - \hat{f}^S_k(x)| \le \gamma/4 + \gamma/4 = \gamma/2$. This establishes that $\hat{F}_B$ is a $\gamma/2$-cover of $F_B$ with respect to the supremum metric on functions $X \to \mathbb{R}^C$, meaning that for every $f^S \in F_B$, there is $\hat{f}^S \in \hat{F}_B$ with
$$\sup_{x \in X} \|f^S(x) - \hat{f}^S(x)\|_\infty \le \frac{\gamma}{2}.$$
In particular, therefore, for any $m$ and for any $\mathbf{x} \in X^m$, $\hat{F}_B$ is a $\gamma/2$-cover for $F_B$ with respect to the $d_{\mathbf{x}}$ metric. We now see that the covering number $\mathcal{N}(\gamma/4, \Delta G_B, m)$ is, for all $m$, bounded above by $|\hat{F}_B|$. Bounding this cardinality as in the proof of Theorem 2.2 gives
$$\mathcal{N}(\gamma/4, \Delta G_B, m) \le \left(\frac{6n}{\gamma}\right)^{2nB} C^B.$$
Taking $\delta/2^B$ in place of $\delta$, using the covering number bound now obtained, and noting that $R(g^S) = \mathrm{er}_P(h_S)$ and $R_{\gamma, \mathbf{z}}(g^S) = E_{\mathbf{z}}^{2\gamma}(h_S)$, Theorem 2.6 shows that, with probability at least $1 - \delta/2^B$, for all $\gamma \in (0,1)$, if $h_S \in H_B$, then
$$\mathrm{er}_P(h_S) \le E_{\mathbf{z}}^{2\gamma}(h_S) + \sqrt{\frac{2}{m}\left(2nB \ln\left(\frac{6n}{\gamma}\right) + 2B \ln C + \ln\left(\frac{2}{\delta}\right)\right)} + \frac{1}{m}.$$
We have used $B \ln 2 \le B \ln C$. So, with probability at least $1 - \delta$, for all $B$, this bound holds, completing the proof.

Acknowledgements

This work was supported in part by the IST Programme of the European Community, under the PASCAL2 Network of Excellence, IST-2007-216886. Part of the work was carried out while Martin Anthony was a visitor at Rutgers University Center for Operations Research.

References

[1] N. Alon, S. Ben-David, N. Cesa-Bianchi, and D. Haussler. Scale-sensitive dimensions, uniform convergence, and learnability. Journal of the ACM, 44(4):615-631, 1997.

[2] M. Anthony and P. L. Bartlett. Function learning from interpolation. Combinatorics, Probability and Computing, 9:213-225, 2000.

[3] M. Anthony and P. L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999.

[4] M. Anthony and N. Biggs. Computational Learning Theory. Cambridge Tracts in Theoretical Computer Science 30. Cambridge University Press, 1992. Reprinted 1997.

[5] M. Anthony and J. Ratsaby. The performance of a new hybrid classifier based on boxes and nearest neighbors. In International Symposium on Artificial Intelligence and Mathematics (January 2012). Also RUTCOR Research Report RRR 17-2011, Rutgers University, 2011.

[6] M. Anthony and J. Ratsaby. Robust cutpoints in the logical analysis of numerical data. Discrete Applied Mathematics, 160:355-364, 2012.

[7] P. L. Bartlett. The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network. IEEE Transactions on Information Theory, 44(2):525-536, 1998.

[8] Anselm Blumer, A. Ehrenfeucht, David Haussler, and Manfred K. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM, 36(4):929-965, 1989.

[9] Endre Boros, Peter L. Hammer, Toshihide Ibaraki, and Alexander Kogan. Logical analysis of numerical data. Mathematical Programming, 79:163-190, 1997.

[10] Endre Boros, Peter L. Hammer, Toshihide Ibaraki, Alexander Kogan, Eddy Mayoraz, and Ilya Muchnik. An implementation of logical analysis of data. IEEE Transactions on Knowledge and Data Engineering, 12(2):292-306, 2000.

[11] N. H. Bshouty, P. W. Goldberg, S. A. Goldman, and H. D. Mathias. Exact learning of discretized geometric concepts. SIAM Journal on Computing, 28(2):674-699, 1998.

[12] R. M. Dudley. Central limit theorems for empirical measures. Annals of Probability, 6(6):899-929, 1978.

[13] Giovanni Felici, Bruno Simeone, and Vincenzo Spinelli. Classification techniques and error control in logic mining. In Data Mining 2010, volume 8 of Annals of Information Systems, pages 99-119, 2010.

[14] Yann Guermeur. VC theory of large margin multi-category classifiers. Journal of Machine Learning Research, 8:2551-2594, 2007.

[15] P. L. Hammer, Y. Liu, S. Szedmák, and B. Simeone. Saturated systems of homogeneous boxes and the logical analysis of numerical data. Discrete Applied Mathematics, 144(1-2):103-109, 2004.

[16] D. Haussler. Decision theoretic generalizations of the PAC model for neural net and other learning applications. Information and Computation, 100(1):78-150, September 1992.

[17] David Pollard. Convergence of Stochastic Processes. Springer-Verlag, 1984.

[18] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.

[19] Vladimir N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, New York, 1995.

[20] Vladimir N. Vapnik and A. Y. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16(2):264-280, 1971.