Understanding the Yarowsky Algorithm
Steven Abney
August, 2003

1 Introduction

Bootstrapping, or semi-supervised learning, has become an important topic in computational linguistics. For many language-processing tasks, there is an abundance of unlabeled data, but labeled data is lacking and too expensive to create in large quantities, making bootstrapping techniques desirable.

The Yarowsky algorithm [5] was one of the first bootstrapping algorithms to become widely known in computational linguistics. An alternative algorithm, co-training [2], has subsequently become more popular, perhaps in part because it has proven amenable to theoretical analysis [4], in contrast to the Yarowsky algorithm, which is as yet mathematically poorly understood. The current paper aims to rectify that lack, increasing the attractiveness of the Yarowsky algorithm as an alternative to co-training. The Yarowsky algorithm does have the advantage of placing less restriction on the data sets it can be applied to. Co-training requires data attributes to be separable into two views that are conditionally independent given the target label; the Yarowsky algorithm makes no such assumption about its data.

In previous work, I did propose an assumption about the data called precision independence under which the Yarowsky algorithm could be shown effective [1]. That assumption is ultimately unsatisfactory, however, not only because it restricts the data sets on which the algorithm can be shown effective, but for additional internal reasons. A detailed discussion would take us too far afield here, but suffice it to say that precision independence is a property that it would be preferable not to assume, but rather to derive from more basic properties of a data set; and that closer empirical study shows that precision independence fails to be satisfied in some data sets on which the Yarowsky algorithm is effective.

This paper proposes a different approach. Instead of making assumptions about the data, it proposes viewing the Yarowsky algorithm as optimizing an objective function. We will show that several variants of the algorithm (though not the algorithm in precisely its original form) optimize either likelihood or a second, related objective function. To the extent that these variants capture the essence of the original algorithm, we have a better formal understanding of its effectiveness. Even if the variants are deemed to depart substantially from the original algorithm, we have at least obtained a family of new bootstrapping algorithms that are mathematically understood.

We consider two objective functions and three families of Yarowsky algorithm variants. The first objective function, H, is the negative of log likelihood; to minimize H is to maximize likelihood. The second objective function, K, is an upper bound on H that can in principle be reduced to zero, squeezing H to zero as well.

The first Yarowsky variant, called Y-1/DL-EM, combines the outer loop of the Yarowsky algorithm with a different inner loop based on the Expectation-Maximization (EM) algorithm. It is shown to decrease H in each iteration. The second variant, Y-1/DL-1, uses the same outer loop, but an inner loop that is closer to the original Yarowsky inner loop. It is shown to decrease K in each iteration. The third variant, YS, differs from the first two in that it does sequential update (adding a single rule in each iteration), rather than parallel update (updating all rules in each iteration). It optimizes K. Within each variant, there are subvariants that differ with respect to the update rule that is used to compute rule weights, yielding a total of seven different algorithms that optimize either H or K.

2 The Generic Yarowsky Algorithm

2.1 The Original Algorithm Y-0

The original Yarowsky algorithm, which we call Y-0, is given in table 1. It is an iterative algorithm. One begins with a seed set Λ_0 of labeled examples, and a set V_0 of unlabeled examples. At each iteration, a classifier is constructed from the labeled examples; then the classifier is applied to the unlabeled examples to create a new labeled set.

Formally, we assume a set X of examples and a series of partial functions Y^(t) assigning labels to examples; t represents the iteration number. An unlabeled example x is one for which Y^(t)(x) = ⊥. The sets of labeled examples Λ_t and unlabeled examples V_t are defined as follows:

    Λ_t ≡ {x | Y^(t)(x) ≠ ⊥}
    V_t ≡ {x | Y^(t)(x) = ⊥}

It will also be useful to have a notation for the set of examples with label j:

    Y_j^(t) ≡ {x | Y^(t)(x) = j}

Note that Λ_t is the disjoint union of the sets Y_j^(t).

In each iteration, one uses a supervised learning algorithm to train a classifier on the labeled examples. Let us call this algorithm the base learning algorithm; it is a function from (X, Y^(t)) to a classifier π drawn from a space of classifiers Π. It is assumed that the classifier makes confidence-weighted predictions. That is, the classifier defines a scoring function π(x, j), and the predicted label for example x is

    ŷ ≡ arg max_j π(x, j)    (1)

    (1) Given: X, Y^(0)
    (2) For t ∈ {0, 1, ...}:
        (2.1) Train classifier on (Λ_t, Y^(t)), where Λ_t = {x | Y^(t)(x) ≠ ⊥};
              the result is π^(t+1)
        (2.2) For each example x:
            (2.2.1) Set ŷ = arg max_j π_x^(t+1)(j)
            (2.2.2) Set Y^(t+1)(x) = ŷ if π_x^(t+1)(ŷ) > ζ, and ⊥ otherwise
        (2.3) If Y^(t+1) = Y^(t), stop

    Table 1: The generic Yarowsky algorithm (Y-0)

Ties are broken arbitrarily. Technically, we assume a fixed order over labels, and define the maximization to return the first label in the ordering, in case of a tie. It will be convenient to assume that the scoring function is nonnegative and bounded, in which case we can normalize it to make π(x, j) a conditional distribution over labels j for a given example x. Henceforward, we write π_x(j) instead of π(x, j), understanding π_x to be a probability distribution over labels. We call this distribution the prediction distribution of the classifier on example x.

To complete an iteration of the Yarowsky algorithm, one recomputes labels for examples. Specifically, the label ŷ is assigned to example x if the score π_x(ŷ) exceeds a threshold ζ, called the labeling threshold. The new labeled set Λ_{t+1} contains all examples for which π_x(ŷ) > ζ. The algorithm continues until convergence. The particular base learning algorithm that Yarowsky uses is deterministic, in the sense that the classifier induced is a deterministic function of the labeled data. Hence, the algorithm is known to have converged whenever the labeling remains unchanged.

There is one point on which the text of the original article is somewhat ambiguous. It is not clear whether the labeling in step (2.2) actually applies (a) to all examples, (b) to all originally unlabeled examples, or (c) only to examples that are unlabeled at the beginning of the current iteration. The text is never explicit, and different passages suggest different interpretations. My best guess is that (a) is the intended interpretation, and I have defined Y-0 accordingly.

Note that the algorithm as stated leaves the base learning algorithm unspecified. We can distinguish between the generic Yarowsky algorithm Y-0, for which the base learning algorithm is an open parameter, and the specific Yarowsky algorithm, which includes a specification of the base learner. Informally, we call the generic algorithm the outer loop and the base learner the inner loop of the specific Yarowsky algorithm. The base learner that Yarowsky assumes is a decision list induction algorithm. We postpone discussion of it until section 3.
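To make the control flow of table 1 concrete, here is a minimal Python sketch of the Y-0 outer loop. It is an illustration, not Yarowsky's implementation: the base learner train_classifier and the representation of examples are assumed placeholders, and pi(x) is any scoring function normalized to a distribution over labels.

    def yarowsky_y0(X, Y0, train_classifier, zeta):
        """Generic Yarowsky algorithm Y-0 (table 1).
        X: list of examples; Y0: dict example -> seed label;
        train_classifier: base learner, returns a function pi with
        pi(x) = dict label -> probability; zeta: labeling threshold."""
        Y = dict(Y0)
        while True:
            pi = train_classifier([(x, Y[x]) for x in X if x in Y])
            Y_new = {}
            for x in X:
                dist = pi(x)                    # prediction distribution pi_x
                yhat = max(dist, key=dist.get)  # ties: first label in a fixed order
                if dist[yhat] > zeta:           # step (2.2.2)
                    Y_new[x] = yhat
            if Y_new == Y:                      # step (2.3): labeling unchanged
                return Y
            Y = Y_new

Because the base learner is deterministic, an unchanged labeling implies an unchanged classifier, which is why the equality test suffices as a convergence check.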

2.2 An Objective Function

Machine learning algorithms are typically designed to optimize some objective function that represents a formal measure of performance. The maximum likelihood criterion is the most commonly used objective function. Suppose we have a set of examples Λ with labels Y(x) for x ∈ Λ, and a parametric family of models π_θ such that π(j | x; θ) represents the probability of assigning label j to example x, according to the model. The likelihood of θ is the probability of the full data set according to the model, viewed as a function of θ, and the maximum likelihood criterion instructs us to choose the parameter settings θ̂ that maximize likelihood, or equivalently, log likelihood:

    l(θ) = log Π_{x∈Λ} π(Y(x) | x; θ)
         = Σ_{x∈Λ} log π(Y(x) | x; θ)
         = Σ_{x∈Λ} Σ_j [[j = Y(x)]] log π(j | x; θ)

(The notation [[Φ]] represents the truth value of the proposition Φ; it is 1 if Φ is true and 0 otherwise.) Let us define:

    φ_x(j) ≡ [[j = Y(x)]]

Note that φ_x satisfies the formal requirements of a probability distribution over labels j: specifically, it is a point distribution with all its mass concentrated on Y(x). We call it the labeling distribution. Now we can write:

    l(θ) = Σ_{x∈Λ} Σ_j φ_x(j) log π(j | x; θ)
         = −Σ_{x∈Λ} H(φ_x ‖ π_x)    (2)

In (2) we have written π_x for the distribution π(· | x; θ), leaving the dependence on θ implicit. We have also used the nonstandard notation H(p ‖ q) for what is sometimes called cross entropy, H(p ‖ q) ≡ −Σ_j p(j) log q(j). It is easy to verify that:

    H(p ‖ q) = H(p) + D(p ‖ q)

where H(p) is the entropy of p and D is Kullback-Leibler divergence. Note that, when p is a point distribution, H(p) = 0 and hence H(p ‖ q) = D(p ‖ q). In particular:

    l(θ) = −Σ_{x∈Λ} D(φ_x ‖ π_x)    (3)

Thus when, as here, φ_x is a point distribution, we can restate the maximum likelihood criterion as instructing us to choose the model that minimizes the total divergence between the empirical labeling distributions φ_x and the model's prediction distributions π_x.

To extend l(θ) to unlabeled examples, we need only observe that unlabeled examples are ones about whose labels the data provide no information. Accordingly, we revise the definition of φ_x to treat unlabeled examples as ones whose labeling distribution is the maximally uncertain distribution, which is to say, the uniform distribution:

    φ_x(j) = [[j = Y(x)]] for x ∈ Λ
    φ_x(j) = 1/L          for x ∈ V

where L is the number of labels. Equivalently:

    φ_x(j) = [[x ∈ Y_j]] + (1/L) [[x ∈ V]]

With this revision, however, the expressions (2) and (3) are no longer equivalent; we must use (2). Since H(φ_x ‖ π_x) = H(φ_x) + D(φ_x ‖ π_x), and H(φ_x) is minimized when x is labeled, minimizing H(φ_x ‖ π_x) forces one to label unlabeled examples. On labeled examples, H(φ_x ‖ π_x) = D(φ_x ‖ π_x), and D(φ_x ‖ π_x) is minimized when the labels of examples agree with the predictions of the model. In short, we adopt as objective function

    H ≡ Σ_x H(φ_x ‖ π_x) = −l(φ, θ)

We seek to minimize H.

2.3 The Modified Algorithm Y-1

We can show that a modified version of the Yarowsky algorithm finds a local minimum of H. Two modifications are necessary. First, the labeling function Y is recomputed in each iteration as before, but with the constraint that an example once labeled stays labeled. The label may change, but a labeled example cannot become unlabeled again. Second, we fix the threshold ζ at 1/L. As a result, the only examples that remain unlabeled after the labeling step are those for which π_x is the uniform distribution. The problem with an arbitrary threshold is that it prevents the algorithm from converging to a minimum of H. A threshold that gradually decreases to 1/L would also address the problem, but would complicate the analysis.

The modified algorithm, Y-1, is given in table 2. To obtain a proof, it will be necessary to make an assumption about the supervised classifier π^(t+1) induced by the base learner in step (2.1). A natural assumption is that the base learner chooses π^(t+1) so as to minimize Σ_{x∈Λ_t} D(φ_x^(t) ‖ π_x^(t+1)).
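As an illustration of the objective (not code from the paper), the following computes H = Σ_x H(φ_x ‖ π_x) with φ_x uniform on unlabeled examples; it assumes pi(x) returns a strictly positive distribution over labels.

    import math

    def objective_H(X, Y, pi, labels):
        """H = sum_x H(phi_x || pi_x); phi_x is a point distribution for
        labeled x and the uniform distribution for unlabeled x."""
        L = len(labels)
        total = 0.0
        for x in X:
            dist = pi(x)
            if x in Y:      # labeled: H(phi||pi) = D(phi||pi) = -log pi_x(Y(x))
                total += -math.log(dist[Y[x]])
            else:           # unlabeled: H(phi||pi) = -(1/L) sum_j log pi_x(j)
                total += -sum(math.log(dist[j]) for j in labels) / L
        return total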

    (1) Given: X, Y^(0)
    (2) For t ∈ {0, 1, ...}:
        (2.1) Train classifier on (Λ_t, Y^(t)); the result is π^(t+1)
        (2.2) For each example x:
            (2.2.1) Set ŷ = arg max_j π_x^(t+1)(j)
            (2.2.2) Set Y^(t+1)(x) = ŷ if x ∈ Λ_t or π_x^(t+1)(ŷ) > 1/L,
                    and ⊥ otherwise
        (2.3) If Y^(t+1) = Y^(t), stop

    Table 2: The modified generic Yarowsky algorithm (Y-1)

A weaker assumption will suffice, however. We assume that the base learner reduces divergence, if possible. That is, we assume:

    ΔD_Λ ≡ Σ_{x∈Λ_t} D(φ_x^(t) ‖ π_x^(t+1)) − Σ_{x∈Λ_t} D(φ_x^(t) ‖ π_x^(t)) ≤ 0    (4)

with equality only if there is no classifier π^(t+1) ∈ Π that makes ΔD_Λ < 0. Note that any learning algorithm that minimizes Σ_{x∈Λ_t} D(φ_x^(t) ‖ π_x^(t+1)) satisfies the weaker assumption (4), inasmuch as the option of setting π_x^(t+1) = π_x^(t) is always available. We also consider a somewhat stronger assumption, namely, that the base learner reduces divergence over all examples, not just labeled examples:

    ΔD_X ≡ Σ_{x∈X} D(φ_x^(t) ‖ π_x^(t+1)) − Σ_{x∈X} D(φ_x^(t) ‖ π_x^(t)) ≤ 0    (5)

We can now state the main theorem of this section.

Theorem 1. If the base learning algorithm satisfies (4) or (5), algorithm Y-1 decreases H at each iteration until it reaches a critical point of H.

There is a lemma that we require in order to prove the theorem.

Lemma 1. For all distributions p,

    H(p) ≥ −log max_j p(j)

with equality iff p is the uniform distribution.

Proof. By definition, for all j:

    p(j) ≤ max_k p(k)
    log p(j) ≤ log max_k p(k)

Since this is true for all j, it remains true if we take the expectation with respect to p:

    Σ_j p(j) log p(j) ≤ Σ_j p(j) log max_k p(k)
    −H(p) ≤ log max_k p(k)
    H(p) ≥ −log max_k p(k)

We have equality only if p(j) = max_k p(k) for all j, that is, only if p is the uniform distribution. ∎

We now prove the theorem.

Proof of theorem 1. The algorithm produces a sequence of labelings φ^(0), φ^(1), ... and a sequence of classifiers π^(1), π^(2), .... The classifier π^(t+1) is trained on φ^(t), and the labeling φ^(t+1) is created using π^(t+1). Recall that:

    H = Σ_x [H(φ_x) + D(φ_x ‖ π_x)]

In the training step (2.1), we hold φ fixed and change π, and in the labeling step (2.2), we hold π fixed and change φ. We will show that the training step minimizes H as a function of π, and the labeling step minimizes H as a function of φ except on examples where it is at a critical point of H. Hence, H is nonincreasing in each iteration of the algorithm, and is strictly decreasing unless (φ^(t), π^(t)) is a critical point of H.

Let us consider the labeling step first. In this step, π is held constant, but φ (possibly) changes, and we have:

    ΔH = Σ_x ΔH(x)

where

    ΔH(x) ≡ H(φ_x^(t+1) ‖ π_x^(t+1)) − H(φ_x^(t) ‖ π_x^(t+1))

We can show that ΔH is nonpositive if we can show that ΔH(x) is nonpositive for all x. We can guarantee that ΔH(x) ≤ 0 if φ_x^(t+1) minimizes H(p ‖ π_x^(t+1)), viewed as a function of p. By definition:

    H(p ‖ π_x^(t+1)) = −Σ_j p(j) log π_x^(t+1)(j)

We wish to find the distribution p that minimizes H(p ‖ π_x^(t+1)). Clearly, we accomplish that by placing all the mass of p on the label j that minimizes −log π_x^(t+1)(j). If there is more than one minimizer, H(p ‖ π_x^(t+1)) is minimized by any distribution p that distributes all its mass among the minimizers of −log π_x^(t+1)(j).

Observe further that

    arg min_j [−log π_x^(t+1)(j)] = arg max_j π_x^(t+1)(j) = ŷ

That is, we minimize H(p ‖ π_x^(t+1)) by setting p(j) = [[j = ŷ]], which is to say, by labeling x as predicted by π^(t+1). That is how algorithm Y-1 defines φ_x^(t+1) for all examples x ∈ Λ_{t+1}.

Note that φ_x^(t+1) does not minimize H(p ‖ π_x^(t+1)) for examples x ∈ V_{t+1}, that is, for examples x that remain unlabeled at t+1. However, in algorithm Y-1, any example that is unlabeled at t+1 is necessarily also unlabeled at t, so for any such example, ΔH(x) = 0. Hence, if any label changes in the labeling step, H decreases, and if no label changes, H remains unchanged; in either case, H does not increase.

We can show further that even for examples x ∈ V_{t+1}, the labeling distribution φ_x^(t+1) assigned by Y-1 represents a critical point of H. For any example x ∈ V_{t+1}, the prediction distribution π_x^(t+1) is the uniform distribution (otherwise Y-1 would have labeled x). Hence the divergence between φ_x^(t+1) and π_x^(t+1) is zero, and thus at a minimum. It would be possible to decrease H(φ_x^(t+1) ‖ π_x^(t+1)) by decreasing H(φ_x^(t+1)) at the cost of an increase in D(φ_x^(t+1) ‖ π_x^(t+1)), but all directions of motion (all ways of selecting labels to receive increased probability mass) are equally good. That is to say, the gradient of H is zero; we are at a critical point. Essentially, we have reached a saddle point. We have minimized H with respect to φ_x along those dimensions with a non-zero gradient. Along the remaining dimensions, we are actually at a local maximum, but without a gradient to choose a direction of descent.

Now let us consider the training step (2.1). In this step, φ is held constant, so the change in H is equal to the change in D (recall that H(φ ‖ π) = H(φ) + D(φ ‖ π)). By the hypothesis of the theorem, there are two cases: either the base learner satisfies (4) or (5). If it satisfies (5), the base learner minimizes D as a function of π, hence it follows immediately that it minimizes H as a function of π.

Suppose instead that the base learner satisfies (4). We can express H as

    H = Σ_x H(φ_x) + Σ_{x∈Λ_t} D(φ_x ‖ π_x) + Σ_{x∈V_t} D(φ_x ‖ π_x)

In the training step, the first term remains constant. The second term decreases, by hypothesis. But the third term may increase. However, we can show that any increase in the third term is more than offset in the labeling step. Consider an arbitrary example in V_t. Since it is unlabeled at time t, we know that φ_x^(t) is the uniform distribution u:

    u(j) = 1/L

Moreover, π_x^(t) must also be the uniform distribution; otherwise example x would have been labeled in a previous iteration. Therefore the value of H(x) ≡ H(φ_x ‖ π_x) at the beginning of iteration t is H_0:

    H_0 = −Σ_j φ_x^(t)(j) log π_x^(t)(j) = −Σ_j u(j) log u(j) = H(u)

After the training step, the value is H_1:

    H_1 = −Σ_j φ_x^(t)(j) log π_x^(t+1)(j)

If π_x remains unchanged in the training step, then the new distribution π_x^(t+1), like the old one, is the uniform distribution, and the example remains unlabeled. Hence there is no change in H(x), and in particular, H(x) is non-increasing, as desired. On the other hand, if π_x does change, then the new distribution π_x^(t+1) is nonuniform, and the example is labeled in the labeling step. Hence the value of H(x) at the end of the iteration, after the labeling step, is H_2:

    H_2 = −Σ_j φ_x^(t+1)(j) log π_x^(t+1)(j) = −log π_x^(t+1)(ŷ)

By lemma 1, H_2 < H(u); hence H_2 < H_0. As we observed above, we may have H_1 > H_0, but if we consider the change overall, we find that the increase in the training step is more than offset in the labeling step:

    ΔH(x) = (H_2 − H_1) + (H_1 − H_0) < 0
∎

3 The Specific Yarowsky Algorithm

3.1 The Original Decision List Induction Algorithm DL-0

When one speaks of the Yarowsky algorithm, one often has in mind not just the generic algorithm Y-0 (or Y-1), but an algorithm whose specification includes the particular choice of base learning algorithm made by Yarowsky. Specifically, Yarowsky's base learner constructs a decision list, that is, a list of rules of the form f → j, where f is a feature and j is a label, with score θ_fj. A rule f → j matches example x if x possesses the feature f. The label predicted for a given example x is the label of the highest-scoring rule that matches x.

Yarowsky uses smoothed precision for rule scoring. As the name suggests, smoothed precision q̃_f(j) is a smoothed version of (raw) precision q_f(j), which is the probability that rule f → j is correct given that it matches:

    q_f(j) ≡ |X_f Y_j| / |X_f Λ|  if |X_f Λ| > 0
    q_f(j) ≡ 1/L                  otherwise    (6)

where X_f is the set of examples that possess feature f. Note that the product notation X_f Y_j here represents the intersection of the sets X_f and Y_j. Smoothed precision q̃(j | f; ε) is defined as follows:

    q̃(j | f; ε) ≡ (|X_f Y_j| + ε) / (|X_f Λ| + Lε)    (7)

We also write q̃_f(j) when ε is clear from context. Yarowsky's update rule uses smoothed precision:

    θ_fj = q̃_f(j)

We will also consider raw precision as an alternative: θ_fj = q_f(j). Both raw and smoothed precision have the properties of a conditional probability distribution. Generally, we view θ_fj as a conditional distribution θ_f(j) over labels j for a fixed feature f. We can define:

    π_x(j) ∝ max_{f∈F_x} θ_fj    (8)

where F_x is the set of features of x. (Note that f ∈ F_x if and only if x ∈ X_f.) Since the classifier's prediction for x is defined, in equation (1), to be the label that maximizes π_x(j), definition (8) implies that the classifier's prediction is the label of the highest-scoring rule matching x, as desired. We have written ∝ in (8) rather than = because maximizing θ_fj across f ∈ F_x for each label j will not in general yield a probability distribution over labels, though the scores will be positive and bounded, hence normalizable. Considering only the final predicted label ŷ for a given example x, the normalization will have no effect, inasmuch as all scores θ_fj being compared will be scaled in the same way.

As characterized by Yarowsky, a decision list contains only those rules f → j whose score q̃_f(j) exceeds the labeling threshold ζ. This can be seen purely as an efficiency measure. Including rules whose score falls below the labeling threshold will have no effect on the classifier's predictions, as the threshold will be applied when the classifier is applied to examples. For this reason, we do not prune the list. That is, we represent a decision list as a set of parameters {θ_fj}, one for every possible rule f → j in the cross product of the set of features and the set of labels.

The decision-list induction algorithm used by Yarowsky is summarized in table 3; we call it DL-0. Note that the step labeled (*) is not actually a step of the induction algorithm, but rather specifies how the decision list is used to compute a prediction distribution π_x for a given example x.

    (0) Given: a fixed value for ε > 0
        Initialize C[f, j] = 0, Z[f] = 0 for all f, j
    (1) For each example-label pair (x, j)
        (1.1) For each feature f ∈ F_x, increment C[f, j] and Z[f]
    (2) For each feature f and label j
        (2.1) Set θ_fj = (C[f, j] + ε) / (Z[f] + Lε)
    (*) Define π_x(j) ∝ max_{f∈F_x} θ_fj

    Table 3: The decision list induction algorithm DL-0
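A sketch of DL-0's parameter estimation and max-combination (equations (7) and (8)), under the assumption that each example is represented by its set of features; all names are illustrative rather than prescribed by the paper.

    from collections import defaultdict

    def dl0_train(labeled, labels, eps):
        """DL-0 (table 3): score each rule f -> j by smoothed precision (7).
        labeled: iterable of (feature_set, label) pairs."""
        L = len(labels)
        C = defaultdict(lambda: defaultdict(int))   # C[f][j] = |X_f Y_j|
        Z = defaultdict(int)                        # Z[f]    = |X_f Lambda|
        for fx, j in labeled:
            for f in fx:
                C[f][j] += 1
                Z[f] += 1
        return {f: {j: (C[f][j] + eps) / (Z[f] + L * eps) for j in labels}
                for f in C}

    def predict_max(theta, fx, labels, L):
        """Prediction distribution (8): proportional to the best rule score
        per label; rules for unseen features default to the uniform 1/L."""
        scores = {j: max((theta.get(f, {}).get(j, 1.0 / L) for f in fx),
                         default=1.0 / L)
                  for j in labels}
        z = sum(scores.values())
        return {j: s / z for j, s in scores.items()}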

Unfortunately, we cannot show that the algorithm DL-0 satisfies equation (4). A large part of the problem is the definition of π given in (8). As mentioned just above, the parameters θ_fj can be thought of as defining a prediction distribution θ_f(j) over labels for each feature f. Hence the equation (8) specifies how the prediction distributions θ_f for the features of example x are to be combined to yield a prediction distribution π_x for x. Analysis of the algorithm becomes more manageable if, instead of combining distributions by maximizing θ_fj across f ∈ F_x as in equation (8), we take a mixture of the θ_f:

    π_x(j) = (1/m) Σ_{f∈F_x} θ_fj    (9)

Here m = |F_x| is the number of features that x possesses; for the sake of simplicity, we assume that all examples have the same number of features. Since, for each f, θ_f is a probability distribution, any convex combination of the distributions θ_f is also a distribution, including in particular π_x.

The two definitions for π_x(j), (8) and (9), will often have the same mode ŷ, but that is guaranteed only in the rather severely restricted case of two features and two labels. Under definition (8), the prediction is determined entirely by the strongest θ_f, whereas definition (9) permits a block of weaker θ_f to outvote the strongest one. Yarowsky explicitly wished to avoid the possibility of such interactions. Nonetheless, we henceforth assume definition (9).

3.2 The Decision List Induction Algorithm DL-EM

Even adopting (9) in place of (8), we cannot show that DL-0 satisfies (4). We will be able to prove something weaker about some close variants of DL-0, but before we do so, we present an alternative decision list induction algorithm that does satisfy (4). The alternative algorithm, DL-EM, is a special case of the Expectation-Maximization (EM) algorithm. We consider two versions of the algorithm: DL-EM-Λ and DL-EM-X. DL-EM-Λ is somewhat simpler, and we describe it first.

Unlike DL-0, which constructs a classifier from scratch, DL-EM seeks to improve on a previous classifier. (In the context of the Yarowsky algorithm, the previous classifier is the one from the previous iteration.) We write θ_fj^old for the parameters and π_x^old for the prediction distributions of the previous classifier. Conceptually, DL-EM-Λ considers the label j assigned to an example x to be generated by choosing a feature f ∈ F_x and then assigning the label j according to the feature's prediction distribution θ_f(j). The choice of feature f is a hidden variable.

The degree to which an example labeled j is imputed to feature f is determined by the old distribution:

    π^old(f | x, j) = [[f ∈ F_x]] θ_fj^old / Σ_g [[g ∈ F_x]] θ_gj^old
                    = [[f ∈ F_x]] θ_fj^old / (m π_x^old(j))

One can think of π^old(f | x, j) either as the posterior probability that feature f was responsible for the label j, or as the portion of the labeled example (x, j) that is imputed to feature f. We also write π_x^old(f | j) as a synonym for π^old(f | x, j). The new estimate θ_fj is obtained by summing imputed occurrences of (f, j) and normalizing across labels:

    θ_fj = Σ_{x∈Y_j} π^old(f | x, j) / Σ_k Σ_{x∈Y_k} π^old(f | x, k)

The algorithm is summarized in table 4.

    (0) Initialize C[f, j] = 0 for all f, j
    (1) For each example x labeled j
        (1.1) Let Z = Σ_{g∈F_x} θ_gj^old
        (1.2) For each f ∈ F_x, increment C[f, j] by (1/Z) θ_fj^old
    (2) For each feature f
        (2.1) Let Z = Σ_j C[f, j]
        (2.2) For each label j, set θ_fj = (1/Z) C[f, j]

    Table 4: The DL-EM-Λ decision list induction algorithm

The second version of the algorithm, DL-EM-X, is summarized in table 5. It is like DL-EM-Λ, except that it uses the update rule:

    θ_fj = [ Σ_{x∈Y_j} π^old(f|x,j) + (1/L) Σ_{x∈V} π^old(f|x,j) ]
           / Σ_k [ Σ_{x∈Y_k} π^old(f|x,k) + (1/L) Σ_{x∈V} π^old(f|x,k) ]    (10)

The update rule (10) includes unlabeled examples as well as labeled examples. Conceptually, it divides each unlabeled example equally among the labels, then divides the resulting fractional labeled example among the example's features.

    (0) Initialize C[f, j] = 0 and U[f, j] = 0 for all f, j
    (1) For each example x labeled j
        (1.1) Let Z = Σ_{g∈F_x} θ_gj^old
        (1.2) For each f ∈ F_x, increment C[f, j] by (1/Z) θ_fj^old
    (2) For each unlabeled example x and each label j
        (2.1) Let Z = Σ_{g∈F_x} θ_gj^old
        (2.2) For each f ∈ F_x, increment U[f, j] by (1/Z) θ_fj^old
    (3) For each feature f
        (3.1) Let Z = Σ_j (C[f, j] + (1/L) U[f, j])
        (3.2) For each label j, set θ_fj = (1/Z)(C[f, j] + (1/L) U[f, j])

    Table 5: The DL-EM-X decision list induction algorithm

We note that both variants of the DL-EM algorithm constitute a single iteration of an EM-like algorithm. A single iteration suffices to prove the following theorem, though multiple iterations would also be effective.

Theorem 2. The classifier produced by the DL-EM-Λ algorithm satisfies equation (4), and the classifier produced by the DL-EM-X algorithm satisfies equation (5).

Combining theorems 1 and 2 yields:

Corollary 1. The Yarowsky algorithm Y-1, using DL-EM-Λ or DL-EM-X as its base learning algorithm, decreases H at each iteration until it reaches a critical point of H.

Proof of theorem 2. Let θ^old represent the parameter values at the beginning of the call to DL-EM, let θ represent a family of free variables that we will optimize, and let π^old and π be the corresponding prediction distributions. The labeling distribution φ is fixed. For any set of examples α, let ΔD_α be the change in Σ_{x∈α} D(φ_x ‖ π_x) resulting from the change in θ. We are obviously particularly interested in the two cases where α is the set of all examples X (for DL-EM-X) and where α is the set of labeled examples Λ (for DL-EM-Λ). In either case, we must show that ΔD_α ≤ 0. Observe that:

    −ΔD_α = Σ_{x∈α} [ D(φ_x ‖ π_x^old) − D(φ_x ‖ π_x) ]
          = Σ_{x∈α} [ H(φ_x ‖ π_x^old) − H(φ_x) − H(φ_x ‖ π_x) + H(φ_x) ]
          = Σ_{x∈α} Σ_j φ_x(j) [ log π_x(j) − log π_x^old(j) ]    (11)

The EM algorithm is based on the fact that divergence is non-negative, and strictly positive if the distributions compared are not identical:

    0 ≤ Σ_{x∈α} Σ_j φ_x(j) D( π_x^old(·|j) ‖ π_x(·|j) )
      = Σ_{x∈α} Σ_j φ_x(j) Σ_{f∈F_x} π_x^old(f|j) log [ π_x^old(f|j) / π_x(f|j) ]
      = Σ_{x∈α} Σ_j φ_x(j) Σ_{f∈F_x} π_x^old(f|j) log [ (π_x(j)/π_x^old(j)) (θ_fj^old/θ_fj) ]

which yields the inequality:

    Σ_{x∈α} Σ_j φ_x(j) [ log π_x(j) − log π_x^old(j) ]
        ≥ Σ_{x∈α} Σ_j φ_x(j) Σ_{f∈F_x} π_x^old(f|j) [ log θ_fj − log θ_fj^old ]

By (11), this can be written as:

    −ΔD_α ≥ Σ_{x∈α} Σ_j φ_x(j) Σ_{f∈F_x} π_x^old(f|j) [ log θ_fj − log θ_fj^old ]    (12)

Since θ_fj^old is constant, by maximizing

    Σ_{x∈α} Σ_j φ_x(j) Σ_{f∈F_x} π_x^old(f|j) log θ_fj    (13)

we maximize a lower bound on −ΔD_α. It is easy to see that ΔD_α is bounded above by 0: we simply set θ_fj = θ_fj^old. Since divergence is zero only if the two distributions are identical, we have strict inequality in (12) unless the best choice for θ is θ^old, in which case no choice of θ makes ΔD_α < 0.

It remains to show that DL-EM computes the parameter set θ that maximizes (13). We wish to maximize (13) under the constraints that the values {θ_fj} for fixed f sum to unity across choices of j. We express the constraints in the form C_f = 0, where

    C_f ≡ 1 − Σ_j θ_fj

We seek a solution to the family of equations:

    ∂/∂θ_fj [ Σ_{x∈α} Σ_k φ_x(k) Σ_{g∈F_x} π_x^old(g|k) log θ_gk ] = λ_f ∂C_f/∂θ_fj    (14)

We derive an expression for the derivative on the left hand side:

    ∂/∂θ_fj [ Σ_{x∈α} Σ_k φ_x(k) Σ_{g∈F_x} π_x^old(g|k) log θ_gk ]
        = Σ_{x∈X_f∩α} φ_x(j) π_x^old(f|j) / θ_fj

Similarly for the right hand side:

    ∂C_f/∂θ_fj = −1

Substituting these into equation (14):

    Σ_{x∈X_f∩α} φ_x(j) π_x^old(f|j) / θ_fj = −λ_f

    θ_fj = −(1/λ_f) Σ_{x∈X_f∩α} φ_x(j) π_x^old(f|j)    (15)

Using the constraint C_f = 0 and solving for λ_f:

    1 + (1/λ_f) Σ_j Σ_{x∈X_f∩α} φ_x(j) π_x^old(f|j) = 0
    λ_f = −Σ_j Σ_{x∈X_f∩α} φ_x(j) π_x^old(f|j)

Substituting back into (15):

    θ_fj = Σ_{x∈X_f∩α} φ_x(j) π_x^old(f|j) / Σ_k Σ_{x∈X_f∩α} φ_x(k) π_x^old(f|k)    (16)

If we consider the case where α is the set of all examples, and expand out φ_x in (16), we obtain:

    θ_fj = (1/Z) [ Σ_{x∈X_f∩Y_j} π_x^old(f|j) + (1/L) Σ_{x∈X_f∩V} π_x^old(f|j) ]

where Z normalizes θ_f. It is not hard to see that this is the update rule that DL-EM-X computes, using the intermediate values:

    C[f, j] = Σ_{x∈X_f∩Y_j} π_x^old(f|j)
    U[f, j] = Σ_{x∈X_f∩V} π_x^old(f|j)

If we consider the case where α is the set of labeled examples, and expand out φ_x in (16), we obtain:

    θ_fj = (1/Z) Σ_{x∈X_f∩Y_j} π_x^old(f|j)

This is the update rule that DL-EM-Λ computes. Thus we see that DL-EM-X reduces D_X, and DL-EM-Λ reduces D_Λ. ∎

We note in closing that DL-EM-X can be simplified when used with algorithm Y-1, inasmuch as it is known that θ_fj^old = 1/L for all (f, j) where f ∈ F_x for some x ∈ V. Then the expression for U[f, j] simplifies as follows:

    Σ_{x∈X_f∩V} π_x^old(f|j) = Σ_{x∈X_f∩V} (1/L) / Σ_{g∈F_x} (1/L) = |X_f ∩ V| / m

The dependence on j disappears, so we can replace U[f, j] with U[f] in algorithm DL-EM-X, delete step (2.1), and replace step (2.2) with the statement "For each f ∈ F_x, increment U[f] by 1/m."
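The fractional-count bookkeeping of DL-EM-Λ (table 4) can be sketched as follows. This is an illustration under the assumption that θ^old assigns a positive score to every rule, not a prescribed implementation.

    from collections import defaultdict

    def dl_em_lambda(labeled, theta_old, labels):
        """One DL-EM-Lambda iteration (table 4): impute each labeled example
        (x, j) to its features in proportion to theta_old, then normalize
        the fractional counts across labels."""
        C = defaultdict(lambda: defaultdict(float))
        for fx, j in labeled:                       # x labeled j, features fx
            Z = sum(theta_old[g][j] for g in fx)    # step (1.1)
            for f in fx:
                C[f][j] += theta_old[f][j] / Z      # step (1.2)
        theta = {}
        for f, counts in C.items():                 # step (2): normalize
            Z = sum(counts.values())
            theta[f] = {j: counts.get(j, 0.0) / Z for j in labels}
        return theta

DL-EM-X differs only in accumulating a second table of imputed counts over unlabeled examples, weighted by 1/L, as in table 5.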

3.3 The Objective Function K

The decision list induction algorithm DL-EM is substantially different from the one that Yarowsky uses, DL-0. We are not able to show that DL-0 optimizes the objective function H. However, in this section, we define a different (but related) objective function K, and in the next section, we show that modified versions of DL-0 optimize it.

The objective function K is an upper bound on H. Observe that:

    H = −Σ_x Σ_j φ_x(j) log [ (1/m) Σ_{g∈F_x} θ_gj ]
      ≤ −Σ_x Σ_j φ_x(j) (1/m) Σ_{g∈F_x} log θ_gj
      = (1/m) Σ_x Σ_{g∈F_x} H(φ_x ‖ θ_g)

(the inequality is Jensen's, applied to the concave function log). We define:

    K ≡ Σ_x Σ_{g∈F_x} H(φ_x ‖ θ_g)    (17)

By minimizing K, we minimize an upper bound on H. Moreover, it is in principle possible to reduce K to zero. Since H(φ_x ‖ θ_g) = H(φ_x) + D(φ_x ‖ θ_g), K is reduced to zero if all examples are labeled, each feature concentrates its prediction distribution in a single label, and the label of every example agrees with the prediction of every feature it possesses. In this limiting case, any minimizer of K is also a minimizer of H.

We hasten to add a proviso: it is not possible to reduce K to zero for all data sets. The following provides a necessary and sufficient condition. Consider an undirected bipartite graph G whose nodes are examples and features. There is an edge between example x and feature f just in case f is a feature of x. Define examples x_1 and x_2 to be neighbors if they both belong to the same connected component of G. K is reducible to zero if and only if all pairs of neighbors in Λ_0 are labeled consistently.

3.4 Algorithm DL-1

We consider two variants of DL-0, called DL-1-R and DL-1-VS. They differ from DL-0 in two ways. First, the DL-1 algorithms assume the mean definition of π_x given in equation (9) rather than the max definition of equation (8). This is not actually a difference in the induction algorithm itself, but in the way the decision list is used to construct a prediction distribution π_x. Second, the DL-1 algorithms use update rules that differ from the smoothed precision of DL-0. DL-1-R (table 6) uses raw precision instead of smoothed precision. DL-1-VS (table 7) uses smoothed precision, but unlike DL-0, DL-1-VS does not use a fixed smoothing constant ε; rather, ε varies from feature to feature.
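The two DL-1 updates (tables 6 and 7 below) differ only in their smoothing. A sketch, with the counts passed as arguments; the per-feature ε = U[f]/L of DL-1-VS realizes the mixture form of lemma 2 below.

    def dl1_theta(C_fj, Z_f, U_f, L, variant):
        """Rule score theta_fj. C_fj = |X_f Y_j|, Z_f = |X_f Lambda|,
        U_f = |X_f V|; variant 'R' (table 6) or 'VS' (table 7)."""
        if variant == 'R':
            return C_fj / Z_f              # raw precision q_f(j)
        eps = U_f / L                      # step (3.1) of DL-1-VS
        # equals p(Lambda|f) q_f(j) + p(V|f) (1/L), as in lemma 2
        return (C_fj + eps) / (Z_f + U_f)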

    (0) Initialize C[f, j] = 0, Z[f] = 0 for all f, j
    (1) For each example-label pair (x, j)
        (1.1) For each feature f ∈ F_x, increment C[f, j] and Z[f]
    (2) For each feature f and label j
        (2.1) Set θ_fj = C[f, j] / Z[f]
    (*) Define π_x(j) = (1/m) Σ_{f∈F_x} θ_fj

    Table 6: The decision list induction algorithm DL-1-R

    (0) Initialize C[f, j] = 0, Z[f] = 0, U[f] = 0 for all f, j
    (1) For each example-label pair (x, j)
        (1.1) For each feature f ∈ F_x, increment C[f, j] and Z[f]
    (2) For each unlabeled example x
        (2.1) For each feature f ∈ F_x, increment U[f]
    (3) For each feature f and label j
        (3.1) Set ε = U[f]/L
        (3.2) Set θ_fj = (C[f, j] + ε) / (Z[f] + U[f])
    (*) Define π_x(j) = (1/m) Σ_{f∈F_x} θ_fj

    Table 7: The decision list induction algorithm DL-1-VS

The value of ε used by DL-1-VS can be expressed in another way that will prove useful. Let us define:

    p(Λ | f) ≡ |X_f ∩ Λ| / |X_f|
    p(V | f) ≡ |X_f ∩ V| / |X_f|

Lemma 2. The parameter values {θ_fj} computed by DL-1-VS can be expressed as:

    θ_fj = p(Λ|f) q_f(j) + p(V|f) u(j)    (18)

Proof. First we show that smoothed precision can be expressed as a convex combination of raw precision (6) and the uniform distribution. Define δ ≡ ε / |X_f ∩ Λ|. Then:

    q̃_f(j) = (|X_f Y_j| + ε) / (|X_f Λ| + Lε)
           = (|X_f Y_j| / |X_f Λ| + δ) / (1 + Lδ)
           = (1/(1+Lδ)) q_f(j) + (Lδ/(1+Lδ)) (1/L)

           = (1/(1+Lδ)) q_f(j) + (Lδ/(1+Lδ)) u(j)    (19)

Now we show that the mixing coefficient 1/(1+Lδ) of (19) is the same as the mixing coefficient p(Λ|f) of the lemma, when ε = |X_f ∩ V|/L as in step (3.1) of DL-1-VS:

    ε = |X_f ∩ V| / L = (|X_f ∩ Λ| / L) · p(V|f)/p(Λ|f)
    Lδ = Lε / |X_f ∩ Λ| = p(V|f)/p(Λ|f)
    1/(1+Lδ) = p(Λ|f)
∎

The main theorem of this section (corollary 2) is that the specific Yarowsky algorithm Y-1/DL-1 decreases K in each iteration until it reaches a critical point. It is stated as a corollary of two theorems. The first (theorem 3) shows that DL-1 minimizes K as a function of θ, holding φ constant, and the second (theorem 4) shows that Y-1 decreases K as a function of φ, holding θ constant. More precisely, DL-1-R minimizes K over labeled examples Λ, and DL-1-VS minimizes K over all examples X. Either is sufficient for Y-1 to be effective.

Theorem 3. DL-1 minimizes K as a function of θ holding φ constant. Specifically, DL-1-R minimizes K over labeled examples Λ, and DL-1-VS minimizes K over all examples X.

Proof. We wish to minimize K as a function of θ under the constraints:

    C_f ≡ 1 − Σ_j θ_fj = 0

for each f. As before, to minimize K under the constraints C_f = 0, we express the gradient of K as a linear combination of the gradients of the constraints, and solve the resulting system of equations:

    ∂K/∂θ_fj = λ_f ∂C_f/∂θ_fj    (20)

First we derive expressions for the derivatives of C_f and K. The variable α represents the set of examples over which we are minimizing K.

    ∂C_f/∂θ_fj = −1
    ∂K/∂θ_fj = ∂/∂θ_fj [ −Σ_{x∈α} Σ_k φ_x(k) Σ_{g∈F_x} log θ_gk ]
             = −Σ_{x∈X_f∩α} φ_x(j) / θ_fj

We substitute these expressions into (20) and solve for θ_fj:

    Σ_{x∈X_f∩α} φ_x(j) / θ_fj = λ_f
    θ_fj = Σ_{x∈X_f∩α} φ_x(j) / λ_f

Substituting the latter expression into the equation C_f = 0 and solving for λ_f:

    Σ_j Σ_{x∈X_f∩α} φ_x(j) / λ_f = 1
    λ_f = |X_f ∩ α|

Substituting this back into the expression for θ_fj:

    θ_fj = (1/|X_f ∩ α|) Σ_{x∈X_f∩α} φ_x(j)    (21)

If α = Λ, we have:

    θ_fj = (1/|X_f ∩ Λ|) Σ_{x∈X_f∩Λ} [[x ∈ Y_j]] = q_f(j)

This is the update computed by DL-1-R, showing that DL-1-R computes the parameter values {θ_fj} that minimize K over the labeled examples Λ. If α = X, we have:

    θ_fj = (1/|X_f|) [ Σ_{x∈X_f∩Λ} [[x ∈ Y_j]] + Σ_{x∈X_f∩V} 1/L ]
         = (|X_f∩Λ|/|X_f|) |X_f∩Y_j|/|X_f∩Λ| + (|X_f∩V|/|X_f|) (1/L)
         = p(Λ|f) q_f(j) + p(V|f) u(j)

By lemma 2, this is the update computed by DL-1-VS, hence DL-1-VS minimizes K over the complete set of examples X. ∎

Theorem 4. If the base learner decreases K over X or over Λ, where the prediction distribution is computed as

    π_x(j) = (1/m) Σ_{f∈F_x} θ_fj

then algorithm Y-1 decreases K at each iteration until it reaches a critical point, considering K as a function of φ with θ held constant.

The proof has the same structure as the proof of theorem 1, so we give only a sketch here. We minimize K as a function of φ by minimizing it for each example separately:

    K(x) = Σ_{g∈F_x} H(φ_x ‖ θ_g) = −Σ_j φ_x(j) Σ_{g∈F_x} log θ_gj

To minimize K(x), we choose φ_x so as to concentrate all mass in

    arg min_j [ −Σ_{g∈F_x} log θ_gj ] = arg max_j π_x(j)

This is the labeling rule used by Y-1. If the base learner minimizes over Λ only, rather than X, it can be shown that any increase in K on unlabeled examples is compensated for in the labeling step, as in the proof of theorem 1. ∎

Corollary 2. The specific Yarowsky algorithms Y-1/DL-1-R and Y-1/DL-1-VS decrease K at each iteration until they reach a critical point.

4 Sequential Algorithms

4.1 The Family YS

The Yarowsky algorithm variants we have considered to now do parallel updates in the sense that the parameters {θ_fj} are completely recomputed at each iteration. In this section, we consider a family YS of sequential variants of the Yarowsky algorithm, in which a single feature is selected for update at each iteration. The YS algorithms resemble the Yarowsky-Cautious algorithm of Collins & Singer [3], though they differ from the Yarowsky-Cautious algorithm in that they update a single feature in each iteration, rather than a small set of features, as in Yarowsky-Cautious. The YS algorithms are intended to be as close to the Y-1/DL-1 algorithm as is consonant with single-feature updates.

The YS algorithms differ from each other, and from Y-1/DL-1, in the choice of update rule. An interesting range of update rules work in the sequential setting. In particular, smoothed precision with fixed ε, as in the original algorithm Y-0/DL-0, works in the sequential setting, though with a proviso that will be spelled out later.

At each iteration there is a set of selected features S_t, beginning with an initial set S_0. One feature is selected in each iteration. A feature once selected remains in the selected set. It is permissible for a feature to be selected more than once; this permits us to continue reducing K even after all features have been selected. In short, there is a sequence of selected features f_0, f_1, ..., and

    S_{t+1} = S_t ∪ {f_t}
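A compact Python sketch of the schema (summarized in table 8 below), with update left as the open parameter. The selection strategy and data representation here are assumptions made for illustration: any feature with |X_f ∩ Λ| ≠ 0 and θ_f ≠ q_f may be chosen, and floating-point equality is tested with a tolerance.

    def ys(X, labels, S0, theta0, update):
        """Sequential schema YS (table 8). X: dict example -> feature set;
        S0: seed features; theta0: dict feature -> {label: prob} for seeds;
        update(f, Y): returns the new distribution theta_f (the schema's
        open parameter: raw, smoothed, or peaked precision)."""
        L = len(labels)
        uniform = {j: 1.0 / L for j in labels}
        feats = {f for fx in X.values() for f in fx}
        theta = {f: dict(theta0.get(f, uniform)) for f in feats}
        S = set(S0)

        def raw_precision(f, Y):  # q_f over currently labeled examples, or None
            matches = [x for x, fx in X.items() if f in fx and x in Y]
            if not matches:
                return None
            return {j: sum(1 for x in matches if Y[x] == j) / len(matches)
                    for j in labels}

        def relabel(Y):           # steps (1) and (2.3): label via the mean rule (9)
            for x, fx in X.items():
                if fx & S:
                    scores = {j: sum(theta[f][j] for f in fx) for j in labels}
                    Y[x] = max(scores, key=scores.get)
            return Y

        Y = relabel({})
        while True:
            f = next((g for g in feats
                      if (q := raw_precision(g, Y)) is not None
                      and any(abs(theta[g][j] - q[j]) > 1e-12 for j in labels)),
                     None)        # step (2.1): any rule with room for improvement
            if f is None:
                return Y, theta
            S.add(f)
            theta[f] = update(f, Y)   # step (2.2)
            Y = relabel(Y)            # step (2.3)

Whether this loop terminates depends on the update rule supplied, which is exactly the question the following sections address.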

    (0) Given: S_0, θ^(0) with θ_gj^(0) = 1/L for g ∉ S_0
    (1) For each example x possessing a feature in S_0, set Y(x) = ŷ
    (2) Loop:
        (2.1) Choose a feature f such that |X_f ∩ Λ| ≠ 0 and θ_f ≠ q_f;
              if there is none, stop. Add f to S
        (2.2) For each label j, set θ_fj = update(f, j)
        (2.3) For each example x possessing a feature in S, set Y(x) = ŷ

    Table 8: The sequential algorithm YS

The algorithm is also provided initially with a set of parameters {θ_fj^(0)}, with θ_fj^(0) = 1/L if f ∉ S_0. In iteration t, the only parameters θ_gj that change are those with g = f_t, where f_t is the selected feature at iteration t. That is:

    θ_gj^(t+1) = θ_gj^(t) for g ≠ f_t

It follows that, for all t:

    θ_gj^(t) = 1/L for g ∉ S_t

In each iteration, one chooses a feature f_t and computes (or recomputes) the prediction distribution θ_{f_t} for f_t. Then labels are recomputed. An example x is assigned label ŷ if any feature of x belongs to S_{t+1}. In particular, all previously labeled examples continue to be labeled (though their labels may change), and any unlabeled examples possessing feature f_t become labeled.

The algorithm is summarized in table 8. It is actually an algorithm schema; the definition for "update" needs to be supplied. We consider three different update functions: one that uses raw precision as prediction distribution, one that uses smoothed precision, and one that goes in the opposite direction, using what we might call peaked precision. As we have seen, smoothed precision can be expressed as a mixture of raw precision and the uniform (i.e., maximum-entropy) distribution (19). Peaked precision q̂_f(j) mixes in a certain amount of the point (i.e., minimum-entropy) distribution that has all its mass on the label that maximizes raw precision:

    q̂_f(j) ≡ p(Λ_t|f) q_f(j) + p(V_t|f) [[j = k̂]]

where:

    k̂ ≡ arg max_j q_f(j)

Note that peaked precision involves a variable amount of "peaking"; the mixing parameters depend on the relative proportions of labeled and unlabeled examples. Note also that k̂ is a function of f, though we do not explicitly represent that dependence.
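Peaked precision is straightforward to compute; a sketch (argument names are illustrative, and C_f is assumed to contain a count, possibly zero, for every label):

    def peaked_precision(C_f, n_lab, n_unl):
        """q-hat_f(j) = p(Lambda|f) q_f(j) + p(V|f) [[j = k-hat]], where
        k-hat is the mode of raw precision q_f. C_f: dict label -> |X_f Y_j|;
        n_lab = |X_f Lambda| > 0; n_unl = |X_f V|."""
        n = n_lab + n_unl
        q = {j: c / n_lab for j, c in C_f.items()}   # raw precision q_f
        k_hat = max(q, key=q.get)                    # label maximizing q_f
        return {j: (n_lab / n) * q[j]
                   + (n_unl / n) * (1.0 if j == k_hat else 0.0)
                for j in q}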

The three instantiations of algorithm YS that we consider are:

    YS-P  ("peaked")           θ_fj = q̂_f(j)
    YS-R  ("raw")              θ_fj = q_f(j)
    YS-FS ("fixed smoothing")  θ_fj = q̃_f(j)

We will show that the first two algorithms reduce K in each iteration. We will show that the third algorithm, YS-FS, reduces K in iterations in which f_t is a new feature, not previously selected. Unfortunately, we are unable to show that YS-FS reduces K when f_t is a previously-selected feature. This suggests using a mixed algorithm in which smoothed precision is used for new features but raw or peaked precision is used for previously-selected features.

A final issue with the algorithm schema YS concerns the selection of features in step (2.1). The schema as stated does not specify which feature is to be selected. In essence, the manner in which rules are selected does not matter, as long as one selects rules that have room for improvement, in the sense that the current prediction distribution θ_f differs from raw precision q_f. (The justification for this choice is given in corollary 4 below.) The theorems of the following sections show that K decreases in each iteration, so long as any such rule can be found. One could choose greedily by choosing the feature that maximizes gain G (22), though in the next section we give lower bounds for G that are rather more easily computed (theorems 5 and 6). There is in addition one case of particular interest that we return to in section 4.6.

4.2 Gain

From this point on, we consider a single iteration of the YS algorithm, and discard the variable t. We write θ^old and φ^old for the parameter-set and labeling at the beginning of the iteration, and we write simply θ and φ for the new parameter-set and new labeling. The set Λ (resp., V) represents the examples that are labeled (resp., unlabeled) at the beginning of the iteration. The selected feature is f.

We wish to choose a prediction distribution for f so as to guarantee that K decreases in each iteration. The gain in the current iteration is:

    G = Σ_x Σ_{g∈F_x} [ H(φ_x^old ‖ θ_g^old) − H(φ_x ‖ θ_g) ]    (22)

Gain is the negative change in K; it is positive when K decreases. In considering the reduction in K from (φ^old, θ^old) to (φ, θ), it will be convenient to consider the following intermediate values:

    K_0 = Σ_x Σ_{g∈F_x} H(φ_x^old ‖ θ_g^old)
    K_1 = Σ_x Σ_{g∈F_x} H(ψ_x ‖ θ_g^old)
    K_2 = Σ_x Σ_{g∈F_x} H(ψ_x ‖ θ_g)
    K_3 = Σ_x Σ_{g∈F_x} H(φ_x ‖ θ_g)

where

    ψ_x(j) = [[j = ĵ]]   if x ∈ X_f ∩ V
    ψ_x(j) = φ_x^old(j)  otherwise

and

    ĵ ≡ arg max_j θ_fj

One should note that:

θ_f is the new prediction distribution for the candidate f; θ_g = θ_g^old for g ≠ f.

φ is the new label distribution, after relabeling. It is defined as:

    φ_x(j) = [[j = ŷ(x)]]  if x ∈ Λ ∪ X_f
    φ_x(j) = 1/L           otherwise    (23)

For x ∈ X_f ∩ V, the only selected feature that x possesses at t+1 is f, hence ĵ = ŷ for such examples. It follows that ψ and φ agree on examples in X_f ∩ V. They also agree on examples that are unlabeled at t+1, assigning them the uniform label distribution. If ψ and φ differ, it is only on old labeled examples (Λ) that need to be relabeled, given the addition of f.

The gain G can be represented as the sum of three intermediate gains, corresponding to the intermediate values just defined:

    G = G_V + G_θ + G_Λ    (24)

where:

    G_V = K_0 − K_1
    G_θ = K_1 − K_2
    G_Λ = K_2 − K_3

The gain G_V intuitively represents the gain that is attributable to labeling previously unlabeled examples in accordance with the predictions of θ. The gain G_θ represents the gain that is attributable to changing the values θ_fj where f is the selected feature. The gain G_Λ represents the gain that is attributable to changing the labels of previously labeled examples to make labels agree with the predictions of the new model θ. The gain G_θ corresponds to step 2.2 of algorithm YS, in which θ is changed but φ is held constant; and the combined G_V and G_Λ gains correspond to step 2.3 of algorithm YS, in which φ is changed while holding θ constant.

In the remainder of this section, we derive two lower bounds for G. In following sections, we show that the updates YS-P, YS-R, and YS-FS guarantee that the lower bounds given below are non-negative, hence that G is non-negative.

Lemma 3. G_V = 0.

Proof. We show that K remains unchanged if we substitute ψ for φ^old in K_0. The only property of ψ that we need is that it agrees with φ^old on previously labeled examples. Since ψ_x = φ_x^old for x ∈ Λ, we need only consider examples in V. Since these examples are unlabeled at the beginning of the iteration, none of their features have been selected, hence θ_gj^old = 1/L for all their features g. Hence, on the examples in V:

    K_1 = −Σ_{x∈V} Σ_j ψ_x(j) Σ_{g∈F_x} log θ_gj^old
        = Σ_{x∈V} Σ_j ψ_x(j) Σ_{g∈F_x} log L
        = Σ_{x∈V} Σ_j φ_x^old(j) Σ_{g∈F_x} log L
        = −Σ_{x∈V} Σ_j φ_x^old(j) Σ_{g∈F_x} log θ_gj^old = K_0

This shows that K_1 = K_0, hence that G_V = 0. ∎

Lemma 4. G_Λ ≥ 0.

We must show that relabeling old labeled examples, that is, setting φ_x(j) = [[j = ŷ]] for x ∈ Λ, does not increase K. The proof has the same structure as the proof of theorem 1, and is omitted.

Lemma 5. G_θ is equal to

    |X_f ∩ Λ| [ H(q_f ‖ θ_f^old) − H(q_f ‖ θ_f) ] + |X_f ∩ V| [ log L + log θ_fĵ ]    (25)

Proof. By definition, G_θ = K_1 − K_2, and K_1 and K_2 are identical everywhere except on examples in X_f. Hence:

    G_θ = Σ_{x∈X_f} Σ_{g∈F_x} [ H(ψ_x ‖ θ_g^old) − H(ψ_x ‖ θ_g) ]

We divide this sum into three partial sums:

    G_θ = A + B + C    (26)

    A = Σ_{x∈X_f∩Λ} [ H(ψ_x ‖ θ_f^old) − H(ψ_x ‖ θ_f) ]
    B = Σ_{x∈X_f∩V} [ H(ψ_x ‖ θ_f^old) − H(ψ_x ‖ θ_f) ]
    C = Σ_{x∈X_f} Σ_{g∈F_x, g≠f} [ H(ψ_x ‖ θ_g^old) − H(ψ_x ‖ θ_g) ]

We consider each partial sum separately.

    A = Σ_{x∈X_f∩Λ} Σ_j ψ_x(j) [ log θ_fj − log θ_fj^old ]
      = Σ_{x∈X_f∩Λ} Σ_j [[x ∈ Y_j]] [ log θ_fj − log θ_fj^old ]
      = Σ_j |X_f ∩ Y_j| [ log θ_fj − log θ_fj^old ]
      = |X_f ∩ Λ| Σ_j q_f(j) [ log θ_fj − log θ_fj^old ]
      = |X_f ∩ Λ| [ H(q_f ‖ θ_f^old) − H(q_f ‖ θ_f) ]    (27)

    B = Σ_{x∈X_f∩V} Σ_j ψ_x(j) [ log θ_fj − log θ_fj^old ]
      = Σ_{x∈X_f∩V} Σ_j [[j = ĵ]] [ log θ_fj − log θ_fj^old ]
      = |X_f ∩ V| [ log θ_fĵ − log θ_fĵ^old ]
      = |X_f ∩ V| [ log L + log θ_fĵ ]    (28)

The justification for the last step is a bit subtle. If f is a new feature, not previously selected, then θ_fj^old = 1/L for all j, and the substitution is valid. On the other hand, if f is a previously selected feature, then |X_f ∩ V| = 0, and even though the substitution of 1/L for θ_fĵ^old may not be valid, it is innocuous.

    C = Σ_{x∈X_f} Σ_{g∈F_x, g≠f} [ H(ψ_x ‖ θ_g^old) − H(ψ_x ‖ θ_g^old) ] = 0    (29)

Combining (26), (27), (28) and (29) yields the lemma. ∎

Theorem 5. G is bounded below by (25).

Proof. Combining (24) with lemmas 3, 4, and 5. ∎

Theorem 6. G is bounded below by

    |X_f ∩ Λ| [ H(q_f ‖ θ_f^old) − H(q_f ‖ θ_f) ]

Proof. The theorem follows immediately from theorem 5 if we can show that

    log L + log θ_fĵ ≥ 0

Observe first that log L = H(u). By lemma 1, we know that log θ_fĵ = log max_j θ_fj ≥ −H(θ_f), hence

    H(u) + log θ_fĵ ≥ H(u) − H(θ_f) ≥ 0

the latter following because the uniform distribution maximizes entropy. ∎

Corollary 3. G is bounded below by

    |X_f ∩ Λ| [ D(q_f ‖ θ_f^old) − D(q_f ‖ θ_f) ]

Proof. Immediate from theorem 6 and the fact that:

    H(q_f ‖ θ_f^old) − H(q_f ‖ θ_f) = H(q_f) + D(q_f ‖ θ_f^old) − H(q_f) − D(q_f ‖ θ_f)
                                    = D(q_f ‖ θ_f^old) − D(q_f ‖ θ_f)
∎

Corollary 4. If θ_f^old ≠ q_f, then there is a choice of θ_f that yields strictly positive gain.

Proof. If θ_f^old ≠ q_f, then

    D(q_f ‖ θ_f^old) > 0

Setting θ_f = q_f has the result that

    |X_f ∩ Λ| [ D(q_f ‖ θ_f^old) − D(q_f ‖ θ_f) ] = |X_f ∩ Λ| D(q_f ‖ θ_f^old) > 0

Hence G > 0 by corollary 3. ∎

4.3 Algorithm YS-P

We now use the results of the previous section to show that the algorithm YS-P is correct in the sense that it reduces K in every iteration.

Theorem 7. In each iteration of algorithm YS-P, K decreases.

Proof. We wish to show that G > 0. By theorem 5, that is true if the expression (25) is positive. By corollary 4, there exist choices for θ_f that make (25) positive, hence in particular we guarantee G > 0 by maximizing (25). We maximize (25) by minimizing:

    |X_f ∩ Λ| H(q_f ‖ θ_f) − |X_f ∩ V| log θ_fĵ    (30)

Since H(q_f ‖ θ_f) = H(q_f) + D(q_f ‖ θ_f), we minimize (30) by minimizing:

    |X_f ∩ Λ| D(q_f ‖ θ_f) − |X_f ∩ V| log θ_fĵ    (31)

Both terms are nonnegative. The first term is 0 if θ_f = q_f. The second term is 0 for any distribution that concentrates all its mass in a single label ĵ; it is symmetric in all choices of ĵ, and decreases monotonically as θ_fĵ approaches 1. Hence, the minimum of (31) will have ĵ equal to the mode of q_f, though θ_f may be more peaked than q_f, at the cost of an increase in the first term, but offset by a decrease in the second term.

Recall that k̂ ≡ arg max_j q_f(j). By the reasoning of the previous paragraph, we know that ĵ = k̂ at the minimum of (31). Hence we can minimize (31) by minimizing:

    |X_f ∩ Λ| D(q_f ‖ θ_f) − |X_f ∩ V| Σ_j [[j = k̂]] log θ_fj    (32)

We compute the gradient:

    ∂/∂θ_fj [ |X_f ∩ Λ| D(q_f ‖ θ_f) − |X_f ∩ V| Σ_k [[k = k̂]] log θ_fk ]
      = ∂/∂θ_fj [ |X_f ∩ Λ| H(q_f ‖ θ_f) − |X_f ∩ Λ| H(q_f) − |X_f ∩ V| Σ_k [[k = k̂]] log θ_fk ]
      = −|X_f ∩ Λ| q_f(j)/θ_fj − |X_f ∩ V| [[j = k̂]]/θ_fj    (33)

As before, the derivative of the constraint C_f is −1, and we minimize (32) under the constraint by solving:

    ( |X_f ∩ Λ| q_f(j) + |X_f ∩ V| [[j = k̂]] ) / θ_fj = λ

    θ_fj = ( |X_f ∩ Λ| q_f(j) + |X_f ∩ V| [[j = k̂]] ) / λ    (34)

Substituting into the constraint Σ_j θ_fj = 1:

    Σ_j ( |X_f ∩ Λ| q_f(j) + |X_f ∩ V| [[j = k̂]] ) / λ = 1
    λ = |X_f ∩ Λ| + |X_f ∩ V| = |X_f|

Substituting this back into (34):

    θ_fj = p(Λ|f) q_f(j) + p(V|f) [[j = k̂]]    (35)

That is, the maximizing solution is peaked precision, which is the update rule for YS-P. ∎

4.4 Algorithm YS-R

We now show that YS-R also decreases K in each iteration. In fact, it has essentially already been proven.

Theorem 8. Algorithm YS-R decreases K in each iteration.

Proof. In the proof of corollary 4, we showed that the choice θ_f = q_f yields strictly positive gain. This is the update rule used by YS-R. ∎

4.5 Algorithm YS-FS

The original Yarowsky algorithm Y-0/DL-0 used smoothed precision with fixed ε as update rule. We have been unsuccessful at justifying this choice of update rule in general. However, we are able at least to show that it does decrease K when the selected feature is a new feature, not previously selected.

Theorem 9. Algorithm YS-FS has positive gain in each iteration in which the selected feature has not been previously selected.

Proof. By theorem 6, gain is positive if

    H(q_f ‖ θ_f^old) > H(q_f ‖ θ_f)    (36)

By the assumption that the selected feature f has not been previously selected, θ_f^old is the uniform distribution u, and the left-hand side of (36) is equal to H(q_f ‖ u). It is easy to verify that H(p ‖ u) = H(u) for any distribution p, hence