
Hierarchies of Probabilistic and Team FIN-Learning 1

Andris Ambainis 2, Kalvis Apsitis 3, Rusins Freivalds 4, Carl H. Smith 5

Abstract. A FIN-learning machine M receives successive values of the function f it is learning and at some moment outputs a conjecture which should be a correct index of f. FIN learning has two extensions: (1) If M flips fair coins and learns a function with certain probability p, we have FIN⟨p⟩-learning. (2) When n machines simultaneously try to learn the same function f and at least k of these machines output correct indices of f, we have learning by a [k,n]FIN team. Sometimes a team or a probabilistic learner can simulate another one if their probabilities p_1, p_2 (or team success ratios k_1/n_1, k_2/n_2) are close enough [DKV92a, DK96]. On the other hand, there are cut-points r which make simulation of FIN⟨p_2⟩ by FIN⟨p_1⟩ impossible whenever p_2 ≤ r < p_1. Cut-points above 10/21 are known [DK96]. We show that the problem, for given k_i, n_i, to determine whether [k_1,n_1]FIN ⊆ [k_2,n_2]FIN is algorithmically solvable. The set of all FIN cut-points is shown to be well-ordered and recursive. Asymmetric teams are introduced and used as a tool to obtain these results; they are also of interest in themselves. The framework of asymmetric teams allows us to characterize intersections [k_1,n_1]FIN ∩ [k_2,n_2]FIN, unions [k_1,n_1]FIN ∪ [k_2,n_2]FIN, and memberwise unions [k_1,n_1]FIN + [k_2,n_2]FIN, i.e. collections of all unions U_1 ∪ U_2 where U_i ∈ [k_i,n_i]FIN.

1 Based on two conference papers: [AFS97] and [AAF+97].
2 University of California, Berkeley. ambainis@cs.berkeley.edu. Part of this work was done at the University of Latvia, supported by a Latvia Science Council grant.
3 Institute of Mathematics and Computer Science, University of Latvia, Raina bulvaris 29, Riga, LV-1459, Latvia. Kalvis.Apsitis@dati.lv. Part of this work was done at the University of Maryland.
4 Institute of Mathematics and Computer Science, University of Latvia, Raina bulvaris 29, Riga, LV-1459, Latvia. rusins@cclu.lv. Supported by a Latvia Science Council grant.
5 Department of Computer Science, University of Maryland, College Park, MD 20742, USA. smith@cs.umd.edu. Supported in part by National Science Foundation Grant CCR.

Hence, we can compare the learning power of traditional FIN-teams [k,n]FIN as well as all kinds of their set-theoretic combinations.

1 Introduction

To a large extent the study of inductive inference is concerned with defining new learning paradigms and comparing their power. Often the only relations which hold between the different paradigms of machine learning are those which trivially follow from their definitions. E.g. in the mindchange-anomaly hierarchy of limit learning, E^a_b ⊆ E^c_d iff a ≤ c and b ≤ d, as shown in [CS83]. It was hard to develop a mathematical theory in a situation where almost every time two different definitions led to two different concepts. The initial stage in the research of inductive inference was mostly descriptive: there were more and more new learning paradigms and demonstrations of their differences.

Probabilistic and team learning changed this situation. In [PS88] it was shown that not all probabilistic limit learning types E⟨p⟩ are different. E.g. if p ∈ (1/(n+1), 1/n], then E⟨p⟩ can be improved to E⟨1/n⟩. For the case of E, a probabilistic learner is always equivalent to some team [1,n]E. A corollary of this is the equality [ma,mb]E = [a,b]E for any positive a, b, m.

Initial results about the type FIN (also known as E^0_0, i.e. learning with 0 anomalies and 0 mindchanges) were just as simple. All probabilistic types FIN⟨p⟩, p > 1/2, are equivalent to team types [DPVW91]. The first surprise was the proper inclusion [1,2]FIN ⊂ [2,4]FIN in [Vel89]. This differs sharply from the behavior of type E. Soon after that the following result was proved: probabilistic FIN⟨p⟩-learning with p ∈ (24/49, 1/2] can be simulated by the team [2,4]FIN; on the other hand, the team [24,49]FIN (and hence also the probabilistic learner FIN⟨24/49⟩) can learn more than [2,4]FIN. The authors of [DKV92a] used "trial and error" to come up with the ratio 24/49, therefore one can ask an interesting (but informal) question: Where does the 24/49 come from?
We provide an answer of sorts to this question. This is accomplished by generalizing the problem and casting it in terms of a game. By putting the corresponding parameters into the game, the constant 24/49 emerges. Our paper does not focus on finding more constants below 10/21, but

rather reflects on the global structure of all cut-points in the interval (0,1). We describe these cut-points as solutions to combinatorial optimization problems on tree-like objects called widgets. Each widget corresponds to a set of strategies which one team can use to diagonalize against another one. Section 3 focuses on widgets and their use in diagonalization and simulation. Section 4 shows that asymmetric teams are of independent interest, since "natural" questions about symmetric teams may yield asymmetric teams as intermediate results.

2 Preliminaries

2.1 General Notation

N denotes the set of natural numbers, and F the set of recursive functions. φ_h denotes the partial recursive function with index h, see [Soa87]. Subsets of F are denoted by U, V, W with or without decorations. The symbols ∪, ∩, −, ⊆, ⊂, ∈ are read "union," "intersection," "set minus," "is a subset of," "is a proper subset of," "is an element of," respectively. |A| denotes the number of elements in the set A. Logical conjunction, disjunction and implication are denoted by ∧, ∨ and →. The quantifier ∀^∞ is read "for all but finitely many," ∃^∞ is read "there are infinitely many," and ∃! is read "there is exactly one."

Definition 1 Let f : N → N be a total function. The set {x : f(x) ≠ 0} is called the support of the function f. If (∀^∞ x)[f(x) = 0], we call f a function of finite support.

Definition 2 A threshold function t^k_n : {0,1}^n → {0,1} has value 1 iff at least k of its n arguments are 1.

Definition 3 ([Ros82]) A set S with a binary relation ⪯ is a quasi-ordering if (1) (∀x ∈ S)[x ⪯ x], i.e. ⪯ is reflexive, and (2) (∀x_1, x_2, x_3 ∈ S)[(x_1 ⪯ x_2 and x_2 ⪯ x_3) implies x_1 ⪯ x_3], i.e. ⪯ is transitive.

We notice that not every quasi-ordering is a partial ordering, since we do not require antisymmetry, i.e. it is fine to have x_1 ≠ x_2 such that x_1 ⪯ x_2 and x_2 ⪯ x_1. Such x_1, x_2 we will nevertheless regard as equivalent.
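Definition 2 is easy to render as code. A minimal sketch (the function name `threshold` is ours, not the paper's):

```python
# A minimal sketch of the threshold function t^k_n from Definition 2:
# t^k_n(args) is 1 iff at least k of the n Boolean arguments are 1.

def threshold(k, args):
    """Value of t^k_n on a tuple of 0/1 arguments."""
    return 1 if sum(args) >= k else 0

# t^2_4 is the success predicate of a [2,4]-team:
print(threshold(2, (1, 0, 1, 0)))  # two machines succeed -> 1
print(threshold(2, (0, 0, 1, 0)))  # only one succeeds -> 0
```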

Definition 4 Let (S, ⪯) be a quasi-ordering, and x_1, x_2 ∈ S. An element y ∈ S is called a join of x_1, x_2 (write y = x_1 ∨ x_2) if

(x_1 ⪯ y and x_2 ⪯ y) and (∀y' ∈ S)[(x_1 ⪯ y' and x_2 ⪯ y') implies y ⪯ y'].

An element z ∈ S is called a meet of x_1, x_2 (write z = x_1 ∧ x_2) if

(z ⪯ x_1 and z ⪯ x_2) and (∀z' ∈ S)[(z' ⪯ x_1 and z' ⪯ x_2) implies z' ⪯ z].

Definition 5 A quasi-ordering (S, ⪯) is a lattice if any two elements x_1, x_2 ∈ S have a join and a meet.

We fix some pairing function ⟨·,·⟩, i.e. a recursive 1-1 mapping of N × N onto N. Applying the pairing function several times we can encode variable-length lists of natural numbers n_1, …, n_k by a single number ⟨n_1, …, n_k⟩. We will call the encoded lists of natural numbers strings and denote them by σ. The length of a string σ, denoted by |σ|, shows how many numbers are listed in the string.

Definition 6 A single valued set is a set A ⊆ N such that ⟨a_1, b_1⟩ ∈ A, ⟨a_2, b_2⟩ ∈ A, a_1 = a_2 imply b_1 = b_2. Single valued sets are used to represent partial functions as collections of argument-value pairs.

Definition 7 An initial segment of a function f is f^[n] = ⟨f(0), …, f(n)⟩, the encoding of the first n+1 values of a recursive function.

Definition 8 Let σ' = ⟨a_1, …, a_m⟩ and σ'' = ⟨b_1, …, b_n⟩ be two strings. Then ⟨a_1, …, a_m, b_1, …, b_n⟩ is called the concatenation of σ' and σ'' and is denoted by σ'σ''.

Definition 9 Let σ_1, σ_2 be strings. If σ_2 = σ_1 σ_3 for some σ_3, we call σ_1 a prefix of σ_2 and write σ_1 ⊑ σ_2. If the strings are different, we also write σ_1 ⊏ σ_2. If σ is an initial segment of some (partial or total) function φ, we write σ ⊏ φ.
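The paper fixes only *some* pairing function; one standard concrete choice is the Cantor pairing function, sketched below. The helper names (`pair`, `encode`) and the length-tagging convention for strings are ours:

```python
# One concrete choice of pairing function (the paper fixes only *some*
# recursive 1-1 mapping of N x N onto N): the Cantor pairing function.

def pair(a, b):
    """Cantor pairing: a recursive bijection N x N -> N."""
    return (a + b) * (a + b + 1) // 2 + b

def encode(nums):
    """Encode a list <n_1, ..., n_k> as a single number by nested pairing;
    tagging the length keeps strings of different lengths distinguishable."""
    code = nums[-1]
    for n in reversed(nums[:-1]):
        code = pair(n, code)
    return pair(len(nums), code)

# pair is injective on an initial square of N x N:
codes = {pair(a, b) for a in range(50) for b in range(50)}
assert len(codes) == 50 * 50
```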

2.2 Inductive Inference

Inductive inference machines (also called IIMs or machines), denoted by M, are partially defined algorithms which receive growing pieces of a graph of a function and output conjectures. The n-th conjecture of a machine M, if it is defined, is h_n = M(f^[n]), i.e. it is output after receiving the n-th initial segment of f.

Definition 10 ([Gol67]) Machine M learns function f in the limit if the conjecture sequence h_a = M(f^[a]) converges to an h such that f = φ_h. Written: f ∈ E(M).

Definition 11 ([Wie76]) Machine M learns function f finitely if M(f^[a]) has the same value h wherever it is defined and f = φ_h. Written: f ∈ FIN(M).

A machine M finitely learning f may be undefined on some initial segments of f; once the conjecture is output it cannot be changed any more. This sort of learning is also called one shot learning. Notice that the success of a FIN-learner is algorithmically intractable, since we cannot check the equivalence of two indices even if we know what the target of learning is. Therefore FIN is like supervised learning where success is determined by an external omniscient teacher. A partial success of a FIN learner can be verified more efficiently. We call it learning of segments.

Definition 12 A FIN machine M learns segment σ if M, after reading σ, outputs a conjecture h such that σ ⊏ φ_h.

Ordered lists of IIMs are called teams and are denoted by M.

Definition 13 ([Smi82, PS88]) A class U of recursive functions is [k,n]FIN-learnable if there is a team of n machines M = (M_1, …, M_n) such that for any f ∈ U, f ∈ FIN(M_j) for at least k different M_j ∈ M. Written: U ⊆ [k,n]FIN(M).

Note that for different functions from U, different collections of machines from the team may succeed. The definition of [k,n]FIN makes sense iff 0 < k ≤ n.

Example 14 Let f_n be the characteristic function of the singleton set {n}. Namely,

f_n(x) = 1 if x = n, and f_n(x) = 0 otherwise,

and let U be the collection of all functions f_n, U = {f_n : n ∈ N}. The class U ∈ FIN, but its union with the everywhere-zero function, U_0 = U ∪ {λx[0]}, is not in FIN. On the other hand, U_0 ∈ [1,2]FIN. Indeed, the first machine outputs an index for λx[0] without reading any input; the other one waits for the value 1 to appear, then outputs an index for the appropriate f_n.

Definition 15 Let M be a probabilistic machine whose behavior depends on the input as well as on coin tosses. Machine M ⟨p⟩FIN-learns function f if the probability that M will output just one conjecture h such that f = φ_h is at least p. Written: f ∈ ⟨p⟩FIN(M).

The following result states that probabilistic FIN-learning can be simulated by team learning with arbitrarily small success ratio overhead.

Theorem 16 ([DPVW91]) Let k, n ∈ N and (k+1)/(n+2) < p ≤ 1. Then FIN⟨p⟩ ⊆ [k,n]FIN.

Definition 17 A real number p ∈ (0,1) is a cut-point if

(∀ε > 0)[FIN⟨p + ε⟩ ⊂ FIN⟨p⟩].

By convention, the numbers 0 and 1 are cut-points as well. The set of all FIN cut-points is called the FIN-hierarchy (write H_FIN).
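The two-machine team of Example 14 can be sketched in code. Here conjectures are modeled as Python functions instead of program indices, which is a simplification of ours:

```python
# A toy rendering of the [1,2]FIN team from Example 14. Conjectures are
# modeled as Python functions rather than program indices (a simplification).

def machine1(prefix):
    # Immediately conjectures the everywhere-zero function lambda x.0.
    return lambda x: 0

def machine2(prefix):
    # Waits until the value 1 appears at some position n, then
    # conjectures the characteristic function f_n of the singleton {n}.
    if 1 in prefix:
        n = prefix.index(1)
        return lambda x: 1 if x == n else 0
    return None  # no conjecture yet (FIN learners may stay silent)

# f_3 = characteristic function of {3}: machine2 learns it from a prefix.
h = machine2([0, 0, 0, 1, 0, 0])
assert all(h(x) == (1 if x == 3 else 0) for x in range(10))
```

On the everywhere-zero function machine2 never conjectures, but machine1 is correct, so every function in U_0 is learned by at least one of the two machines.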

According to [DPVW91, DK96] we have

H_FIN = {1, 2/3, 3/5, 4/7, 5/9, 6/11, …, 1/2, …}.

Below 10/21 our knowledge about cut-points is limited; in general, we cannot tell whether or not a number is a cut-point.

2.3 Matrix Games

Definition 18 An m × n matrix A = (a_ij) determines the following zero-sum matrix game. Player 2 chooses the i-th row, 1 ≤ i ≤ m; Player 1 simultaneously chooses the j-th column, 1 ≤ j ≤ n. The number a_ij indicates the payoff of Player 1. The payoff of Player 2 is −a_ij. Each player seeks to maximize his/her payoff.

In most textbooks on game theory, Player 1 (who tries to maximize a_ij) picks a row i, but Player 2 (who tries to minimize a_ij) picks a column j [PZ96]. In our paper all matrix games are defined the other way, as in Definition 18.

Definition 19 For an m × n matrix game a probability distribution p = (p_1, …, p_m) is called a mixed strategy of Player 2 (we require p_i ≥ 0 and Σ_{i=1}^m p_i = 1). Similarly, a probability distribution q = (q_1, …, q_n) is a mixed strategy of Player 1 (again we require q_j ≥ 0 and Σ_{j=1}^n q_j = 1).

Let an m × n matrix A = (a_ij) be given. If Player 2 chooses the i-th row with probability p_i and Player 1 independently chooses the j-th column with probability q_j, then the expected payoff of Player 1 is given by p^T A q = Σ_{i=1}^m Σ_{j=1}^n a_ij p_i q_j. A saddle point (p*, q*) is a pair of optimal mixed strategies for both players. Then (p*)^T A (q*) is the best expected payoff of Player 1 if Player 2 behaves rationally. This is called the matrix game value of A (write VN(A)).

Theorem 20 (von Neumann, [PZ96]) The matrix game for any m × n matrix A = (a_ij) has a saddle point in mixed strategies. It can be obtained by solving mutually dual problems of linear programming:

maximize: Σ_{j=1}^n x_j, subject to: Σ_{j=1}^n a_ij x_j ≤ 1 for 1 ≤ i ≤ m, and x_j ≥ 0 for 1 ≤ j ≤ n;

minimize: Σ_{i=1}^m y_i, subject to: Σ_{i=1}^m a_ij y_i ≥ 1 for 1 ≤ j ≤ n, and y_i ≥ 0 for 1 ≤ i ≤ m.
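The primal program above can be solved mechanically. A sketch assuming SciPy is available (the function name `game_value` is ours); we recover VN(A) as the reciprocal of the optimal objective:

```python
# Computing the matrix game value VN(A) via the linear program of
# Theorem 20: maximize sum(x) subject to A x <= 1, x >= 0; then
# VN(A) = 1 / sum(x). A sketch assuming SciPy is available.
import numpy as np
from scipy.optimize import linprog

def game_value(A):
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    # linprog minimizes, so we minimize -sum(x).
    res = linprog(c=-np.ones(n), A_ub=A, b_ub=np.ones(m),
                  bounds=[(0, None)] * n)
    return 1.0 / -res.fun

# The team matrix for [2,4] (rows = minimal success cases, columns = machines):
A24 = [[1,1,0,0], [1,0,1,0], [1,0,0,1], [0,1,1,0], [0,1,0,1], [0,0,1,1]]
print(game_value(A24))  # -> 0.5
```

For A24 the uniform column strategy guarantees expected payoff 1/2 against every row, and the uniform row strategy caps it at 1/2, so VN(A24) = 1/2.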

If x* and y* are some solutions of these problems, we have Θ = Σ_{j=1}^n x*_j = Σ_{i=1}^m y*_i (due to the strong duality theorem in linear programming). The mixed strategies q* = (1/Θ)x*, p* = (1/Θ)y* make a saddle point, and (p*)^T A (q*) = 1/Θ is the matrix game value.

Definition 21 For the given matrix A, by the matrix game value VN(A) we denote the greatest expected payoff for the 1st player in a zero-sum matrix game. Formally:

VN(A) = max_q min_p p^T A q = min_p max_q p^T A q,

where the vectors p and q are probability distributions, i.e. all their components are nonnegative and add up to 1.

2.4 Asymmetric Teams

In this subsection we generalize the notion of team learning by describing the success of a team by an arbitrary nondecreasing Boolean function.

Example 22 Consider the team learning type [2,4]. We can represent it by a 6 × 4 matrix (see below). Each column corresponds to some machine in the team; each row represents a different case of the team success. E.g. the second row signifies that machines M_1 and M_3 succeed. The [2,4] team can be successful by satisfying any of the 6 clauses in the following DNF, where x_i represents the formula "M_i succeeds":

(x_1 ∧ x_2) ∨ (x_1 ∧ x_3) ∨ (x_1 ∧ x_4) ∨ (x_2 ∧ x_3) ∨ (x_2 ∧ x_4) ∨ (x_3 ∧ x_4).

        L_1 L_2             M_1 M_2 M_3 M_4          N_1 N_2 N_3 N_4
[1,2] =  1   0     [2,4] =   1   1   0   0      A =   1   1   1   0
         0   1               1   0   1   0            1   0   0   1
                             1   0   0   1            …
                             0   1   1   0
                             0   1   0   1
                             0   0   1   1

In the team [2,4] all 4 machines are equal, i.e. it does not matter which 2 of them succeed. In our paper we interpret any matrix of 0's and 1's as a

learning type. Consider the matrix A above. Row 1 represents the success of the team when machines N_1, N_2, N_3 succeed; row 2 represents the success of the team when machines N_1, N_4 succeed; etc. Notice that in each of these cases we do not care about the success of the remaining machines in the team. We call this generalization asymmetric teams, since the order of machines in a team matters. For the team (N_1, N_2, N_3, N_4) given above, the machine N_4 has a different role than any of N_1, N_2 or N_3.

Definition 23 A {0,1}-valued matrix is called a team matrix. Let A = (a_ij) be a team matrix. A team M = (M_1, …, M_n) [A]FIN-learns a function f if there is a row i in the matrix A such that f ∈ FIN(M_j) whenever a_ij = 1. If f ∈ FIN(M_j) whenever a_ij = 1, we say that M [A]FIN-learns f according to the i-th row. When the team M is clear from the context, we will simply say that the i-th row of A learns f.

The success of a traditional team [m,n]FIN for any given function depends only on the successes of the participant machines via some threshold function (see Definition 2). Asymmetric teams are a generalization of these, since they allow any nondecreasing Boolean formula to express a team's success.

Example 24 The traditional, symmetric type [k,n] is a particular case of an asymmetric team type. It is described by the n-argument threshold function t^k_n. The matrix [k,n] has n columns and C(n,k) = n!/(k!(n−k)!) rows. See the matrix [2,4] in Example 22.

Each team of learning machines M = (M_1, …, M_n) can be easily transformed into a single probabilistic machine. Indeed, at the very beginning the probabilistic machine M picks M_i with probability q_i and simulates it. We want to choose q_i so that M FIN-learns the class U with the maximal probability VN(A), given that the team M [A]FIN-learns U.
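The success criterion of Definition 23, and the construction of symmetric team matrices from Example 24, can be sketched as follows (function names are ours; the asymmetric matrix uses only the two rows of A described above):

```python
# A sketch of Definition 23 and Example 24. wins[j] is 1 iff machine j
# FIN-learns f; the team [A]FIN-learns f iff some row of A is "covered".
from itertools import combinations

def team_learns(A, wins):
    return any(all(wins[j] for j, a in enumerate(row) if a == 1)
               for row in A)

def team_matrix(k, n):
    """Rows of the symmetric [k,n] matrix: the C(n,k) k-subsets of machines."""
    return [[1 if j in subset else 0 for j in range(n)]
            for subset in combinations(range(n), k)]

M24 = team_matrix(2, 4)
print(len(M24))                        # 6 rows, one per clause of the DNF
print(team_learns(M24, [0, 1, 0, 1]))  # machines M_2 and M_4 succeed -> True

# First two rows of the asymmetric matrix A from Example 22:
A = [[1, 1, 1, 0], [1, 0, 0, 1]]
print(team_learns(A, [1, 0, 0, 1]))    # N_1 and N_4 succeed -> True
print(team_learns(A, [0, 1, 1, 0]))    # no row covered -> False
```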
The probability of success which can be achieved by a probabilistic machine M on U is the value of the zero-sum matrix game where we pick a column and our adversary independently picks a row of the 0-1 matrix A (see Definition 18).

Lemma 25 Let A be a team matrix and VN(A) its matrix game value. Then [A]FIN ⊆ FIN⟨VN(A)⟩.

Proof: Let q* be the optimal distribution for the first player on A. A team of learning machines M = (M_1, …, M_n) can be uniformly transformed into a single probabilistic machine by picking M_j with probability q*_j. Then, regardless of the function f ∈ U, a success probability of at least VN(A) is guaranteed, as long as U ∈ [A]FIN.

3 Widgets

3.1 Intuition: Games on Trees

All previous results on FIN-team comparisons [DKV92b, DK96] fall into the following two categories:

simulations. [A]FIN ⊆ [B]FIN, i.e. any A-team can be simulated by an equally strong B-team.

diagonalizations. [A]FIN ⊈ [B]FIN, i.e. there is a class learnable by an A-team which is not learnable by any B-team. A uniform procedure which builds a counterexample for any particular B-team is described. The class containing all these examples, if it is [A]FIN-learnable, then shows the noninclusion [A]FIN ⊈ [B]FIN.

In this paper we view the problem of comparing the collections of classes of functions [A]FIN and [B]FIN as a game. In this game, an A-team M chooses a function f and learns it in a way that eludes any strategy chosen by a B-team. Its adversary, a B-team N, inputs f, observes M's behavior on f^[n] and other segments, and tries to learn it along with M. If there is an A-team M that wins against any B-team N, we get a diagonalization result. Otherwise, we have a simulation result.

The game can be broken into certain irreversible moves. For example, if a team member outputs a conjecture h on a segment σ, then h can still simulate the adversary's team on different extensions of σ, but it cannot read values from the input function beyond σ any more. Furthermore, when h becomes defined on some argument x, h(x) can never later be redefined. We will consider as moves of the game more complex events than just outputting a single conjecture and extending some domain a little. Instead we will track and analyze those events that indicate that the whole team is up to learning

some function. Our analysis proceeds by building a tree where the nodes correspond to noteworthy segments.

Lemma 26 Let f be a function which is [A]FIN-learned by some team M. There exists an algorithm which builds an (indefinitely growing) rooted tree labeled by A's rows such that:

1. The root segment σ_0 ⊏ f.
2. If some node σ in the tree is labeled by row i of A, then all of M's machines on that row learn σ.
3. Any σ' ⊏ f learned by all machines in some row of A is eventually inserted in the tree.

Proof: We describe the tree in effective stages of finite extension.

Stage 0. Simulate M on f^[n], n = 0, 1, …. As soon as all machines corresponding to 1s in a row i_0 of matrix A learn one of these segments, make it the root σ_0, label the root node with i_0 and go to Stage 1.

Stage n. Currently the segment tree contains σ_0, σ_1, …, σ_{n−1}. Simulate M on all extensions of all these segments until we find an extension σ_n ⊐ σ_k (k < n) such that all machines corresponding to 1s in a row i_n of A learn σ_n. Assume that σ_k is the longest prefix of σ_n currently in the tree. Make σ_n a child of σ_k in the tree, and label it with i_n. Go to Stage n+1.

Property 1 is satisfied in Stage 0; subsequent stages ensure Properties 2 and 3.

Forks. If a conjecture is issued on a segment σ_k, it cannot be correct on two incompatible extensions of σ_k. We call 3 nodes in the segment tree that correspond to a segment and two of its incompatible extensions a fork. If a machine appears in the labels of 3 nodes that form a fork, its conjecture is wrong for at least one of these three segments. Therefore, we introduce the restriction that any 3 nodes in a fork should be labeled so that no column has 1 in all three rows.

2-player game. This gives us the following game with two players A and B. A builds a tree by inserting nodes. In the first move, A creates the root of the tree and labels it by a row of A. In each subsequent move, A creates a new child for one of the existing nodes and labels it by a row of A.
For every fork in the tree, no column can have 1 in all 3 rows labeling the nodes of the fork. B

has to respond to each move by labeling the new node by a row of B, subject to a similar restriction about forks. A wins if, at some moment, B cannot label the new node without violating the restriction about forks. B wins if he keeps the game going forever.

In this game, we abstract from the concrete machines in the team. Neither A nor B works with the actual learning machines and their conjectures; they work with columns of A and B. The only information that A has about B (and conversely) is which rows B has used to label which nodes (i.e. which machines have output programs consistent with the corresponding segments). This allows us to isolate the combinatorial part of the problem from the recursion-theoretic part, to solve the combinatorial part first and to translate this solution back to the original problem.

Next, we show that A wins over B in this game if and only if [A]FIN ⊈ [B]FIN. We first introduce some more terminology. We define widgets to be collections of trees representing winning strategies. Then, in Sections 3.2 and 3.3, we give general diagonalization and simulation arguments in terms of our game and widgets.

Infinite trees. Consider the class U learned by the [1,2]-team (M_1, M_2) as in Example 14. Each function with a nonzero value is an extension of some all-zero segment 0^n. Therefore the segments learned by the team [1,2] constitute an infinite tree. Consider the case where the root node is learned by M_1, each child is learned by M_2, and neither M_1 nor M_2 is learning any fork. Such unbounded trees could make the analysis of the game hard. In matrices sparser than [1,2] there can be many nodes with infinitely many children; the depth of the tree can be unbounded as well. We solve this problem in Section 3.3 by showing that it is enough to consider finite trees.

Adaptive and nonadaptive strategies. It may happen that A can win over B even if B knows all moves of A in advance.
For example, let A be the matrix for the symmetric [2,3]-team type and B be the matrix for the [1,1]-team type:

     1 1 0
A =  1 0 1      B = ( 1 )
     0 1 1

Then A can label the root by the 1st row and then insert two children of the root and label them by the 2nd and the 3rd row, respectively. B can only label all 3 nodes by the only row of B, and this results in an illegal

labeling of a fork. A wins. In this case, the tree that A is going to build is predetermined; the game follows a fixed pattern. In other situations, when playing against B, A can choose which branches of the tree to expand depending on the responses of B. Let us say that A has built and labeled the root and two of its children in some tree. After observing B's responses, A may want to extend either the left or the right side of the tree. In general, all strategies of A are represented as a collection of partially overlapping trees. First, A introduces the nodes belonging to all components. The next nodes depend on the responses of B. We proceed to formal definitions.

Definition 27 A widget T is a collection of overlapping trees τ_1, …, τ_n. Each of them covers some of the nodes V(T) = {v_0, …, v_k}. The τ_i's are called components of T. We require the following:

1. All τ_i have the same root v_0.
2. If some non-root node v ∈ V(T) belongs to several components, then it has the same parent node with respect to all these components.
3. For each node v ∈ V(T) consider the set of components containing it: T_v = {τ_i : v ∈ V(τ_i)}. For any two different nodes v, w, either T_v and T_w are disjoint, or one is a subset of the other. In other words, at least one of the following three sets T_v ∩ T_w, T_v − T_w and T_w − T_v is empty.

Definition 28 Let T be a widget. Three nodes v, v', v'' ∈ V(T) constitute a fork if there is a component τ ∈ T such that v, v', v'' ∈ V(τ); v' and v'' are successors of v; and neither of the two nodes v', v'' is a successor of the other one.

Definition 29 Let A = (a_ij) be a team matrix and let T be a widget. Matrix A can label T (written T ∈ T(A)) if there is a mapping l from the nodes of T to A's rows such that for any fork {v, v', v''} ⊆ V(T) and for any column j in the matrix A we have a_{l(u),j} = 0 for some u ∈ {v, v', v''}. Intuitively, there is no column whose machine learns all three segments represented by the nodes of a fork.
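The fork restriction is a purely combinatorial check, so it can be sketched directly in code (the function name is ours). It reproduces the [2,3] versus [1,1] example above:

```python
# The fork restriction as code: a fork whose three nodes are labeled by
# rows r, r', r'' of a team matrix is legal iff no column has a 1 in all
# three rows (Definition 29).

def fork_legal(row_v, row_v1, row_v2):
    return not any(a and b and c for a, b, c in zip(row_v, row_v1, row_v2))

A = [[1, 1, 0], [1, 0, 1], [0, 1, 1]]  # symmetric [2,3]-team matrix
B = [[1]]                              # [1,1]-team matrix

print(fork_legal(A[0], A[1], A[2]))  # True: A labels the fork legally
print(fork_legal(B[0], B[0], B[0]))  # False: B's only row fails on a fork
```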

Next, we link the possibility of labeling widgets to the existence of winning strategies in the game.

Lemma 30 T(A) ⊈ T(B) if and only if there is a strategy of A that wins against any (even non-recursive) strategy of B.

Proof: Assume there is a winning strategy for A. We build a tree consisting of positions in the game. Each position is a tree with every node labeled by both a row of A and a row of B. It may contain one node that is labeled by a row of A only. (This node is the last node inserted by A that has not been labeled by B yet.) The root R is the position after the first move of A. For a position R (for example, the root) where the last move has been made by A, its children are all positions that result from possible responses of B in position R. For a position R where the last move has been made by B, we have only one child: the position after A makes a move in position R according to the winning strategy.

If this tree has an infinite branch, there is a strategy for B that keeps A in the game forever (and, therefore, wins the game). This strategy just follows the infinite branch in the tree of positions. It may be nonrecursive (if the infinite branch is nonrecursive). Hence, if A wins against any strategy of B, every branch is finite. By König's lemma, this means that the tree is finite.

Now, we show how to make a widget T from this tree of positions. For every position P after a move of A, we make one node v_P. Its parent is the v_R corresponding to the position R after A inserted the parent of the node v that was inserted last in P. (This may not be the same as the parent of P in the tree of positions, because A may have inserted other nodes between inserting the parent of v into the game tree and inserting v.) We also have one component τ_P for each position P. It consists of v_P and v_R for all R that are ancestors of P. When viewed separately from the rest of T, τ_P is the same as the game tree in the position P.
(This can be shown by induction. For the root R, both the game tree and τ_R consist of just one node. For a non-root position P, we consider its parent R. By the inductive assumption, τ_R is the same as the game tree in the position R. τ_P consists of τ_R and one more node v_P. Similarly, the game tree in P consists of the game tree in R and one more node. Further, the new node is the child of the same node in both trees. Therefore, τ_P is the same as the game tree in the position P.)

To label T by rows of A, we just label every node by the row of A by which it gets labeled during the game. It is easy to see that this is a correct labeling. (If v, v' and v'' form a fork, they are all contained in some component τ_R ∈ T. The tree τ_R appears in the game and is labeled correctly in the game.) On the other hand, if B could label T, we could take the strategy for B that labels each node inserted by A according to the B-labeling of T. This strategy would be able to respond to every move of A, contradicting the assumption that A wins against any strategy of B. Hence, the widget T can be labeled by A but not by B. We have shown that the existence of a winning strategy for A implies T(A) ⊈ T(B). The other direction (T(A) ⊈ T(B) implies the existence of a winning strategy for A) is shown similarly to the theorem in the next section. We omit the proof here to avoid duplication.

3.2 Diagonalization

Here we show how to construct examples of functions that a B-team N cannot FIN-learn whenever B cannot label some widget T. All counterexamples will be functions of finite support.

Lemma 31 Let T be a widget, let B be a team matrix with T ∉ T(B), and let N be a [B]FIN-learning team. There is an algorithm which, for nodes v ∈ V(T), builds functions f_v by enumerating their initial segments σ_{v,s} (i.e. f_v = ∪_s σ_{v,s}). Only one of the functions f_v will be defined on arbitrarily large segments. Let f_{T,N} denote the only total function f_v yielded by this construction. Furthermore:

1. f_{T,N} is not [B]FIN-learned by N.
2. f_{T,N}(0) encodes the team N, i.e. f_{T,N}(0) = ⟨n_1, …, n_j⟩, where n_1, …, n_j are indices for the machines in the team N = (N_1, …, N_j).
3. Consider nonempty segments σ_{v,s}, σ_{w,t} at some stages s and t. We have σ_{v,s} ⊏ σ_{w,t} iff w is a successor of v in the widget T. (Informally, each tree component τ ∈ T reflects the ordering of all nonempty segments by the prefix order.)

4. At each stage s below there is a nonempty subset of components T_s ⊆ T such that each component τ ∈ T_s contains all v ∈ V(T) such that σ_{v,s} is nonempty.

Proof: Our construction proceeds in effective stages of finite extension. At each stage s we extend σ_{v,s} for just one node v = v[s], which we call the active node. As soon as N learns σ_{v[s],s} according to some row of B, we label the node v[s] with the respective row of B, stop defining f_{v[s]} and pick another active node v[s+1]. We eventually want to converge to some active vertex v[s] such that no row of B ever learns any segment σ_{v[s],s}. Then f_{v[s]} is the total function f_{T,N} not learned by N (as required by Property 1 of the Lemma). To ensure this, we will pick the subset of components T_s at every stage so that B cannot extend its labeling to all the components of T_s. Initially we have T_0 = T, no vertices are labeled yet, and it is known that T ∉ T(B). Subsequently B will label some of the nodes in T, and T_s will shrink, since every τ ∈ T_s has to contain all the nodes which ever become active. We proceed with the algorithm.

Stage 0. Define σ_{v,0} to be empty for all v ∈ V(T). Let v_0 be the root of T. Set the active node v[0] = v_0 and define σ_{v[0],0}(0) = ⟨n_1, …, n_j⟩, i.e. a string of length 1 encoding the entire team N. Go to Stage 1.

Stage s. Let v[s−1] be the active vertex from the previous stage. Define σ_{v[s−1],s} = σ_{v[s−1],s−1}⟨0⟩. Simulate the team N on σ_{v[s−1],s} for s steps (i.e. consider all conjectures output by machines in the team N, simulate all these conjectures for s steps, and see whether they are defined and equal to the values of σ_{v[s−1],s}). If no row in the matrix B learns the segment, define the active node v[s] = v[s−1] and go to Stage s+1. If one row of B does learn σ_{v[s−1],s}, label the node v[s−1] in the widget T with the respective row of B and select a new active node v[s]. Define

T_s = {τ ∈ T : {v[0], …, v[s−1]} ⊆ V(τ)}.

We distinguish two cases.

Case 1.
There are unlabeled nodes in the overlapping part of all components τ ∈ T_s, i.e. in the set ∩_{τ ∈ T_s} V(τ). Pick any of these as the new active node v[s] and go to Stage s+1.

Case 2. The intersection of all components in T_s has no more unlabeled vertices. In this case we have to shrink T_s. Consider all nodes v' which have

a labeled parent. Assume inductively that the current subset of components T_s ⊆ T is such that B cannot extend its labeling to all of T_s. We claim that at least one of the following diminished subsets of components

T_{s,v'} = {τ ∈ T : {v[0], …, v[s−1], v'} ⊆ V(τ)}

also is such that B cannot extend its labeling to T_{s,v'}. Indeed, assume that B can label all of each T_{s,v'}. Pick the maximal components of each T_{s,v'}, i.e. those components which are not proper subsets of other components (we consider widgets as sets of their components). By Property 3 in Definition 27, different maximal T_{s,v'}'s do not share any components. Putting together their labelings, we could extend the current labeling of v[0], …, v[s−1] to all of T_s, contrary to the inductive assumption. Consider the smallest v' (assume that all nodes are enumerated) such that the current labeling cannot be extended to T_{s,v'}. Define the next active vertex v[s] to be v' and set σ_{v[s],s} = σ_{u,s}⟨s⟩, where u is the parent of v[s]. Go to Stage s+1. End of stage s and the algorithm.

At every stage we label a new vertex with a row of B. Since B cannot label the entire tree T, at some stage s, B will stop labeling nodes, and the active node v[s] will stay active forever. Consequently, σ_{v[s],s'} will add a new zero at every stage s' > s, and f_{v[s]} will be total and almost everywhere zero. It is not learned by the team N, establishing Property 1. We defined σ_{v[0],0}(0) = ⟨n_1, …, n_j⟩ in Stage 0; any other σ_{v,s} is an extension of σ_{v[0],0}, establishing Property 2. A child segment, when it is first created, adds one extra nonzero value s to the parent's segment, establishing Property 3. Assume that the active node stabilizes in Stage s. Since B cannot extend the labeling to T_s, T_s is nonempty, establishing Property 4.

Lemma 32 Let A and B be team matrices, and let T be some widget such that T ∈ T(A), T ∉ T(B).
Let W = ⋃_𝓝 {f_{T,𝓝}} be the union containing all functions f_{T,𝓝} from Lemma 31, where 𝓝 runs over all B-teams. Then there is a team 𝓜 which [A]FIN-learns the entire class W.

Proof: We describe a team 𝓜 that has a machine M for each column of A. We show how 𝓜 learns an arbitrary f ∈ W. Input f(0) = ⟨n_1, …, n_j⟩ and restore the team 𝓝 we are diagonalizing against. We know that f = f_{T,𝓝} by Property 2 of Lemma 31. We simulate the entire segment construction algorithm from that lemma.

Since T ∈ T(A), we can label T with the rows of A. Each machine M in the team 𝓜 reads values of f and outputs its conjecture at the first moment when there is a vertex v ∈ V(T) such that

1. v is labeled by a row which includes M, and
2. v becomes active at some stage s of the construction, and we have received σ_{v,s} in the input.

At this moment M outputs a conjecture h which is defined as follows. Set φ_h(x) = σ_{v,s}(x) for each x < |σ_{v,s}|. Each machine M ∈ 𝓜 continues to simulate the algorithm of Lemma 31. Whenever it discovers a new segment σ_{v′,s′} which extends the currently defined φ_h, then φ_h is made equal to this segment.

We now prove that the team 𝓜 described above indeed [A]FIN-learns every f ∈ W. Let f = f_w for some w ∈ V(T). The vertex w is labeled by some row of A. We claim that all machines in this row learn f_w. Assume that some M ∈ 𝓜 in that row first outputs its conjecture at v. At the stage s when M outputs its conjecture, we have σ_{v,s} ⊆ σ_{w,t} for every t for which σ_{w,t} is nonempty. By Property 3 of Lemma 31 we have that v = w or v is an ancestor of w. If v = w, the conjecture of machine M extends itself to σ_{v,s′} for ever larger s′, so it is total and computes f_w = f_v. If w is a successor of v, there is some component C ∈ T such that the whole path from v down to w lies in V(C). Since the labeling by A's rows is legal, M cannot participate in the rows which label any successors of v that do not lie on the path from v to w. Therefore its conjecture always extends to segments compatible with f_w, and ultimately to the infinitely growing sequence of segments σ_{w,s′}, i.e., again φ_h computes f_w. □

Theorem 33 T(A) ⊈ T(B) implies [A]FIN ⊈ [B]FIN.

Proof: Pick T ∈ T(A) \ T(B). Define W = {f_{T,𝓝} : 𝓝 is any B-team} (as in Lemma 32). Any B-team 𝓝 fails to [B]FIN-learn the total function f_{T,𝓝} by Property 1 of Lemma 31. On the other hand, there is a team 𝓜 which [A]FIN-learns the whole class W by Lemma 32. □

3.3 Simulation

Lemma 34 If, for any strategy of A, there is a recursive strategy for B that keeps A in the game forever, then [A]FIN ⊆ [B]FIN.

Proof: Let W ∈ [A]FIN and let 𝓜 be an A-team learning W. We exhibit a team 𝓝 which [B]FIN-learns W. Fix a function f ∈ W. By Lemma 26, build a segment tree labeled by rows of A. Use the winning strategy for B to label it with rows of B as well. We now describe when 𝓝 outputs conjectures, and how the conjectures themselves are defined. Whenever some M_j ∈ 𝓝 first participates in some label, it outputs a conjecture h_j. The function φ_{h_j} coincides with the segment of the labeled node. Later, whenever a new segment is labeled by some row of B, we extend φ_{h_j} to that segment. Since the tree is legally labeled by B, there are no forks, i.e., for each φ_{h_j} there cannot be mutually incompatible extensions.

We claim that 𝓝 learns f. Indeed, since 𝓜 learns f, some row of A labels the infinite branch of initial segments of f. Since B has finitely many rows, some row of B labels infinitely many nodes on that branch. This means that all conjectures output by 𝓝 on that row are always consistent with f and are defined on a growing sequence of its initial segments. Therefore these machines learn the function f. □

By Lemma 30, if T(A) ⊆ T(B), then, for every strategy of A, there is a (possibly nonrecursive) strategy for B that wins the game. Next, we show that if there is such a strategy for B, then there is a recursive strategy as well. Together with Lemma 34, this implies [A]FIN ⊆ [B]FIN. To show the existence of a recursive strategy for B winning against any A-team in the game, we modify the rules of the game so that only finitely many different positions are possible. Then any winning strategy is finite and, therefore, recursive.

First, we bound the depth of the tree. Let P be a node. We say that the label of P uses the j-th column if a_{ij} = 1, where i is the row labeling P. The set Used_A(P) consists of those columns of A that are used by the label of P or of an ancestor of P.
The set Used_B(P) is defined similarly, with B instead of A. A node P and its child P′ are equivalent if two conditions are satisfied:

1. The sets Used_A and Used_B are the same for P and P′.
2. Any column used by the label of P′ is also used by a label of one of its descendants.

The sets Used_A and Used_B never change, because the ancestors of P and P′ and their labels never change. Therefore, if P and its child P′ are equivalent at some stage of the game, they are equivalent at all later stages as well.

Lemma 35 Inserting a child of P and labeling it by a row i is possible if and only if it is possible to insert a child of P′ and label it by i.

Proof: Assume that it is impossible to label the new child C by the row i. This means there is a fork formed by R, C and C′ (where R is an ancestor of C and C′ is some other child of R), and R and C′ already have labels using the same column j which has a 1 in the row i.

Case 1. R, C and C′ form a fork if we make C a child of P′. If R is not the same as P′, this remains a fork if we make C a child of P. If R = P′, notice that the label of P or of one of its ancestors uses the column j as well (because Used_A(P) = Used_A(P′)), and we can take this node as R instead of P′. Then we get a fork for C as a child of P.

Case 2. R, C and C′ form a fork if we make C a child of P. This is a fork for a child of P′ as well, unless C′ is the same as P′. However, any column used by the label of P′ is also used by one of its descendants, and we can replace P′ by its descendant if necessary. □

We define the bounded-depth game as the game with the additional rule that, whenever a node P and its child P′ become equivalent, P′ is deleted and all children of P′ (together with their subtrees) are moved so that they become children of P. The argument above shows

Lemma 36 If A can win against B in the original game, then A can win against B in the bounded-depth game as well.

Lemma 37 Let n_A and n_B be the numbers of columns in A and B, respectively. The depth of the game tree in the bounded-depth game never exceeds (n_A + n_B)^2.

Proof: For a contradiction, assume that the depth is more than (n_A + n_B)^2 while no node is equivalent to its parent. Take the path from the root to a node at depth (n_A + n_B)^2.
Let P_0, P_1, …, P_k be all the nodes on this path such that at least one of the sets Used_A(P_i), Used_B(P_i) differs from the corresponding set for the parent of P_i. Then, for

each node P_i (i > 0), at least one of Used_A(P_i) and Used_B(P_i) is larger than Used_A(P_{i−1}) or Used_B(P_{i−1}), respectively, and the other is the same or larger (because these sets never decrease when we move from a parent to a child). There are n_A columns that can occur in Used_A and n_B columns that can occur in Used_B. Therefore, these sets can increase at most n_A + n_B times, i.e., k ≤ n_A + n_B. This implies that, for some i, there are more than n_A + n_B nodes between P_{i−1} and P_i. Let R_0, …, R_j be these nodes (not including P_i). For each l > 0, R_l being non-equivalent to its parent R_{l−1} means that there is a column used by one of the labels of R_l that is not used by any of its descendants, including R_{l+1}, …, R_j. Having one such column for every l yields more than n_A + n_B distinct columns, a contradiction. □

The second step is to bound how much the tree can branch. If P′ and P″ have the same parent P and the subtrees rooted at P′ and P″ are identical (have the same topology and are labeled in the same way by both A and B), we allow removing P″ together with the subtree rooted at it, so that only one of the two identical subtrees remains. After every move of B, we check for such pairs of subtrees and perform the removal, if necessary.

Lemma 38 The depth-bounded game with removal of identical subtrees has only finitely many possible positions.

Proof: Any position in this game is a labeled tree of depth at most (n_A + n_B)^2 (Lemma 37) with no two siblings P′ and P″ such that the subtrees rooted at P′ and P″ are identical. We need to show that there are only finitely many such trees. We do this by induction over the depth of the tree. A tree of depth 0 is just a single node, and it can be labeled in finitely many ways. For the inductive step, a tree of depth i consists of the root and one or several subtrees of depth i − 1, no two of which are identical. The number of subtrees is bounded by the number of labeled trees of depth i − 1 without identical subtrees.
This number is finite by the inductive assumption, and every subtree can be chosen in finitely many ways. Therefore, if the number of trees of depth i − 1 is finite, then the number of trees of depth i is finite as well. □

The problem is that, to win, A may need to create two subtrees that are identical for some time but become different later. This means that the

game with removal of identical subtrees is not necessarily equivalent to the original game. To fix this problem, we allow a new type of move. Let P be a node and P′ a child of P such that the labels in the subtree rooted at P′ do not use any columns from the labels of P and the ancestors of P. Then A can create P″, another child of P, and a subtree rooted at P″ which is precisely identical (including the labels by A and B) to the one rooted at P′. We call this duplicating a subtree. The next move of A after duplicating a subtree must insert a new node in a way that makes the two subtrees different.

With both removal and duplication, it is clear that A can do anything she can do in the original game. If A needs two copies of an identical subtree and one of them has been removed earlier, she can get the second copy by duplication. Next, we prove that anything A can do with both removal and duplication can be done in the original game in a slightly different way.

Lemma 39 A can win against B in the depth-bounded game with removal and duplication if and only if A can win against B in the depth-bounded game.

Proof: If A can win against B in the depth-bounded game, A can win against B in the depth-bounded game with removal and duplication by using the same strategy (removing identical subtrees when they appear and restoring them by duplication when necessary).

For the other direction, assume A can win against B in the depth-bounded game with removal and duplication. Let d be the number of different positions possible in this game. By Lemma 38, this is a constant. If A can win against B, A can do so without repeating positions. Therefore, A can win in at most d moves. To win in the original depth-bounded game, A plays in the same way as in the depth-bounded game with removals and duplications. The only difference is that, in some cases, A creates several identical subtrees instead of one subtree, to simulate possible duplication moves in the future. Next, we describe this process more formally.
A node P in the game with removals and duplications corresponds to a set of nodes S(P) in the game without them. All nodes of S(P) must be labeled in the same way as P. If P′ is a child of P, then every node in S(P′) is a child of some node in S(P).

Inserting the first node (the root) is done in the same way in both versions of the game. Thus S(R), for R the root, consists of the root only. For each subsequent move of A, we consider three cases:

1. The move duplicates a subtree rooted at P. In this case, we split S(P) into two equal parts, and one of them becomes S(P′) for the newly created node P′. For each descendant Q, we define S(Q) as the set of all nodes that have parents in S(Q′), where Q′ is the parent of Q.

2. The move inserts a new child P′ of the node P, and the label of P′ has a machine in common with P or one of its ancestors. In this case, we insert a child of every node in S(P) and label it in the same way. Then we wait for the answers of B. We choose the label that B has used most often on the new nodes and define S(P′) as the set of new nodes having this label.

3. The move inserts a new child P′ of the node P, and the label of P′ has no machine in common with P or any of its ancestors. Then we insert n_B^d children of every node in S(P). For every node in S(P), we choose the "most popular" response of B (the label that B has used for the largest number of new children). Then we choose the label that has been the "most popular" most often, restrict S(P) to those nodes for which this label is the most popular, and set S(P′) equal to the set of children of nodes in S(P) having this label.

In the second and the third case, the moves of A are always possible. We need to show that this also holds in the first case, i.e., that S(P) always has at least two elements, so that we can split it into two parts. If the label of P used a column that is used to label an ancestor of P, duplication would create a fork (the ancestor, P and the new node) and would therefore be impossible. Hence the label of P does not use such columns. Then P was inserted by step 3, and S(P) originally contained n_B^d elements.
Selecting a majority in the next application of step 2 or 3 decreases |S(P)| by at most a factor of n_B (because there are only n_B possible labels for B), and simulating a duplication (step 1) decreases it by a factor of 2. As we have already noticed, the game lasts for at most d moves. Therefore, even before the last move, S(P) still contains at least n_B ≥ 2 elements. □
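The size estimate implicit in the last step can be written out; this is a sketch of our reading of the bound, assuming each majority selection divides |S(P)| by at most n_B and each simulated duplication by 2 ≤ n_B:

```latex
|S(P)| \;\ge\; \frac{n_B^{\,d}}{n_B^{\,d-1}} \;=\; n_B \;\ge\; 2 ,
```

since at most d − 1 of the at most d moves shrink S(P) before the final one.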

Theorem 40 Let A and B be two team matrices. Then T(A) ⊆ T(B) implies [A]FIN ⊆ [B]FIN.

Proof: Let T(A) ⊆ T(B). By Lemma 30, B can win against A in the game. Then B can win against A in the depth-bounded game with removals and duplications. Every winning strategy in this game is finite. By inspecting the proofs of Lemmas 36 and 39, we see that this implies a finite winning strategy for B in the original game. By Lemma 34, [A]FIN ⊆ [B]FIN. □

The proof above also gives us a decidability result.

Theorem 41 There is an algorithm that, given A and B, decides whether [A]FIN ⊆ [B]FIN.

Proof: [A]FIN ⊆ [B]FIN if and only if T(A) ⊆ T(B). This, in turn, is equivalent to the existence of a winning strategy for B in the depth-bounded game with removals and duplications (Lemmas 30, 36, 39). For this game there are only finitely many positions. Therefore, one can enumerate them all, enumerate all strategies for this game, and determine whether B has a strategy that wins against an arbitrary strategy of A. □

4 More on Asymmetric Learning

Theorems 33 and 40 give an algorithm to decide whether [A]FIN ⊆ [B]FIN for arbitrary asymmetric teams A and B. In this section we examine properties of that algorithm and further explore the relationships between the asymmetric team classes.

4.1 Basic Reductions and Duality

Definition 42 Let A = (a_{ij}) and B = (b_{ij}) be two team matrices. There is a basic reduction from A to B if there are functions ξ : {1, …, n_B} → {1, …, n_A} and ρ : {1, …, m_A} → {1, …, m_B} such that

(∀i ∈ {1, …, m_A}) (∀j ∈ {1, …, n_B})  a_{i,ξ(j)} ≥ b_{ρ(i),j}.  (1)

The reduction is written A ⪯ B. The pair of mappings (ξ, ρ) is a witness for the reduction A ⪯ B.
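Definition 42 can be checked by brute force. The sketch below is ours, not the paper's: it assumes the usual convention that the team matrix [k, n] has n columns and one row for every 0/1 vector with exactly k ones, and the names `threshold_matrix` and `basic_reduction` are invented for illustration. Since inequality (1) constrains each row i of A independently, ρ(i) can be chosen row by row, and only ξ has to be searched globally:

```python
from itertools import combinations, product

def threshold_matrix(k, n):
    """Rows of the team matrix [k, n]: all 0/1 vectors of length n
    with exactly k ones (assumed convention)."""
    return [tuple(1 if j in ones else 0 for j in range(n))
            for ones in combinations(range(n), k)]

def basic_reduction(A, B):
    """Search for a witness of A <= B as in Definition 42: a map xi
    from B's columns to A's columns such that every row i of A
    dominates some row of B on the chosen columns, i.e.
    A[i][xi[j]] >= B[r][j] for all j; rho(i) = r is picked per row."""
    n_A, n_B = len(A[0]), len(B[0])
    for xi in product(range(n_A), repeat=n_B):
        if all(any(all(A[i][xi[j]] >= row[j] for j in range(n_B))
                   for row in B)
               for i in range(len(A))):
            return True
    return False

# The example from the text: [1,2] <= [2,4], by repeating each
# of the two strategies twice (xi = (0, 0, 1, 1)).
print(basic_reduction(threshold_matrix(1, 2), threshold_matrix(2, 4)))  # prints True
```

The exponential search over ξ matches the NP upper bound of Theorem 43; no polynomial-time test is expected unless P = NP.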

Intuitively, A ⪯ B whenever [A]FIN ⊆ [B]FIN can be proved in the trivial way, i.e., by reusing the old n_A strategies in n_B new places. For example, the inclusion [1,2]FIN ⊆ [2,4]FIN can be established by repeating each strategy twice; in our notation, [1,2] ⪯ [2,4].

Theorem 43 The following problem is NP-complete: for any two team matrices A, B, determine whether A ⪯ B.

Proof: The problem is in NP. We can verify A ⪯ B by a nondeterministic algorithm which guesses all values of the witness functions ξ, ρ. After that, we can check all the inequalities from Definition 42 in polynomial time.

The problem is NP-hard. We show that CLIQUE is reducible to the problem of determining whether A ⪯ B. Let G(V, E) be a graph with the set of vertices V = {v_1, …, v_m} and the set of undirected edges E ⊆ V × V. For the given graph G we construct a team matrix A_G with m columns and m(m−1) rows: each column corresponds to a vertex v_j ∈ V, and each row corresponds to a pair (v_i, v_k) of different vertices in V. Accordingly, we label the entries of the matrix A_G by a_{(i,k),j}. Define a_{(i,k),j} = 0 if (i = j or k = j) and (v_i, v_k) ∉ E, and a_{(i,k),j} = 1 otherwise.

We claim that A_G ⪯ [n−1, n] iff G contains a clique of size n.

(⇒) Let (ξ, ρ) be a witness for A_G ⪯ [n−1, n]. Consider the columns ξ(1), …, ξ(n) in A_G. Each row of A_G has at most one 0 on its intersections with these n columns, for otherwise we could not match this row with a row of the matrix [n−1, n] associated with the threshold function t_n^{n−1}; this would contradict the choice of (ξ, ρ). All the vertices v_{ξ(1)}, …, v_{ξ(n)} in G are pairwise connected. Indeed, assume that (v_{ξ(i)}, v_{ξ(k)}) is not an edge in G. Then we have two zero entries in the same row, namely a_{(ξ(i),ξ(k)),ξ(i)} and a_{(ξ(i),ξ(k)),ξ(k)}, a contradiction.

(⇐) Suppose G contains a clique C ⊆ V of n vertices. Choose a function ξ such that C = {v_{ξ(1)}, …, v_{ξ(n)}}. Each row has at most one 0 on its intersection with the columns ξ(1), …, ξ(n).
Indeed, let (i, k) be the label of some row in A_G. If C contains both v_i and v_k, then (v_i, v_k) ∈ E and the row contains no zeros at all; otherwise at most one of the columns i, k occurs among ξ(1), …, ξ(n), so the row has at most one zero there.
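The reduction of Theorem 43 is simple to carry out concretely. The sketch below is ours: `clique_matrix` builds A_G under our reading of the definition above (rows indexed by ordered pairs of distinct vertices), `threshold_matrix` and `basic_reduction` encode the assumed conventions from Definition 42, and the checks exercise only tiny hand-verified instances:

```python
from itertools import combinations, product

def clique_matrix(m, edges):
    """A_G from Theorem 43: one column per vertex j in {0, ..., m-1},
    one row per ordered pair (i, k) of distinct vertices, with
    a[(i,k)][j] = 0 iff (j == i or j == k) and {v_i, v_k} is not an edge."""
    E = {frozenset(e) for e in edges}
    return [tuple(0 if (j == i or j == k) and frozenset((i, k)) not in E else 1
                  for j in range(m))
            for i in range(m) for k in range(m) if i != k]

def threshold_matrix(k, n):
    # rows of [k, n]: all 0/1 vectors of length n with exactly k ones
    return [tuple(1 if j in ones else 0 for j in range(n))
            for ones in combinations(range(n), k)]

def basic_reduction(A, B):
    # brute-force witness search for A <= B (Definition 42)
    n_A, n_B = len(A[0]), len(B[0])
    return any(
        all(any(all(A[i][xi[j]] >= row[j] for j in range(n_B)) for row in B)
            for i in range(len(A)))
        for xi in product(range(n_A), repeat=n_B))

# The triangle has a clique of size 3, so A_G <= [2, 3] holds:
triangle = clique_matrix(3, [(0, 1), (0, 2), (1, 2)])
print(basic_reduction(triangle, threshold_matrix(2, 3)))  # prints True
```

The edgeless graph on three vertices has no clique of size 2, and indeed `basic_reduction(clique_matrix(3, []), threshold_matrix(1, 2))` comes out False.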


More information

for average case complexity 1 randomized reductions, an attempt to derive these notions from (more or less) rst

for average case complexity 1 randomized reductions, an attempt to derive these notions from (more or less) rst On the reduction theory for average case complexity 1 Andreas Blass 2 and Yuri Gurevich 3 Abstract. This is an attempt to simplify and justify the notions of deterministic and randomized reductions, an

More information

NP-Completeness. Andreas Klappenecker. [based on slides by Prof. Welch]

NP-Completeness. Andreas Klappenecker. [based on slides by Prof. Welch] NP-Completeness Andreas Klappenecker [based on slides by Prof. Welch] 1 Prelude: Informal Discussion (Incidentally, we will never get very formal in this course) 2 Polynomial Time Algorithms Most of the

More information

1 Introduction A general problem that arises in dierent areas of computer science is the following combination problem: given two structures or theori

1 Introduction A general problem that arises in dierent areas of computer science is the following combination problem: given two structures or theori Combining Unication- and Disunication Algorithms Tractable and Intractable Instances Klaus U. Schulz CIS, University of Munich Oettingenstr. 67 80538 Munchen, Germany e-mail: schulz@cis.uni-muenchen.de

More information

On the Intrinsic Complexity of Learning Recursive Functions

On the Intrinsic Complexity of Learning Recursive Functions On the Intrinsic Complexity of Learning Recursive Functions Sanjay Jain and Efim Kinber and Christophe Papazian and Carl Smith and Rolf Wiehagen School of Computing, National University of Singapore, Singapore

More information

Connectedness. Proposition 2.2. The following are equivalent for a topological space (X, T ).

Connectedness. Proposition 2.2. The following are equivalent for a topological space (X, T ). Connectedness 1 Motivation Connectedness is the sort of topological property that students love. Its definition is intuitive and easy to understand, and it is a powerful tool in proofs of well-known results.

More information

Discrete Applied Mathematics

Discrete Applied Mathematics Discrete Applied Mathematics 194 (015) 37 59 Contents lists available at ScienceDirect Discrete Applied Mathematics journal homepage: wwwelseviercom/locate/dam Loopy, Hankel, and combinatorially skew-hankel

More information

Learning Large-Alphabet and Analog Circuits with Value Injection Queries

Learning Large-Alphabet and Analog Circuits with Value Injection Queries Learning Large-Alphabet and Analog Circuits with Value Injection Queries Dana Angluin 1 James Aspnes 1, Jiang Chen 2, Lev Reyzin 1,, 1 Computer Science Department, Yale University {angluin,aspnes}@cs.yale.edu,

More information

Notes on the Matrix-Tree theorem and Cayley s tree enumerator

Notes on the Matrix-Tree theorem and Cayley s tree enumerator Notes on the Matrix-Tree theorem and Cayley s tree enumerator 1 Cayley s tree enumerator Recall that the degree of a vertex in a tree (or in any graph) is the number of edges emanating from it We will

More information

The domination game played on unions of graphs

The domination game played on unions of graphs The domination game played on unions of graphs Paul Dorbec 1,2 Gašper Košmrlj 3 Gabriel Renault 1,2 1 Univ. Bordeaux, LaBRI, UMR5800, F-33405 Talence 2 CNRS, LaBRI, UMR5800, F-33405 Talence Email: dorbec@labri.fr,

More information

Nordhaus-Gaddum Theorems for k-decompositions

Nordhaus-Gaddum Theorems for k-decompositions Nordhaus-Gaddum Theorems for k-decompositions Western Michigan University October 12, 2011 A Motivating Problem Consider the following problem. An international round-robin sports tournament is held between

More information

Then RAND RAND(pspace), so (1.1) and (1.2) together immediately give the random oracle characterization BPP = fa j (8B 2 RAND) A 2 P(B)g: (1:3) Since

Then RAND RAND(pspace), so (1.1) and (1.2) together immediately give the random oracle characterization BPP = fa j (8B 2 RAND) A 2 P(B)g: (1:3) Since A Note on Independent Random Oracles Jack H. Lutz Department of Computer Science Iowa State University Ames, IA 50011 Abstract It is shown that P(A) \ P(B) = BPP holds for every A B. algorithmically random

More information

CHAPTER 7. Connectedness

CHAPTER 7. Connectedness CHAPTER 7 Connectedness 7.1. Connected topological spaces Definition 7.1. A topological space (X, T X ) is said to be connected if there is no continuous surjection f : X {0, 1} where the two point set

More information

Löwenheim-Skolem Theorems, Countable Approximations, and L ω. David W. Kueker (Lecture Notes, Fall 2007)

Löwenheim-Skolem Theorems, Countable Approximations, and L ω. David W. Kueker (Lecture Notes, Fall 2007) Löwenheim-Skolem Theorems, Countable Approximations, and L ω 0. Introduction David W. Kueker (Lecture Notes, Fall 2007) In its simplest form the Löwenheim-Skolem Theorem for L ω1 ω states that if σ L ω1

More information

Algebraic Methods in Combinatorics

Algebraic Methods in Combinatorics Algebraic Methods in Combinatorics Po-Shen Loh 27 June 2008 1 Warm-up 1. (A result of Bourbaki on finite geometries, from Răzvan) Let X be a finite set, and let F be a family of distinct proper subsets

More information

CO759: Algorithmic Game Theory Spring 2015

CO759: Algorithmic Game Theory Spring 2015 CO759: Algorithmic Game Theory Spring 2015 Instructor: Chaitanya Swamy Assignment 1 Due: By Jun 25, 2015 You may use anything proved in class directly. I will maintain a FAQ about the assignment on the

More information

Online Learning, Mistake Bounds, Perceptron Algorithm

Online Learning, Mistake Bounds, Perceptron Algorithm Online Learning, Mistake Bounds, Perceptron Algorithm 1 Online Learning So far the focus of the course has been on batch learning, where algorithms are presented with a sample of training data, from which

More information

Efficient Approximation for Restricted Biclique Cover Problems

Efficient Approximation for Restricted Biclique Cover Problems algorithms Article Efficient Approximation for Restricted Biclique Cover Problems Alessandro Epasto 1, *, and Eli Upfal 2 ID 1 Google Research, New York, NY 10011, USA 2 Department of Computer Science,

More information

Chapter 3 Deterministic planning

Chapter 3 Deterministic planning Chapter 3 Deterministic planning In this chapter we describe a number of algorithms for solving the historically most important and most basic type of planning problem. Two rather strong simplifying assumptions

More information

COUNTING NUMERICAL SEMIGROUPS BY GENUS AND SOME CASES OF A QUESTION OF WILF

COUNTING NUMERICAL SEMIGROUPS BY GENUS AND SOME CASES OF A QUESTION OF WILF COUNTING NUMERICAL SEMIGROUPS BY GENUS AND SOME CASES OF A QUESTION OF WILF NATHAN KAPLAN Abstract. The genus of a numerical semigroup is the size of its complement. In this paper we will prove some results

More information

2 Z. Lonc and M. Truszczynski investigations, we use the framework of the xed-parameter complexity introduced by Downey and Fellows [Downey and Fellow

2 Z. Lonc and M. Truszczynski investigations, we use the framework of the xed-parameter complexity introduced by Downey and Fellows [Downey and Fellow Fixed-parameter complexity of semantics for logic programs ZBIGNIEW LONC Technical University of Warsaw and MIROS LAW TRUSZCZYNSKI University of Kentucky A decision problem is called parameterized if its

More information

Relations Graphical View

Relations Graphical View Introduction Relations Computer Science & Engineering 235: Discrete Mathematics Christopher M. Bourke cbourke@cse.unl.edu Recall that a relation between elements of two sets is a subset of their Cartesian

More information

Equational Logic. Chapter Syntax Terms and Term Algebras

Equational Logic. Chapter Syntax Terms and Term Algebras Chapter 2 Equational Logic 2.1 Syntax 2.1.1 Terms and Term Algebras The natural logic of algebra is equational logic, whose propositions are universally quantified identities between terms built up from

More information

Incomplete version for students of easllc2012 only. 6.6 The Model Existence Game 99

Incomplete version for students of easllc2012 only. 6.6 The Model Existence Game 99 98 First-Order Logic 6.6 The Model Existence Game In this section we learn a new game associated with trying to construct a model for a sentence or a set of sentences. This is of fundamental importance

More information

CS 350 Algorithms and Complexity

CS 350 Algorithms and Complexity 1 CS 350 Algorithms and Complexity Fall 2015 Lecture 15: Limitations of Algorithmic Power Introduction to complexity theory Andrew P. Black Department of Computer Science Portland State University Lower

More information

Computer Science 385 Analysis of Algorithms Siena College Spring Topic Notes: Limitations of Algorithms

Computer Science 385 Analysis of Algorithms Siena College Spring Topic Notes: Limitations of Algorithms Computer Science 385 Analysis of Algorithms Siena College Spring 2011 Topic Notes: Limitations of Algorithms We conclude with a discussion of the limitations of the power of algorithms. That is, what kinds

More information

1 Basic Combinatorics

1 Basic Combinatorics 1 Basic Combinatorics 1.1 Sets and sequences Sets. A set is an unordered collection of distinct objects. The objects are called elements of the set. We use braces to denote a set, for example, the set

More information

Part V. Intractable Problems

Part V. Intractable Problems Part V Intractable Problems 507 Chapter 16 N P-Completeness Up to now, we have focused on developing efficient algorithms for solving problems. The word efficient is somewhat subjective, and the degree

More information

Computability and Complexity Theory: An Introduction

Computability and Complexity Theory: An Introduction Computability and Complexity Theory: An Introduction meena@imsc.res.in http://www.imsc.res.in/ meena IMI-IISc, 20 July 2006 p. 1 Understanding Computation Kinds of questions we seek answers to: Is a given

More information

Lecture 29: Computational Learning Theory

Lecture 29: Computational Learning Theory CS 710: Complexity Theory 5/4/2010 Lecture 29: Computational Learning Theory Instructor: Dieter van Melkebeek Scribe: Dmitri Svetlov and Jake Rosin Today we will provide a brief introduction to computational

More information

Lecture 2: Connecting the Three Models

Lecture 2: Connecting the Three Models IAS/PCMI Summer Session 2000 Clay Mathematics Undergraduate Program Advanced Course on Computational Complexity Lecture 2: Connecting the Three Models David Mix Barrington and Alexis Maciel July 18, 2000

More information

an efficient procedure for the decision problem. We illustrate this phenomenon for the Satisfiability problem.

an efficient procedure for the decision problem. We illustrate this phenomenon for the Satisfiability problem. 1 More on NP In this set of lecture notes, we examine the class NP in more detail. We give a characterization of NP which justifies the guess and verify paradigm, and study the complexity of solving search

More information

We are going to discuss what it means for a sequence to converge in three stages: First, we define what it means for a sequence to converge to zero

We are going to discuss what it means for a sequence to converge in three stages: First, we define what it means for a sequence to converge to zero Chapter Limits of Sequences Calculus Student: lim s n = 0 means the s n are getting closer and closer to zero but never gets there. Instructor: ARGHHHHH! Exercise. Think of a better response for the instructor.

More information

Lecture 17: Trees and Merge Sort 10:00 AM, Oct 15, 2018

Lecture 17: Trees and Merge Sort 10:00 AM, Oct 15, 2018 CS17 Integrated Introduction to Computer Science Klein Contents Lecture 17: Trees and Merge Sort 10:00 AM, Oct 15, 2018 1 Tree definitions 1 2 Analysis of mergesort using a binary tree 1 3 Analysis of

More information

Inclusion Problems in Parallel Learning and Games

Inclusion Problems in Parallel Learning and Games journal of computer and system sciences 52, 403420 (1996) article no. 0031 Inclusion Problems in Parallel Learning and Games Martin Kummer* and Frank Stephan - Institut fu r Logik, Komplexita t, und Deduktionssysteme,

More information

Erdös-Ko-Rado theorems for chordal and bipartite graphs

Erdös-Ko-Rado theorems for chordal and bipartite graphs Erdös-Ko-Rado theorems for chordal and bipartite graphs arxiv:0903.4203v2 [math.co] 15 Jul 2009 Glenn Hurlbert and Vikram Kamat School of Mathematical and Statistical Sciences Arizona State University,

More information

Reverse mathematics and marriage problems with unique solutions

Reverse mathematics and marriage problems with unique solutions Reverse mathematics and marriage problems with unique solutions Jeffry L. Hirst and Noah A. Hughes January 28, 2014 Abstract We analyze the logical strength of theorems on marriage problems with unique

More information

VC-DENSITY FOR TREES

VC-DENSITY FOR TREES VC-DENSITY FOR TREES ANTON BOBKOV Abstract. We show that for the theory of infinite trees we have vc(n) = n for all n. VC density was introduced in [1] by Aschenbrenner, Dolich, Haskell, MacPherson, and

More information

Monotonically Computable Real Numbers

Monotonically Computable Real Numbers Monotonically Computable Real Numbers Robert Rettinger a, Xizhong Zheng b,, Romain Gengler b, Burchard von Braunmühl b a Theoretische Informatik II, FernUniversität Hagen, 58084 Hagen, Germany b Theoretische

More information

An Application of First-Order Logic to a Problem in Combinatorics 1

An Application of First-Order Logic to a Problem in Combinatorics 1 An Application of First-Order Logic to a Problem in Combinatorics 1 I. The Combinatorial Background. A. Families of objects. In combinatorics, one frequently encounters a set of objects in which a), each

More information

Strongly chordal and chordal bipartite graphs are sandwich monotone

Strongly chordal and chordal bipartite graphs are sandwich monotone Strongly chordal and chordal bipartite graphs are sandwich monotone Pinar Heggernes Federico Mancini Charis Papadopoulos R. Sritharan Abstract A graph class is sandwich monotone if, for every pair of its

More information

Chapter 1 The Real Numbers

Chapter 1 The Real Numbers Chapter 1 The Real Numbers In a beginning course in calculus, the emphasis is on introducing the techniques of the subject;i.e., differentiation and integration and their applications. An advanced calculus

More information

COMBINATORIAL GAMES AND SURREAL NUMBERS

COMBINATORIAL GAMES AND SURREAL NUMBERS COMBINATORIAL GAMES AND SURREAL NUMBERS MICHAEL CRONIN Abstract. We begin by introducing the fundamental concepts behind combinatorial game theory, followed by developing operations and properties of games.

More information

Chapter 2. Reductions and NP. 2.1 Reductions Continued The Satisfiability Problem (SAT) SAT 3SAT. CS 573: Algorithms, Fall 2013 August 29, 2013

Chapter 2. Reductions and NP. 2.1 Reductions Continued The Satisfiability Problem (SAT) SAT 3SAT. CS 573: Algorithms, Fall 2013 August 29, 2013 Chapter 2 Reductions and NP CS 573: Algorithms, Fall 2013 August 29, 2013 2.1 Reductions Continued 2.1.1 The Satisfiability Problem SAT 2.1.1.1 Propositional Formulas Definition 2.1.1. Consider a set of

More information

k-degenerate Graphs Allan Bickle Date Western Michigan University

k-degenerate Graphs Allan Bickle Date Western Michigan University k-degenerate Graphs Western Michigan University Date Basics Denition The k-core of a graph G is the maximal induced subgraph H G such that δ (H) k. The core number of a vertex, C (v), is the largest value

More information

A.J. Kfoury y. December 20, Abstract. Various restrictions on the terms allowed for substitution give rise

A.J. Kfoury y. December 20, Abstract. Various restrictions on the terms allowed for substitution give rise A General Theory of Semi-Unication Said Jahama Boston University (jahama@cs.bu.edu) A.J. Kfoury y Boston University (kfoury@cs.bu.edu) December 20, 1993 Technical Report: bu-cs # 93-018 Abstract Various

More information

CS 350 Algorithms and Complexity

CS 350 Algorithms and Complexity CS 350 Algorithms and Complexity Winter 2019 Lecture 15: Limitations of Algorithmic Power Introduction to complexity theory Andrew P. Black Department of Computer Science Portland State University Lower

More information

Uncountable Automatic Classes and Learning

Uncountable Automatic Classes and Learning Uncountable Automatic Classes and Learning Sanjay Jain a,1, Qinglong Luo a, Pavel Semukhin b,2, Frank Stephan c,3 a Department of Computer Science, National University of Singapore, Singapore 117417, Republic

More information

Lecture 4: NP and computational intractability

Lecture 4: NP and computational intractability Chapter 4 Lecture 4: NP and computational intractability Listen to: Find the longest path, Daniel Barret What do we do today: polynomial time reduction NP, co-np and NP complete problems some examples

More information

On shredders and vertex connectivity augmentation

On shredders and vertex connectivity augmentation On shredders and vertex connectivity augmentation Gilad Liberman The Open University of Israel giladliberman@gmail.com Zeev Nutov The Open University of Israel nutov@openu.ac.il Abstract We consider the

More information

ACO Comprehensive Exam October 14 and 15, 2013

ACO Comprehensive Exam October 14 and 15, 2013 1. Computability, Complexity and Algorithms (a) Let G be the complete graph on n vertices, and let c : V (G) V (G) [0, ) be a symmetric cost function. Consider the following closest point heuristic for

More information

Approximation Algorithms for Maximum. Coverage and Max Cut with Given Sizes of. Parts? A. A. Ageev and M. I. Sviridenko

Approximation Algorithms for Maximum. Coverage and Max Cut with Given Sizes of. Parts? A. A. Ageev and M. I. Sviridenko Approximation Algorithms for Maximum Coverage and Max Cut with Given Sizes of Parts? A. A. Ageev and M. I. Sviridenko Sobolev Institute of Mathematics pr. Koptyuga 4, 630090, Novosibirsk, Russia fageev,svirg@math.nsc.ru

More information

Advanced topic: Space complexity

Advanced topic: Space complexity Advanced topic: Space complexity CSCI 3130 Formal Languages and Automata Theory Siu On CHAN Chinese University of Hong Kong Fall 2016 1/28 Review: time complexity We have looked at how long it takes to

More information

CS 395T Computational Learning Theory. Scribe: Mike Halcrow. x 4. x 2. x 6

CS 395T Computational Learning Theory. Scribe: Mike Halcrow. x 4. x 2. x 6 CS 395T Computational Learning Theory Lecture 3: September 0, 2007 Lecturer: Adam Klivans Scribe: Mike Halcrow 3. Decision List Recap In the last class, we determined that, when learning a t-decision list,

More information

Chapter 4: Computation tree logic

Chapter 4: Computation tree logic INFOF412 Formal verification of computer systems Chapter 4: Computation tree logic Mickael Randour Formal Methods and Verification group Computer Science Department, ULB March 2017 1 CTL: a specification

More information