Oblivious and Non-Oblivious Local Search for Combinatorial Optimization
Justin Ward


Oblivious and Non-Oblivious Local Search for Combinatorial Optimization

by

Justin Ward

A thesis submitted in conformity with the requirements
for the degree of Doctor of Philosophy
Graduate Department of Computer Science
University of Toronto

© Copyright 2012 by Justin Ward

Abstract

Oblivious and Non-Oblivious Local Search for Combinatorial Optimization
Justin Ward
Doctor of Philosophy
Graduate Department of Computer Science
University of Toronto
2012

Standard local search algorithms for combinatorial optimization problems repeatedly apply small changes to a current solution to improve the problem's given objective function. In contrast, non-oblivious local search algorithms are guided by an auxiliary potential function, which is distinct from the problem's objective. In this thesis, we compare the standard and non-oblivious approaches for a variety of problems, and derive new, improved non-oblivious local search algorithms for several problems in the area of constrained linear and monotone submodular maximization.

First, we give a new, randomized approximation algorithm for maximizing a monotone submodular function subject to a matroid constraint. Our algorithm's approximation ratio matches both the known hardness of approximation bounds for the problem and the performance of the recent continuous greedy algorithm. Unlike the continuous greedy algorithm, our algorithm is straightforward and combinatorial. In the case that the monotone submodular function is a coverage function, we can obtain a further simplified, deterministic algorithm with improved running time.

Moving beyond the case of single matroid constraints, we then consider general classes of set systems that capture problems that can be approximated well. While previous such classes have focused primarily on greedy algorithms, we give a new class that captures problems amenable to optimization by local search algorithms. We show that several combinatorial optimization problems can be placed in this class, and give a non-oblivious local search algorithm that delivers improved approximations for a variety of specific problems. In contrast, we show that standard local search algorithms give no improvement over known approximation results for these problems, even when allowed to search larger neighborhoods than their non-oblivious counterparts.

Finally, we expand on these results by considering standard local search algorithms for constraint satisfaction problems. We develop conditions under which the approximation ratio of standard local search remains limited even for super-polynomial or exponential local neighborhoods. In the special case of MaxCut, we further show that a variety of techniques, including random or greedy initialization, large neighborhoods, and best-improvement pivot rules, cannot improve the approximation performance of standard local search.

Acknowledgements

I thank the members of both my supervisory and final oral exam committees, Charles Rackoff, Toni Pitassi, Avner Magen, Alasdair Urquhart, and Derek Corneil, as well as my external examiner Anupam Gupta. They provided many useful suggestions, insightful observations, and supportive comments that have helped shape this thesis. Additionally, I would like to thank Maxim Sviridenko for many useful discussions and advice on simplifying some of the proofs presented in the thesis, and Julian Mestre for comments on an initial draft of some results presented in Chapters 5 and 6.

I thank various colleagues and friends at the University, especially Siavosh Benabbas, George Dahl, Golnaz Elahi, Michalis Famelis, Yuval Filmus, Wesley George, Abe Heifets, Dai Tri Man Le, Joel Oren, Jocelyn Simmonds, Colin Stewart, Rory Tulk, and Dustin Wehr, for providing not only intellectual insights and technical help but also levity, camaraderie, and empathy. In addition, I thank Yuval Filmus for his many and considerable contributions to the proofs presented in Chapters 3 and 4. Finally, I thank Lila Fontes for aiding in the translation of relevant portions of Padé's thesis, and for being a close friend and confidant throughout my studies.

I thank the Department of Computer Science and the School of Graduate Studies for providing financial support for my studies at the University of Toronto.

I thank my supervisor Allan Borodin for his sound advice and unwavering support. My research initially involved a significant change of direction for me, and so was accompanied by occasionally daunting periods of self-doubt and uncertainty. Most of all I thank him for maintaining faith in my abilities and prospects, even when I had none. His support, whether intellectual, emotional, moral, or merely financial, gave me the confidence to complete this work, and I hope above all else that he finds the end result to be worthy of his considerable investment.

Finally, I thank my best friend and wife Amy Miller, who has been and remains a steadfast advocate, constant inspiration, and insoluble mystery to me.

Contents

1 Introduction
    A Generic Local Search Algorithm
    Theoretical Results for General Local Search
    Our Contributions

2 Preliminaries
    Notation
        Sets
        Probability
        Miscellaneous
    Linear and Submodular Functions
    Independence Systems and Matroids
    The Greedy Algorithm
    The Continuous Greedy Algorithm
    Partial Enumeration

3 Maximum Coverage
    The Problem
    A Non-Oblivious Local Search Algorithm
    Analysis of the Algorithm
    Obtaining the α Sequence

4 Monotone Submodular Maximization
    A Non-Oblivious Local Search Algorithm
    Analysis of the Algorithm
        Properties of the Sequences γ
        Locality Ratio
        Computing g
    Main Results
    Obtaining the Coefficient Sequences
    Further Properties of g

5 Set Systems for Local Search
    Set Systems for Greedy Approximation Algorithms
    Weak k-Exchange Systems
    Strong k-Exchange Systems
    Applications
        Independent Set in (k+1)-Claw Free Graphs
        Matroid Intersection
        Uniform Hypergraph b-Matching
        Matroid k-Parity
        Maximum Asymmetric Traveling Salesman

6 Algorithms for Strong k-Exchange Systems
    Linear Maximization
    Monotone Submodular Maximization

7 Limitations of Oblivious Local Search for CSPs
    Large Neighborhoods
    Random and Greedy Initial Solutions and Best-Improvement Pivot Rules

8 Conclusion
    Monotone Submodular Maximization
    Set Systems for Local Search
    Negative Results for CSPs

Bibliography

List of Algorithms

Chapter 1

Introduction

Local search is one of the simplest algorithmic approaches to combinatorial optimization problems. Despite its simplicity, local search has been successful in both practical and theoretical settings. It is widely used as a heuristic for solving NP-hard problems, and appears as a key component of such classic algorithms as Edmonds' matching algorithms, the Ford-Fulkerson algorithm, and Dantzig's simplex algorithm, as well as many state-of-the-art approximation algorithms.

In the non-oblivious variant of local search, the algorithm is guided by an auxiliary potential function instead of the problem's given objective. This technique was first formalized by Alimonti [2, 3, 4] and Khanna, Motwani, Sudan, and Vazirani [63, 64], but has seen limited application since its introduction. Here, we reconsider non-oblivious local search and give several new applications of the technique. We show that standard, oblivious local search algorithms have limited approximation performance for these applications, and thereby demonstrate the relative power of non-oblivious algorithms.

1.1 A Generic Local Search Algorithm

We now describe more formally what we mean by local search. First, we describe the general class of optimization problems considered in this thesis. We restrict our attention to combinatorial optimization problems of the following form:¹

Definition 1.1 (Combinatorial Optimization Problem). A combinatorial optimization problem consists of:
- a goal in {max, min} that specifies whether it is a maximization or minimization problem;
- a ground set X;
- a collection F of subsets of X, called feasible solutions;
- a function f : 2^X → R≥0 assigning a value to each subset of the ground set.

The goal of the problem is to find a set S ∈ F that either maximizes or minimizes the function f (depending on the stated goal).

¹ See Ausiello, Crescenzi, and Protasi [7] for a survey on the theory of NP-optimization problems. Note that our definition varies slightly from the standard in that we do not require f to assign integer values to solutions. Our notion of a combinatorial optimization problem is not intended to capture all problems in the field of combinatorial optimization, but is general enough to capture all those problems that we consider.

All of the problems that we consider will be maximization problems, so we shall not specify the goal explicitly. Note that, in general, there may not be a succinct representation for either F or f. Generally, we shall assume that F is given as a membership oracle (also called an independence oracle, for reasons that will be made clear in Section 2.3) that answers whether a given set is in F or not. Similarly, we generally suppose that f is given as a value oracle that, given a subset S of X, returns its value f(S). Notable exceptions are linear functions (described in Section 2.2) and coverage functions (described in Section 3.1). In these cases, f has a succinct representation that we shall exploit in our algorithms.

Because the combinatorial optimization problems we consider are NP-hard, we focus on the problem of obtaining approximate solutions. Given an instance of a combinatorial optimization problem, we say that an algorithm is an r-approximation algorithm for that instance, for r ∈ [0, 1], if the value of the solution S produced by the algorithm is at least r times the value of an optimal solution O. We call the value r the approximation ratio for the algorithm on the given instance, and define the approximation ratio of an algorithm for a problem to be the infimum of the approximation ratios of its instances. Again, here we consider only maximization problems (a similar definition can be obtained for minimization by reversing the roles of S and O in the definition). Note that since we use approximation ratios in the range [0, 1], larger values reflect better approximations.

The primary concern of this thesis is the application of local search to particular combinatorial optimization problems. Our general notion of a local search algorithm is captured in Algorithm 1. The generic local search algorithm GenLocalSearch is parameterized by several component functions, which together define a particular local search algorithm for a combinatorial optimization problem.

Definition 1.2 (Generic Local Search Algorithm). Let I be some instance of a combinatorial optimization problem with solution space S, feasible solutions F, and objective function f. The generic local search algorithm for I has the form shown in Algorithm 1, and is specified by the following component functions:
- A potential function g assigning each solution S ∈ S a value g(S) ∈ R≥0.
- A neighborhood structure N associating a set of nearby solutions N(S) ⊆ S with each solution S ∈ S.
- A pivot rule pivot selecting a solution pivot(C) from the set of improved, feasible² solutions C = {T ∈ N(S) : T ∈ F and g(T) > g(S)} whenever this set is non-empty.
- An initial solution S_init ∈ F.

² While there are local search algorithms that consider infeasible solutions during the search process, all of the algorithms considered in this thesis only consider feasible solutions.

Algorithm 1: GenLocalSearch
    S ← S_init
    repeat
        C ← ∅
        foreach T ∈ N(S) do
            if T ∈ F and g(T) > g(S) then
                C ← C ∪ {T}
        if C ≠ ∅ then
            S ← pivot(C)
    until S does not change
    return S

Note that each of the functions in Definition 1.2 depends implicitly on the instance I. Intuitively, the local search algorithm proceeds by first finding an initial feasible solution S_init, then repeatedly searching in the neighborhood N(S) of the current solution S for a set of feasible candidate solutions, each of which improves the potential function g. After a set C of candidate solutions is found, the pivot rule pivot selects a new current solution from it. When no improved solutions are found in the neighborhood of the current solution, the algorithm returns S.

In most of the algorithms we present, the pivot rule simply returns the first improved feasible solution encountered when searching N(S). In this case, it is unnecessary to build the entire set of candidate solutions C, and so we omit this step from the algorithm. One notable exception is in Chapter 7, where we examine the effect of the pivot rule on the approximation performance of Algorithm 1.

In general, there are no global guarantees on the quality of the solution S returned by GenLocalSearch, even in terms of the potential function g. However, we can say that any S returned by GenLocalSearch is a local optimum of g in the following sense:

Definition 1.3 (Local Optimum). Let N be a neighborhood structure and g be a potential function. Then, a solution S ∈ F is a local optimum with respect to g and N if we have g(T) ≤ g(S) for all T ∈ N(S).

Note that the notion of local optimality depends on both the neighborhood N and the potential function g used in GenLocalSearch. Thus, whenever we refer to local optima we mean local optima with respect to some understood, previously fixed neighborhood and potential function.
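For concreteness, Algorithm 1 can be realized as the following minimal Python sketch; the names are illustrative, and the solutions are simply passed to the oracles of Definition 1.2.

    def gen_local_search(s_init, neighbors, is_feasible, g, pivot):
        # neighbors(S) enumerates N(S); is_feasible is the membership
        # oracle for F; g is the potential function; pivot picks one
        # solution from a non-empty list of improving candidates.
        # Returns a local optimum of g with respect to N.
        S = s_init
        while True:
            C = [T for T in neighbors(S) if is_feasible(T) and g(T) > g(S)]
            if not C:
                return S
            S = pivot(C)

    # The first-improvement pivot rule used by most algorithms in the text:
    first_improvement = lambda C: C[0]

An oblivious local search is recovered by passing g = f; the non-oblivious algorithms of later chapters pass a different potential function.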

1.2 Theoretical Results for General Local Search

We now present an overview of the major theoretical approaches to local search as a general algorithmic paradigm. The first such approach concerns the time required for Algorithm 1 to converge to a local optimum. Because its runtime depends on the number of improvements that it applies, Algorithm 1 could require exponential time to find a local optimum even if each improvement can be calculated efficiently. Motivated by this general question, Johnson, Papadimitriou, and Yannakakis [60] define the class PLS of polynomial local search problems. A search problem is in PLS if the initial solution and each iteration of the local search algorithm can be carried out in polynomial time. Hence, the primary complexity-theoretic questions regarding the class PLS pertain to the number of improvements that the local search algorithm can make. All of the problems in this class are search problems, in which the goal is to find a solution to an NP-optimization problem that is locally optimal with respect to some given neighborhood and objective function. Thus, when we speak about the PLS-completeness of some problem, we are referring specifically to the problem of finding any locally optimal solution with respect to some stated neighborhood.

Johnson et al. provide an appropriate reduction for the class PLS, which they use to prove the completeness of a variety of problems. They prove that if any PLS problem is NP-hard then NP = co-NP. In contrast, they show that the standard algorithm problem, in which we must find the specific local optimum returned by GenLocalSearch for some particular initial starting solution S_init, is NP-hard for all PLS-complete problems. Using tight PLS-reductions, Papadimitriou, Yannakakis, and Schäffer [79] give a general method for showing that a variety of local search problems do in fact have exponential worst-case behavior. Moreover, they show that the standard algorithm problem is in fact PSPACE-complete. Thus, the problem of finding some local optimum appears to be easier than that of finding the particular local optimum produced by a local search algorithm. In this and subsequent work [83], they demonstrate the PLS-completeness of several well-known local search problems, including Lin and Kernighan's heuristics for the traveling salesman and graph bipartition problems, the problem of finding a stable configuration in an undirected neural network, and various local search algorithms for MaxCut.

In practice, we can eschew the difficulties posed by PLS-completeness by weakening our notion of local optimality. In many situations, it is sufficient to find a solution that is only approximately locally optimal, in the following sense.

Definition 1.4 (ε-Approximate Local Optimum). Let N be a neighborhood structure and g be a potential function. Then, a solution S ∈ F is an ε-approximate local optimum with respect to g and N if we have g(T) ≤ (1 + ε)g(S) for all T ∈ N(S).
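A standard back-of-the-envelope bound explains why this relaxation helps: if the search only accepts moves with g(T) > (1 + ε)·g(S), then g(S) grows by a factor of more than 1 + ε in each iteration, so (assuming g(S_init) > 0) the number of improving steps is at most

    log_{1+ε}(g_max / g(S_init)) = O((1/ε) · log(g_max / g(S_init))),

where g_max is an upper bound on g over feasible solutions. Whenever log(g_max / g(S_init)) is polynomial in the input size, the search therefore reaches an ε-approximate local optimum after polynomially many iterations.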

This idea has been used in a variety of contexts to yield polynomial-time local search algorithms. A variant for linear objective functions is described by Arkin and Hassin [5]. They round all the weights used to define the objective function down to integer multiples of some well-chosen value, thus requiring that each step of the local search algorithm make a constant additive improvement to the problem's objective function. Orlin, Punnen, and Schulz [77] consider the general difficulty of finding approximate local optima for linear combinatorial optimization problems, and show that this can be accomplished in polynomial time for any problem in PLS.

While the general theory of PLS-completeness gives non-trivial bounds on the convergence time of the standard local search algorithm, it says nothing about the relative quality of the local optima produced by local search. In order to study such questions, Ausiello and Protasi [8] introduce the class GLO of those NP-optimization problems that have guaranteed local optima with respect to a given neighborhood mapping. A problem has guaranteed local optima with respect to N if there is some constant k such that any solution S that is locally optimal with respect to N has objective value at least 1/k times that of a globally optimal solution. The constant k for a problem in GLO gives a natural bound on the approximation performance of a local search algorithm.

We define the locality ratio of a local search algorithm on a given instance of a combinatorial optimization problem to be the largest value r ∈ [0, 1] such that for any local optimum S we have f(S) ≥ r·f(O), where O is a global optimum. Then, the locality ratio r corresponds to the value 1/k in the definition of GLO. By analogy with approximation ratios, we define the locality ratio for a problem to be the infimum of the locality ratios of its instances.

There are several advantages to working with locality ratios. An algorithm's locality ratio is determined solely by the potential function g and the neighborhood structure N; it does not depend on the initial solution S_init or the pivot rule pivot. The locality ratio hence allows us to compute a lower bound on the approximation ratio of GenLocalSearch without considering the dynamic behavior of the algorithm, which can be extremely difficult to determine. For this reason, virtually all analyses of particular local search algorithms are based on the algorithms' locality ratios. One noteworthy exception is Chandra and Halldórsson's analysis [22] of a greedy local search algorithm for maximum weight independent sets in (k+1)-claw free graphs. They show that a local search algorithm in which S_init is chosen greedily and pivot always chooses the best improved solution attains an approximation ratio of almost 3/2 times the locality ratio for the problem.

Thus far, we have been using the terms potential function and objective function interchangeably in our discussion of local optimality; that is, we have been assuming that the potential function g used by GenLocalSearch is simply the problem's given objective function f. In independent work, Alimonti [2, 3, 4] and Khanna et al. [63, 64] introduce the notion³ of non-oblivious local search, in which the potential function used to guide the local search procedure is different from the problem's stated objective function f (in contrast, they call variants of local search in which g = f oblivious local search algorithms).

³ The (unfortunate) terminology non-oblivious local search is due to Khanna et al.

They show that non-oblivious techniques yield improved locality ratios for a variety of problems. Note that we always require local optimality with respect to g but state locality ratios in terms of f. That is, a problem has locality ratio r for some non-oblivious local search algorithm of the form GenLocalSearch if every local optimum S with respect to g satisfies f(S) ≥ r·f(O). By analogy with GLO, Khanna et al. formulate the class NonObliviousGLO of problems that have non-zero locality ratios for some non-oblivious potential function. They show that GLO is a strict subset of NonObliviousGLO; that is, there are problems that have locality ratio 0 for oblivious local search but some positive locality ratio for non-oblivious local search. Khanna et al. further prove that every problem in MaxSNP can be approximated to within some constant factor by some non-oblivious local search in which the neighborhood relation N satisfies d(S, T) = 1 for all S and all T ∈ N(S), where d is the Hamming distance between solutions S and T.

Despite the apparent relative power of non-oblivious local search, there has been little application or systematic study of it since these first results. Berman [11] gives a non-oblivious local search algorithm for the weighted independent set problem in (k+1)-claw free graphs (we discuss this algorithm further in Chapters 5 and 6). Berman and Krysta [12] further consider a generalization of this algorithm in which the weights are raised to some power between 1 and 2. Finally, some of the local search approaches to facility location problems [6, 23] make use of weight scaling to improve the approximation performance of the algorithm. Although it is not presented as such, the resulting algorithm essentially employs a non-oblivious potential function.

In this thesis, we revisit non-oblivious local search. We apply the technique in several new areas, including submodular maximization, and obtain improved approximations for a variety of problems. Even in cases where non-oblivious techniques merely match the performance of existing approximation algorithms, they yield combinatorial algorithms that are significantly simpler than existing approaches. In contrast, we show that variants of oblivious local search for these problems give diminishing returns even when they are allowed to consider much larger neighborhoods than their non-oblivious counterparts.

1.3 Our Contributions

We now outline the main contributions presented in the thesis.

In Chapter 3, we give a new, combinatorial algorithm for the problem of maximizing a coverage function subject to a matroid constraint. Coverage functions, defined in Section 3.1, are a particular class of submodular functions possessing a succinct, explicit representation. Our non-oblivious algorithm makes use of a special, weighted potential function, whose weights were derived by solving a family of linear programs. In addition to stating the general formula for this function and proving that it yields an improved locality ratio,

we give some details of the experimental approach used to derive it. Our non-oblivious algorithm is a (1 − 1/e)-approximation, which is optimal under the assumption P ≠ NP, as well as in the value oracle setting. Our algorithm matches the approximation performance of the continuous greedy algorithm described in Section 2.5, which applies in the more general setting of maximizing any monotone submodular function subject to a matroid constraint. However, our algorithm is simpler and more straightforward than the continuous greedy algorithm, and is completely combinatorial. In contrast to our non-oblivious local search algorithm, we show that oblivious local search has a locality ratio of only 1/2 + ε, even when allowed to search much larger neighborhoods than our algorithm. This chapter is based on joint work with Yuval Filmus, appearing in [40].

In Chapter 4, we turn to the general problem of maximizing any monotone submodular function subject to a matroid constraint. We expand the non-oblivious approach of Chapter 3 to this setting, matching the general applicability of the continuous greedy algorithm, as well as its approximation performance. Again, our algorithm is simple to state and combinatorial. The results of Chapter 3 make crucial use of the succinct representation available for coverage functions, and here we do not have access to such a representation. Thus, the techniques required for the general submodular case are a non-trivial extension of the maximum coverage case. Again, we provide a complete construction and analysis for our non-oblivious potential function as well as the details of its derivation. Additionally, we show that our construction produces the same non-oblivious potential function as that in Chapter 3 when applied to a submodular function that is a coverage function. Unlike the algorithm of Chapter 3, however, our general algorithm requires randomization. Specifically, it employs a sampling procedure to compute our potential function efficiently. Our algorithm is a (1 − 1/e)-approximation for monotone submodular maximization subject to a matroid constraint. Moreover, if the total curvature of the submodular function is at most c, our algorithm is a ((1 − e^{−c})/c)-approximation. Even in this specific case, our result matches the performance of the continuous greedy algorithm, and is the best possible in the value oracle model. This chapter is based on joint work with Yuval Filmus, appearing in [41, 42].

In Chapters 5 and 6, we consider the problem of linear and monotone submodular maximization in larger classes of set systems. There is a wealth of research deriving set systems that capture problems for which the greedy algorithm attains some constant approximation ratio, but there are no such results for local search algorithms. In Chapter 5, we introduce a new class of set systems called k-exchange systems, and show that they capture combinatorial optimization problems for which oblivious local search attains a 1/k-approximation.

We prove several results relating k-exchange systems to existing classes of set systems. Finally, we show that a variety of well-known combinatorial optimization problems give rise to k-exchange systems.

In Chapter 6, we consider non-oblivious local search for k-exchange systems. We extend a simple algorithm based on an existing approach [11] for the weighted independent set problem in (k+1)-claw free graphs to all k-exchange systems. Moreover, we show how to generalize this approach to the case of monotone submodular objective functions. Because the approach for the weighted case makes crucial use of the objective function's weighted representation, our generalization is non-trivial. We obtain 2/(k+1) and 2/(k+3) approximations for maximizing linear and monotone submodular objective functions, respectively, in k-exchange systems. This provides improved approximations in both the general case and for several specific problems. These chapters are based on work appearing in [97] and joint work with Feldman, Naor, and Schwartz in [39]. This latter paper was merged from separate submissions by myself and the listed authors. Unless otherwise noted, I present only my own independent contributions here.

In Chapter 7, we prove a variety of negative results for oblivious local search in the general setting of Boolean constraint satisfaction problems. Specifically, we consider the performance of oblivious local search both when the neighborhood size is increased, and when the initial solution is chosen randomly or via a simple, greedy algorithm.

The first set of results considers the h(n)-local search algorithm that at each step changes the assignment of at most h(n) variables, for some function h depending on the total number n of variables. We show that if a constraint satisfaction problem possesses an instance with a particular kind of local optimum under 1-local search, then this instance's locality ratio is an upper bound on the locality ratio of h(n)-local search for all h = o(n). Moreover, even if h(n) = cn for some small value c, the locality ratio for the problem remains strictly less than 1. Note that in this case we are allowing the local search algorithm to examine an exponential number of solutions in each iteration.

The second set of results considers the particular CSP MaxCut. The bounds in this problem differ from our other results in that we consider the effects of the initial solution S_init and the pivot rule pivot used to define the oblivious local search algorithm. Thus, we consider the actual dynamic behavior of the algorithm and directly bound its approximation ratio, rather than simply considering its locality ratio. We show that there are instances for which a local search algorithm that chooses its initial solution S_init uniformly at random has expected approximation ratio at most 3/4. This bound is less than the approximation produced by Goemans and Williamson [45], and holds even in the case that the local search algorithm has access to an arbitrarily powerful oracle for the function pivot that chooses an improved solution at each step. If pivot is implemented

by a greedy rule that always chooses the best available improvement, we can improve our bound to 1/2, showing that the randomly initialized best-improvement local search has an expected approximation ratio no better than the locality ratio for deterministic 1-local search. All of our results hold generally for any h(n)-local search in which h = o(n). Moreover, we derive non-trivial bounds even in the case that h(n) = cn for c < 1/2. Finally, we show that choosing S_init by using the greedy algorithm can result in a worst-case local optimum, and so cannot attain an approximation ratio beyond the locality ratio for the problem.
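For reference, the oblivious 1-local search for MaxCut that these results concern can be sketched as follows; this is a minimal illustration for unweighted graphs with a fixed first-improvement sweep, while the lower bounds above apply however S_init and pivot are chosen. A flip is accepted exactly when it increases the cut, and any 1-flip local optimum cuts at least half of all edges.

    from collections import defaultdict

    def maxcut_one_flip(n, edges, side=None):
        # side[v] in {0, 1}; start from the all-zeros assignment
        # unless an initial solution is supplied.
        adj = defaultdict(list)
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        side = list(side) if side is not None else [0] * n
        improved = True
        while improved:                     # each flip gains >= 1 cut edge,
            improved = False                # so at most |edges| improvements
            for v in range(n):
                same = sum(side[u] == side[v] for u in adj[v])
                if 2 * same > len(adj[v]):  # flipping v gains 2*same - deg(v)
                    side[v] = 1 - side[v]
                    improved = True
        return side                         # a 1-flip local optimum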

Chapter 2

Preliminaries

In this chapter, we review some definitions regarding linear and submodular functions, independence systems, matroids, and related algorithms. We present relevant work in the area, and give some general theorems that will prove useful in later sections. We begin by establishing some standard notational conventions in Section 2.1. In Section 2.2 we consider two particular classes of objective functions f and identify some useful extra properties of such functions. In Section 2.3, we examine restrictions on the structure of the collection F of feasible solutions. Finally, in Sections 2.4, 2.5, and 2.6, we review known algorithms for solving the resulting classes of combinatorial optimization problems.

2.1 Notation

We begin by describing the notational conventions used in the thesis.

Sets

Throughout, we shall (with a few exceptions) use lowercase letters to denote single values or elements, uppercase letters to denote sets of elements, and calligraphic letters to denote collections of sets. We use the following special notations related to sets:
- R≥0 denotes the set of non-negative real numbers.
- N denotes the set of natural numbers.
- For an integer n, [n] denotes the set {1, ..., n}.
- For a set S and element x, we use the shorthand S + x for the set S ∪ {x} and the shorthand S − x for the set S \ {x}.
- For a set S, 2^S denotes the set of all subsets of S.
- For a set S and an integer k, we denote by binom(S, k) the collection of all subsets of S containing exactly k elements.

Probability

- For a condition (or event) C, 1(C) denotes the indicator that is 1 when C is true and 0 otherwise.
- For a random event E and some explicitly given probability distribution, Pr[E] denotes the probability that E will occur.
- For a variable x, a function f, and a set of values S, E_{x∈S}[f(x)] denotes the expected value of f(x) when the value x is chosen uniformly at random from S.
- When the probability distribution of a random variable X has been explicitly stated, and there is no chance of confusion, we shall write E[X] for the expected value of X with respect to this distribution.

Miscellaneous

- We use H_k to denote the k-th harmonic number, given by H_k = ∑_{i=1}^{k} 1/i. Note that the sequence H_1, H_2, ... is increasing. We shall also make use of the well-known fact that H_k = Θ(log k).
- We use the notation f = Õ(g) to indicate that f has the same asymptotic rate of growth as g when poly-logarithmic factors are ignored.

2.2 Linear and Submodular Functions

Perhaps the simplest useful class of objective functions f is the class of linear functions.

Definition 2.1 (Linear Function). A function f : 2^X → R≥0 is linear if

    f(A) + f(B) = f(A ∪ B) + f(A ∩ B)

for all A, B ⊆ X.

A linear function f can always be represented in terms of a weight function w : X → R≥0 that assigns each element x ∈ X a non-negative weight w(x). The value f(S) is then given by the total weight ∑_{x∈S} w(x) of all elements in S. If we relax the equality in Definition 2.1, we obtain the class of submodular functions.

Definition 2.2 (Submodular Function). A function f : 2^X → R≥0 is submodular if

    f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B)

for all A, B ⊆ X.

For a submodular function f and a set S ⊆ X, we define the marginal gain with respect to S of an element x ∈ X \ S as f_S(x) = f(S + x) − f(S). The notion of an element's marginal gain is roughly analogous to the notion of an element's weight in the linear case. However, in the submodular case, an element can have different marginal gains with respect to different sets.

Nemhauser, Wolsey, and Fisher study various aspects of combinatorial optimization problems in a pair of papers [76, 43]. Among other things, they show that the following properties are equivalent characterizations of submodularity:

Definition 2.3 (Submodular Function (Alternative Characterizations)). Consider a function f : 2^X → R≥0. Then, the following statements are each equivalent to the statement that f is submodular:
(i) f_A(x) ≤ f_B(x) whenever B ⊆ A ⊆ X and x ∉ A.
(ii) f(A + x) + f(A + y) ≥ f(A ∪ {x, y}) + f(A) for all A ⊆ X and x, y ∉ A.

These characterizations essentially state that submodular functions are characterized by decreasing marginal gains. Thus, submodularity can be viewed as a discrete analogue of concavity. Furthermore, the concept of decreasing marginal gains lends itself naturally to many economic and combinatorial settings, as shown by Nemhauser, Wolsey, and Fisher.

We shall assume that all submodular objective functions f are normalized so that f(∅) = 0. Note that this condition holds trivially for linear functions. We restrict ourselves further to the class of monotone submodular functions.

Definition 2.4 (Monotone Function). A function f : 2^X → R≥0 is monotone if f(B) ≤ f(A) for all B ⊆ A ⊆ X.

Note that in a monotone submodular function, all marginal gains are non-negative. In fact, this property provides an alternative characterization of the class of monotone functions.

Another natural restriction involves how much the marginals of a submodular function are allowed to decrease. This notion is captured by the curvature of a submodular function.

Definition 2.5 (Total Curvature). A monotone submodular function f has total curvature c if and only if

    f(A ∪ B) ≥ f(A) + (1 − c)f(B)

for any two disjoint sets A, B.

For c = 1 the definition is equivalent to a statement of monotonicity. In the case that c = 0, the definition implies that f(A) + f(B) ≤ f(A ∪ B), while from submodularity we have f(A) + f(B) ≥ f(A ∪ B) since A and B are disjoint; so, in fact, f(A ∪ B) = f(A) + f(B) for all disjoint A and B. That is, the case c = 0 corresponds to the case in which f is linear. Thus, the parameter c ∈ [0, 1] smoothly interpolates between the class of all monotone submodular functions and the class of linear functions.
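To ground these definitions, the following sketch checks Definition 2.2 and the decreasing-marginal-gains property on a small, invented coverage function, the class of submodular functions studied in Chapter 3: f(S) counts the universe elements covered by the sets indexed by S.

    from itertools import combinations

    # Toy coverage function: ground-set element i covers covers[i];
    # f(S) is the number of universe elements covered by S, so f is
    # monotone submodular with f(set()) == 0.
    covers = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d"}}
    X = set(covers)

    def f(S):
        return len(set().union(*(covers[i] for i in S)))

    def marginal(S, x):                     # f_S(x) = f(S + x) - f(S)
        return f(S | {x}) - f(S)

    subsets = [set(A) for r in range(len(X) + 1)
                      for A in combinations(X, r)]

    # Definition 2.2: f(A) + f(B) >= f(A | B) + f(A & B) for all A, B.
    assert all(f(A) + f(B) >= f(A | B) + f(A & B)
               for A in subsets for B in subsets)

    # Definition 2.3(i): marginal gains shrink as the set grows; adding 1
    # gains two elements over the empty set but only one over {0}, since
    # "b" is then already covered.
    assert marginal(set(), 1) == 2 and marginal({0}, 1) == 1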

Finally, we give three useful theorems regarding monotone submodular functions. The first two are very slight modifications of Lemmas 1.1 and 1.2 in Lee, Sviridenko, and Vondrák [71].

Theorem 2.6. Let f be a monotone submodular function on X. Let C, S ⊆ X, and let {T_i}_{i=1}^{l} be a collection of subsets of C \ S such that each element of C \ S appears in at least k of the subsets T_i. Then,

    ∑_{i=1}^{l} [f(S ∪ T_i) − f(S)] ≥ k·[f(S ∪ C) − f(S)].

Proof. Fix an arbitrary ordering on X. For any x ∈ C \ S, let C^x be the set of elements in C that precede x in the ordering. Similarly, let T_i^x be the set of elements of T_i that precede x. Then, we have T_i^x ⊆ C^x for all x. Thus,

    ∑_{x∈T_i} f_{S∪C^x}(x) ≤ ∑_{x∈T_i} f_{S∪T_i^x}(x) = ∑_{x∈T_i} [f(S ∪ T_i^x + x) − f(S ∪ T_i^x)] = f(S ∪ T_i) − f(S),

where the first inequality follows from submodularity and the last equality from telescoping the summation. Now, we have

    ∑_{i=1}^{l} [f(S ∪ T_i) − f(S)] ≥ ∑_{i=1}^{l} ∑_{x∈T_i} f_{S∪C^x}(x) ≥ k·∑_{x∈C\S} f_{S∪C^x}(x) = k·[f(C ∪ S) − f(S)],

where the second inequality follows from the fact that each x ∈ C \ S occurs in at least k of the sets T_i and f_{S∪C^x}(x) ≥ 0 since f is monotone.

Theorem 2.7. Let f be a monotone submodular function on X. Let C, S ⊆ X, and let {T_i}_{i=1}^{l} be a collection of subsets of S \ C such that each element of S \ C appears in at most k of the subsets. Then,

    ∑_{i=1}^{l} [f(S) − f(S \ T_i)] ≤ k·[f(S) − f(S ∩ C)].

Proof. Fix an arbitrary ordering on S \ C. For any x ∈ S \ C, let S^x be the set containing x and all the elements from S \ C that precede x in the ordering. Similarly, let T_i^x contain x and all the elements from T_i preceding x. Then, we have T_i^x ⊆ S^x for all x, and so S \ S^x ⊆ S \ T_i^x for all x. Thus,

    ∑_{x∈T_i} f_{S\S^x}(x) ≥ ∑_{x∈T_i} f_{S\T_i^x}(x) = ∑_{x∈T_i} [f((S \ T_i^x) + x) − f(S \ T_i^x)] = f(S) − f(S \ T_i),

where the first inequality follows from submodularity and the last equality from telescoping the summation. Now, we have

    ∑_{i=1}^{l} [f(S) − f(S \ T_i)] ≤ ∑_{i=1}^{l} ∑_{x∈T_i} f_{S\S^x}(x) ≤ k·∑_{x∈S\C} f_{S\S^x}(x) = k·[f(S) − f(S ∩ C)],

where the second inequality follows from the fact that each x ∈ S \ C occurs in at most k of the sets T_i and f_{S\S^x}(x) ≥ 0 since f is monotone.
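For intuition, here is a small numeric check of Theorem 2.6 in the k = 1 partition setting, using the toy coverage function from the sketch above (repeated so the snippet stands alone):

    # Theorem 2.6 with k = 1 and {T_1, T_2} a partition of C \ S.
    covers = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d"}}

    def f(S):
        return len(set().union(*(covers[i] for i in S)))

    S, C = {0}, {1, 2}
    parts = [{1}, {2}]                              # partition of C \ S
    lhs = sum(f(S | T_i) - f(S) for T_i in parts)   # 1 + 2 = 3
    rhs = f(S | C) - f(S)                           # 4 - 2 = 2
    assert lhs >= rhs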

We primarily use Theorem 2.6 in the restricted setting in which k = 1 and {T_i}_{i=1}^{l} is a partition of C \ S. The next theorem is another application of Theorem 2.6, involving the average value of a submodular function on subsets of a particular size.

Theorem 2.8. Let f be a non-negative submodular function, and let S be a set of size m. For k in the range 1 ≤ k ≤ m,

    (1 / binom(m, k)) · ∑_{T∈binom(S,k)} f(T) ≥ (k/m)·f(S).

Proof. Each element x ∈ S appears in exactly binom(m−1, k−1) = (k/m)·binom(m, k) of the sets in binom(S, k). From Theorem 2.6, and the assumption that f(∅) = 0, we then have

    ∑_{T∈binom(S,k)} f(T) = ∑_{T∈binom(S,k)} [f(T) − f(∅)] ≥ (k/m)·binom(m, k)·[f(S) − f(∅)] = (k/m)·binom(m, k)·f(S).

2.3 Independence Systems and Matroids

In this section, we consider various classes of feasible solutions for combinatorial optimization problems. Our classes are all built on the notion of an independence system. An independence system is given by a ground set X and a non-empty, downward closed collection I of subsets of X:

Definition 2.9 (Independence System). Let X be a set of elements and I ⊆ 2^X. Then the set system (X, I) is an independence system if and only if I ≠ ∅ and, for all A, B ⊆ X, A ∈ I and B ⊆ A implies B ∈ I.

We refer to the sets in I as independent sets. For a given set A ⊆ X, we call the inclusion-wise maximal independent subsets of A bases of A or, when A is understood to be X, simply bases. Finally, when dealing with independence systems we assume that every element x ∈ X is contained in at least one independent set A ∈ I. This assumption is without loss of generality, since if some element does not occur in any independent set of I, we can remove it from the ground set X without affecting the set I of feasible solutions. Furthermore, because I is downward closed, this assumption is equivalent to the assumption that {x} ∈ I for all x ∈ X.

While the class of all independence systems is too general to give rise to interesting algorithmic and combinatorial properties, it does serve as the basis for several more restricted classes of set systems which do exhibit interesting properties.

Probably the most well known such class is the class of matroids. Matroids were first axiomatized by Whitney [98] as a generalization of the notion of linear independence in vector spaces.

Definition 2.10 (Matroid [98] (also [85, (39.1)])). An independence system (X, I) is a matroid if and only if, for all A, B ∈ I, if |A| > |B| then there exists x ∈ A \ B such that B + x ∈ I.

The following are some simple classes of matroids that we will refer to later in the thesis.
- In a uniform matroid of rank k, I consists of precisely those sets of size at most k.
- In a partition matroid, we are given a partition of X into p sets X_1, ..., X_p, and integers k_1, ..., k_p. Then, I contains precisely those sets S for which |S ∩ X_i| ≤ k_i for all 1 ≤ i ≤ p.
- In a graphic matroid, we are given an undirected graph G = (V, E). The ground set is E, and I contains precisely those sets of edges that do not contain a cycle.

There are various alternate characterizations for the class of matroids. Two that shall be useful are the following, which are given in terms of bases.

Theorem 2.11 ([85, (39.2)]). An independence system (X, I) is a matroid if and only if, for all E ⊆ X, all bases of E have the same size.

The common size of all bases of a set E is called the rank of E, denoted rank(E). The rank of the matroid (X, I) is then simply the rank of X. Whitney [98] also gave the following alternate characterization of matroids in terms of bases.

Theorem 2.12 ([98] (also [85, Theorem 39.6])). Let B be a non-empty collection of subsets of X. Then, B is the collection of bases of a matroid if and only if:
(i) For any A, B ∈ B and x ∈ A \ B, there exists y ∈ B \ A such that A − x + y ∈ B.
(ii) For any A, B ∈ B and x ∈ A \ B, there exists y ∈ B \ A such that B − y + x ∈ B.

Brualdi [16] shows that matroids exhibit the following stronger exchange properties.

Theorem 2.13 ([16, Theorem 1] (also [85, Corollary 39.12a])). Let A, B be bases of a matroid M. Then, there exists a bijection π : A → B such that B − π(x) + x is a base of M for all x ∈ A. Furthermore, π(x) = x for all x ∈ A ∩ B.

Theorem 2.14 ([16, Theorem 2] (also [85, Theorem 39.12])). Let A, B be bases of a matroid M. Then, for any x ∈ A there exists some y ∈ B such that A − x + y and B − y + x are both bases of M.

Theorems 2.11 and 2.13 (i.e. the fact that all bases of a matroid have equal size and the existence of the bijection π) are typically all that we need from the structure of a matroid in order to derive our results.
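For illustration, the membership oracle assumed in Section 1.1 is particularly easy to realize for a partition matroid; the sketch below is illustrative and uses invented names.

    def partition_matroid_oracle(parts, capacities):
        # parts[i] is the block X_i of the partition and capacities[i]
        # is k_i; a set S is independent iff |S & X_i| <= k_i for all i.
        def is_independent(S):
            return all(len(S & X_i) <= k_i
                       for X_i, k_i in zip(parts, capacities))
        return is_independent

    # Example: at most one element from {0, 1}, at most two from {2, 3, 4}.
    indep = partition_matroid_oracle([{0, 1}, {2, 3, 4}], [1, 2])
    assert indep({0, 2, 3}) and not indep({0, 1, 2})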

Ideally, we would like the bijection π from Theorem 2.13 to satisfy the stronger conditions of Theorem 2.14 (i.e. to have a single bijection π : A → B such that both B − π(x) + x and A − x + π(x) are bases). Brualdi [16] gives an example of a matroid for which this cannot be done. Later work by Brualdi and Scrimger [19, 17, 18] generalizes the base exchange characterization of Theorem 2.12 to consider weakly base orderable matroids, in which a bijection π satisfying this stronger property does exist. They also define the following class of matroids, which exhibit an even stronger sort of exchange property.

Definition 2.15 (Strongly Base Orderable Matroid). A matroid is strongly base orderable if for any pair A, B of its bases there exists a bijection π : A → B such that for all C ⊆ A, (B \ π(C)) ∪ C is a base (where π(C) = {π(x) : x ∈ C}).

That is, in a strongly base orderable matroid, the bijection π : A → B between any two bases A and B can be extended to subsets of A and B. Essentially, this means that in any strongly base orderable matroid, several pairs of swaps (each of the form x, π(x)) can be performed simultaneously. The class of strongly base orderable matroids is quite large, containing gammoids, transversal matroids, and partition matroids.¹ An example of a matroid that is not strongly base orderable is the graphic matroid on K_4.

¹ Each of these classes is contained within the next. We do not provide definitions for gammoids and transversal matroids here.

A final useful characterization of matroids is the following, which relates them to submodular functions:

Theorem 2.16 ([98] (also [85, Theorem 39.8])). Let rank : 2^X → Z≥0. Then rank is the rank function of a matroid if and only if:
(i) rank(T) ≤ rank(U) ≤ |U|, for all T ⊆ U ⊆ X.
(ii) rank(T) + rank(U) ≥ rank(T ∪ U) + rank(T ∩ U), for all T, U ⊆ X.

That is, rank(·) is the rank function of a matroid if and only if rank(·) is monotone submodular. A rank function on X implicitly specifies the independence system (X, I), with I = {S ⊆ X : |S| ≤ rank(S)}.

2.4 The Greedy Algorithm

We now examine in more detail combinatorial optimization problems whose feasible sets are given by independence systems. In such problems, we are given an independence system (X, I) and a function f : 2^X → R≥0. The goal is to find a set S ∈ I that maximizes the value f(S). First, let us consider the case in which f is a linear function, given as a weight function w : X → R≥0. Then, the related combinatorial optimization problem is equivalent to the problem of finding an independent set in I of maximum total weight. Rado [81] showed that the standard greedy algorithm, shown in Algorithm 2, is optimal for all linear functions f whenever (X, I) is a matroid. Conversely, Edmonds [33] showed that if the standard greedy algorithm provides an optimal solution in I for every linear function f on 2^X, then the system (X, I) must be a matroid.

Algorithm 2: Greedy
    Input: independence system (X, I), weight function w : X → R≥0
    S ← ∅
    T ← X
    while T ≠ ∅ do
        x ← arg max_{t∈T} w(t)
        T ← T \ {x}
        if S ∪ {x} ∈ I then
            S ← S ∪ {x}
    return S

In this sense, matroids are exactly those independence systems for which the standard greedy algorithm is optimal with respect to all linear functions.

Now, let us examine the more general case in which f is any monotone submodular function. The earliest reference to the problem of maximizing a submodular set function subject to a matroid constraint seems to be Cornuéjols, Fisher, and Nemhauser [30]. They consider a constrained maximization variant of a facility location problem that is a special case of monotone submodular maximization subject to a uniform matroid constraint. They show that a greedy algorithm is a (1 − 1/e)-approximation algorithm for this problem, while a simple local search algorithm is only a 1/2-approximation. Fisher, Nemhauser, and Wolsey [43] consider the general case of maximizing an arbitrary monotone submodular function subject to an arbitrary matroid constraint. The standard greedy algorithm that they consider for the problem is shown in Algorithm 3.

Algorithm 3: SubmodularGreedy
    Input: independence system (X, I), submodular function f : 2^X → R≥0
    S ← ∅
    T ← X
    while T ≠ ∅ do
        x ← arg max_{t∈T} f_S(t)
        T ← T \ {x}
        if S ∪ {x} ∈ I then
            S ← S ∪ {x}
    return S

Algorithm 3 is obtained naturally by modifying Algorithm 2 to use the marginal gains f_S with respect to the current solution in place of the weight function w. Fisher et al. show that SubmodularGreedy is a 1/2-approximation and that this bound is tight. They also show that a simple 1-local search algorithm is a 1/2-approximation (again, they give an example showing that this bound is tight).
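A compact illustrative rendering of both algorithms, driven by the oracles of Section 1.1: for a linear f the marginal gain of t is just w(t), so a single routine covers Algorithm 2 as a special case.

    def greedy(X, is_independent, f):
        # Algorithms 2 and 3: repeatedly take the element of largest
        # marginal gain f_S(t) whose addition keeps S independent.
        S = set()
        T = set(X)
        while T:
            x = max(T, key=lambda t: f(S | {t}) - f(S))   # f_S(t)
            T.remove(x)
            if is_independent(S | {x}):
                S = S | {x}
        return S

    # Example, with the toy coverage f and partition matroid oracle
    # sketched earlier:
    # greedy({0, 1, 2}, indep, f)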

Many results pertaining to the greedy algorithm for submodular maximization are summarized in a survey of Goundan and Schulz [46]. From a hardness perspective, Feige [35] showed that unless P = NP, it is impossible to approximate the following maximum k-coverage problem beyond a factor of 1 − 1/e. In maximum k-coverage, we are given a family of subsets of some universe U and must select k subsets that cover as much of U as possible. The coverage function is monotone submodular, and the constraint that we can take at most k sets can be formulated as a uniform matroid of rank k. Thus, maximum k-coverage is a special case of maximizing a monotone submodular function subject to a matroid constraint. Nemhauser and Wolsey [75] considered the problem of monotone submodular maximization subject to a matroid constraint in the value oracle model. In this model we are given f via an oracle that provides its value on any given set. Nemhauser and Wolsey show that attaining any approximation better than 1 − 1/e requires an exponential number of value queries to the oracle for f.

2.5 The Continuous Greedy Algorithm

Calinescu, Chekuri, Pál, and Vondrák [20, 93, 21] improved on the long-standing 1/2-approximation, giving an algorithm that attains the optimal approximation ratio of 1 − 1/e for the general problem of maximizing any monotone submodular function subject to a single matroid constraint. Their algorithm, called the continuous greedy algorithm, consists of two phases. In the first phase, they solve a particular relaxation of the combinatorial optimization problem to obtain an approximate fractional solution. In the second phase, this fractional solution is rounded to an integral solution of the original problem by using the pipage rounding framework of Ageev and Sviridenko [1].

We now consider the continuous greedy algorithm in more detail. Let f be a monotone submodular function on X and let M = (X, I) be a matroid on X. The continuous greedy algorithm considers the following continuous, multilinear extension of f, where x ∈ [0, 1]^X is a vector with a component x_i ∈ [0, 1] for each i ∈ X:

    F(x) = ∑_{R⊆X} f(R) · ∏_{i∈R} x_i · ∏_{i∉R} (1 − x_i).    (2.1)

In the general setting, in which f is given by a value oracle, F cannot be computed in polynomial time, but it can be efficiently estimated by random sampling. The value F(x) can be viewed as the expected value of f on a random subset of X in which each element i appears independently with probability x_i. We identify an integral vector z ∈ {0, 1}^X with the set Z whose indicator vector is χ_Z = z. Then, for any vector z ∈ {0, 1}^X, we have F(z) = f(Z), and so F is indeed a relaxation of f. Furthermore, (as shown by Calinescu et al. [20]) the


Stochastic Submodular Cover with Limited Adaptivity Stochastic Submodular Cover with Limited Adaptivity Arpit Agarwal Sepehr Assadi Sanjeev Khanna Abstract In the submodular cover problem, we are given a non-negative monotone submodular function f over

More information

Approximation Algorithms for Re-optimization

Approximation Algorithms for Re-optimization Approximation Algorithms for Re-optimization DRAFT PLEASE DO NOT CITE Dean Alderucci Table of Contents 1.Introduction... 2 2.Overview of the Current State of Re-Optimization Research... 3 2.1.General Results

More information
