Optimal approximation for submodular and supermodular optimization with bounded curvature

Maxim Sviridenko (Yahoo! Labs, New York, NY, USA; sviri@yahoo-inc.com)
Jan Vondrák (IBM Almaden Research Center, San Jose, CA, USA; jvondrak@us.ibm.com)
Justin Ward (Department of Computer Science, University of Warwick, Coventry, United Kingdom; J.D.Ward@warwick.ac.uk; work supported by EPSRC grant EP/J0284/)

October 9, 2014

Abstract

We design new approximation algorithms for the problems of optimizing submodular and supermodular functions subject to a single matroid constraint. Specifically, we consider the case in which we wish to maximize a nondecreasing submodular function or minimize a nonincreasing supermodular function in the setting of bounded total curvature c. In the case of submodular maximization with curvature c, we obtain a (1 − c/e)-approximation, the first improvement over the greedy (1 − e^{-c})/c-approximation of Conforti and Cornuéjols from 1984, which holds for a cardinality constraint, as well as over recent approaches that hold for an arbitrary matroid constraint. Our approach is based on modifications of the continuous greedy algorithm and non-oblivious local search, and allows us to approximately maximize the sum of a nonnegative, nondecreasing submodular function and a (possibly negative) linear function. We show how to reduce both submodular maximization and supermodular minimization to this general problem when the objective function has bounded total curvature. We prove that the approximation results we obtain are the best possible in the value oracle model, even in the case of a cardinality constraint. Finally, we give two concrete applications of our results in the settings of maximum entropy sampling and the column-subset selection problem.

1 Introduction

The problem of maximizing a submodular function subject to various constraints is a meta-problem that appears in various settings, from combinatorial auctions [28, 2, 32] and viral marketing in social networks [2] to optimal sensor placement in machine learning [24, 25, 26, 23].
A classic result by Nemhauser, Wolsey and Fisher [30] is that the greedy algorithm provides a (1 − 1/e)-approximation for maximizing a nondecreasing submodular function subject to a cardinality constraint. The factor of 1 − 1/e cannot be improved, under the assumption that the algorithm queries the objective function a polynomial number of times [29]. The greedy algorithm has been applied in numerous settings in practice. Although it is useful to know that it never performs worse than 1 − 1/e compared to the optimum, in practice its performance is often even better than this, in fact very close to the optimum. To get a quantitative handle on this phenomenon, various assumptions can be made about the input. One such assumption is the notion of curvature, introduced by Conforti and Cornuéjols [9]: a function f : 2^X → R_+ has curvature c ∈ [0, 1] if the marginal value f(S + j) − f(S) does not change by a factor larger than 1 − c when varying S. A function with c = 0 is linear, so the parameter c measures in some sense how far f is from linear. It was shown in [9] that the greedy algorithm for nondecreasing submodular functions provides a (1 − e^{-c})/c-approximation, which tends to 1 as c → 0. Recently, various applications have motivated the study of submodular optimization under various more general constraints. In particular, the (1 − 1/e)-approximation under a cardinality constraint has been generalized to any matroid constraint in [5]. This captures various applications such as welfare maximization in combinatorial auctions [32], generalized assignment problems [4] and variants of sensor placement [26]. Assuming curvature c, [34] generalized the (1 − e^{-c})/c-approximation of [9] to any matroid constraint, and hypothesized that this is the optimal approximation factor. It was proved in [34] that this factor is indeed optimal for instances of curvature c with respect to the optimum (a technical variation of the definition, which depends on how values change when measured on top of the optimal solution).
In the following, we use total curvature to refer to the original definition of [9], to distinguish it from curvature with respect to the optimum [34].
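To make the notion concrete, the total curvature of a small set function can be computed by brute force. The following sketch is our illustration, not from the paper; the coverage function and helper names are hypothetical.

```python
# Illustration only (not from the paper): computing the total curvature
# c = max_j (f_∅(j) - f_{X-j}(j)) / f_∅(j) of a toy nondecreasing submodular
# function by brute force over the ground set.

def total_curvature(f, X):
    X = frozenset(X)
    c = 0.0
    for j in X:
        top = f(frozenset({j})) - f(frozenset())   # f_∅(j)
        bottom = f(X) - f(X - {j})                 # f_{X-j}(j)
        if top > 0:
            c = max(c, (top - bottom) / top)
    return c

# Coverage functions are submodular: f(S) = size of the union of covered items.
cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
f = lambda S: len(set().union(*(cover[j] for j in S))) if S else 0

print(total_curvature(f, cover))                      # 0.5 for this instance
print(total_curvature(lambda S: len(S), {1, 2, 3}))   # a linear function has c = 0
```

As the second call shows, a linear (modular) function has curvature 0, matching the remark that c measures distance from linearity.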

Figure 1: Comparison of approximation ratios for submodular maximization (the guarantee f(S)/f(O) as a function of the curvature c, for the previous bound [34] and this paper).

1.1 Our Contribution

Our main result is that given total curvature c ∈ [0, 1], the (1 − e^{-c})/c-approximation of Conforti and Cornuéjols [9] is suboptimal and can be improved to a (1 − c/e − O(ε))-approximation. We prove that this guarantee holds for the maximization of a nondecreasing submodular function subject to any matroid constraint, thus improving the result of [34] as well. We give two techniques that achieve this result: a modification of the continuous greedy algorithm of [5], and a variant of the local search algorithm of [17]. Using the same techniques, we obtain an approximation factor of 1 + c/((1 − c)e) + O(ε) for minimizing a nonincreasing supermodular function subject to a matroid constraint. Our approximation guarantees are strictly better than existing algorithms for every value of c except 0 and 1. The relevant ratios are plotted in Figures 1 and 2. In the case of minimization, we have also plotted the inverse approximation ratio to aid in comparison. We also derive complementary negative results, showing that no algorithm that evaluates f on only a polynomial number of sets can have approximation performance better than the algorithms we give. Thus, we resolve the question of optimal approximation as a function of curvature in both the submodular and the supermodular case.

1.2 Applications

We provide two applications of our results. In the first application, we are given a positive semidefinite matrix M. Let M[S, S] be the principal minor defined by the columns and rows indexed by the set S ⊆ {1, ..., n}. In the maximum entropy sampling problem (or, more precisely, in a generalization of that problem) we would like to find a set S with |S| = k maximizing f(S) = ln det M[S, S]. It is well-known that this set function f(S) is submodular [20] (many earlier and alternative proofs of that fact are known).
In addition, we know that solving this problem exactly is NP-hard [22] (see also Lee [27] for a survey of known optimization techniques for the problem). We consider the maximum entropy sampling problem when the matrix M has eigenvalues λ_1 ≥ ⋯ ≥ λ_n ≥ 1. Since the determinant of any matrix is just the product of its eigenvalues, the Cauchy Interlacing Theorem implies that the submodular function f(S) = ln det M[S, S] is nondecreasing. In addition, we can easily derive a bound on its curvature, c ≤ 1 − 1/λ_n (see the formal definition of curvature in Section 1.3). This immediately implies that our new algorithms for submodular maximization have an approximation guarantee of 1 − (1 − 1/λ_n)/e for the maximum entropy sampling problem.

Our second application is the Column-Subset Selection Problem arising in various machine learning settings. The goal is, given a matrix A ∈ R^{m×n}, to select a subset of k columns such that the matrix is well-approximated (say, in squared Frobenius norm) by a matrix whose columns are in the span of the selected k columns. This is a variant of feature selection, since the rows might correspond to examples and the columns to features. The problem is to select a subset of k features such that the remaining features can be approximated by linear combinations of the selected features. This is related but not identical to Principal Component Analysis (PCA), where we want to select a subspace of rank k (not necessarily generated by a subset of columns) such that the matrix is well approximated by its projection to this subspace. While PCA can be solved optimally by spectral methods, the Column-Subset Selection Problem is less well understood. Here we take the point of view of approximation algorithms: given a matrix A, we want to find a subset of k columns such that the squared Frobenius distance of A from its projection on the span of these k columns is minimized. To the best of our knowledge, this problem is not known to be NP-hard; on the other hand, the approximation factors of known algorithms are quite large.
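The maximum entropy sampling objective above is easy to experiment with numerically. The sketch below is ours (not the paper's algorithm): it evaluates f(S) = ln det M[S, S] on a tiny positive definite matrix and finds the best size-k subset by plain enumeration.

```python
import math
from itertools import combinations

# Our illustration of the maximum entropy sampling objective f(S) = ln det M[S,S],
# with the optimum found by brute-force enumeration (not the paper's algorithms).

def det(A):
    """Determinant by Laplace expansion; fine for tiny matrices."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(n))

def f(M, S):
    S = sorted(S)
    return math.log(det([[M[i][j] for j in S] for i in S]))

# A positive definite M with all eigenvalues >= 1, so f is nondecreasing.
M = [[2.0, 0.5, 0.0],
     [0.5, 2.0, 0.5],
     [0.0, 0.5, 2.0]]
k = 2
best = max(combinations(range(3), k), key=lambda S: f(M, S))
print(best, round(f(M, best), 4))   # (0, 2) 1.3863
```

Here columns 0 and 2 are uncorrelated, so they give the largest log-determinant (ln 4), matching the intuition that entropy sampling favors diverse subsets.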
The best known algorithm for the problem as stated is a (k + 1)-approximation algorithm given by Deshpande and Rademacher [10]. For the related problem in which we may select any set of r ≥ k columns that form a rank-k submatrix of A, Deshpande and Vempala [11] showed that there exist matrices for which Ω(k/ε) columns must be chosen to obtain a (1 + ε)-approximation. Boutsidis et al. [2] give a matching algorithm, which obtains a set of O(k/ε) columns that give a (1 + ε)-approximation.

Figure 2: Comparison of approximation ratios for supermodular minimization (both the approximation ratio f(S)/f(O) and the inverse ratio f(O)/f(S) as functions of the curvature c, for the previous bound [19] and this paper).

We refer the reader to [2] for further background on the history of this and related problems. Here, we return to the setting in which only k columns of A may be chosen, and show that this is a special case of supermodular minimization with bounded curvature. We show a relationship between curvature and the condition number κ of A, which allows us to obtain an approximation factor of 1 + (κ^2 − 1)e^{-1} + κ^2·O(ε) = O(κ^2). We define the problem and the related notions more precisely in Section 8.

1.3 Related Work

The problem of maximizing a nondecreasing submodular function subject to a cardinality constraint (i.e., a uniform matroid) was studied by Nemhauser, Wolsey, and Fisher [30], who showed that the standard greedy algorithm gives a (1 − e^{-1})-approximation. However, in [18], they show that the greedy algorithm gives only a 1/2-approximation for maximizing a nondecreasing submodular function subject to an arbitrary matroid constraint. More recently, Calinescu et al. [5] obtained a (1 − e^{-1})-approximation for an arbitrary matroid constraint. In their approach, the continuous greedy algorithm first approximately maximizes the multilinear extension of the given submodular function, and then a pipage rounding technique inspired by [1] is applied to obtain an integral solution. The running time of this algorithm is dominated by the pipage rounding phase. Chekuri, Vondrák, and Zenklusen [6] later showed that pipage rounding can be replaced by an alternative rounding procedure called swap rounding, based on the exchange properties of the underlying constraint. In later work [8, 7], they developed the notion of a contention resolution scheme, which gives a unified treatment for a variety of constraints, and allows rounding approaches for the continuous greedy algorithm to be composed in order to solve submodular maximization problems under combinations of constraints.
Later, Filmus and Ward [16] obtained a (1 − e^{-1})-approximation for submodular maximization in an arbitrary matroid by using a non-oblivious local search algorithm that does not require rounding. On the negative side, Nemhauser and Wolsey [29] showed that it is impossible to improve upon the bound of (1 − e^{-1}) in the value oracle model, even under a single cardinality constraint. In this model, f is given as a value oracle and an algorithm can evaluate f on only a polynomial number of sets. Feige [15] showed that (1 − e^{-1}) is the best possible approximation even when the function is given explicitly, unless P = NP. In later work, Vondrák [33] introduced the notion of the symmetry gap of a submodular function f, which unifies many inapproximability results in the value oracle model, and proved new inapproximability results for some specific constrained settings. Later, Dobzinski and Vondrák [13] showed how these inapproximability bounds may be converted to matching complexity-theoretic bounds, which hold when f is given explicitly, under the assumption that RP ≠ NP. Conforti and Cornuéjols [9] defined the total curvature of a nondecreasing submodular function f as

(1.1)    c = max_{j∈X} [f_∅(j) − f_{X−j}(j)] / f_∅(j) = 1 − min_{j∈X} f_{X−j}(j) / f_∅(j).

They showed that the greedy algorithm has an approximation ratio of 1/(1 + c) for the problem of maximizing a nondecreasing submodular function with curvature at most c subject to a single matroid constraint. In the special case of a uniform matroid, they were able to show that the greedy algorithm is a (1 − e^{-c})/c-approximation algorithm. Later, Vondrák [34] considered the continuous greedy algorithm in the setting of bounded curvature. He introduced the notion of curvature with respect to the optimum, which is a weaker notion than total curvature, and showed that the continuous greedy algorithm is a (1 − e^{-c})/c-approximation for maximizing a nondecreasing submodular function f subject to an arbitrary matroid constraint whenever f has curvature at most c with respect to the optimum. He also showed that it is impossible to obtain a ((1 − e^{-c})/c + ε)-approximation in this setting when evaluating f on only a polynomial number of sets. Unfortunately, unlike total curvature, it is in general not possible to compute the curvature of a function with respect to the optimum, as it requires knowledge of an optimal solution. We shall also consider the problem of minimizing nonincreasing supermodular functions f : 2^X → R≥0. By analogy with total curvature, Il'ev [19] defines the steepness s of a nonincreasing supermodular function. His definition, which is stated in terms of the marginal decreases of the function, is equivalent to (1.1) when reformulated in terms of marginal gains. He showed that, in contrast to submodular maximization, the simple greedy heuristic does not give a constant-factor approximation algorithm in the general case. However, when the supermodular function f has total curvature at most c, he shows that the reverse greedy algorithm is an (e^p − 1)/p-approximation, where p = c/(1 − c).

2 Preliminaries

We now fix some of our notation and give two lemmas pertaining to functions with bounded curvature.

2.1 Set Functions

A set function f : 2^X → R≥0 is submodular if f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B) for all A, B ⊆ X.
Submodularity can equivalently be characterized in terms of marginal values, defined by f_A(i) = f(A + i) − f(A) for i ∈ X and A ⊆ X − i. Then, f is submodular if and only if f_A(i) ≥ f_B(i) for all A ⊆ B ⊆ X and i ∉ B. That is, submodular functions are characterized by decreasing marginal values. Intuitively, f is supermodular if and only if −f is submodular. That is, f is supermodular if and only if f_A(i) ≤ f_B(i) for all A ⊆ B ⊆ X and i ∉ B. Finally, we say that a function is nondecreasing, or monotone increasing, if f_A(i) ≥ 0 for all i ∈ X and A ⊆ X − i, and nonincreasing, or monotone decreasing, if f_A(i) ≤ 0 for all i ∈ X and A ⊆ X − i.

2.2 Matroids

We now present the definitions and notations that we shall require when dealing with matroids. We refer the reader to [31] for a detailed introduction to basic matroid theory. Let M = (X, I) be a matroid defined on ground set X with independent sets given by I. We denote by B(M) the set of all bases (inclusion-wise maximal sets in I) of M. We denote by P(M) the matroid polytope for M, given by

P(M) = conv{1_I : I ∈ I} = {x ≥ 0 : Σ_{j∈S} x_j ≤ r_M(S) for all S ⊆ X},

where r_M denotes the rank function associated with M. The second equality above is due to Edmonds [14]. Similarly, we denote by B(M) the base polytope associated with M:

B(M) = conv{1_I : I ∈ B(M)} = {x ∈ P(M) : Σ_{j∈X} x_j = r_M(X)}.

For a matroid M = (X, I), we denote by M* the dual matroid (X, I*), whose independent sets I* are defined as those subsets A ⊆ X that satisfy A ∩ B = ∅ for some B ∈ B(M) (i.e., those subsets that are disjoint from some base of M). A standard result of matroid theory shows that M* is a matroid whenever M is a matroid, and, moreover, B(M*) is precisely the set {X \ B : B ∈ B(M)} of complements of bases of M. Finally, given a set of elements D ⊆ X, we denote by M|D the matroid (D, I′) obtained by restricting M to D. The independent sets I′ of M|D are simply those independent sets of M that contain only elements from D. That is, I′ = {A ∈ I : A ⊆ D}.
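The marginal-value characterization above lends itself to a direct (exponential-time) check on toy instances. This is our sketch, with a hypothetical coverage example; it is only practical for tiny ground sets.

```python
from itertools import combinations

# Our brute-force check of the marginal-value characterization: f is submodular
# iff f_A(i) >= f_B(i) for all A ⊆ B and i outside B. Exponential in |X|.

def marginal(f, A, i):
    return f(A | {i}) - f(A)

def is_submodular(f, X):
    X = frozenset(X)
    subsets = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]
    return all(marginal(f, A, i) >= marginal(f, B, i)
               for A in subsets for B in subsets if A <= B
               for i in X - B)

cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}}
f = lambda S: len(set().union(*(cover[j] for j in S))) if S else 0
print(is_submodular(f, cover))                           # coverage: submodular
print(is_submodular(lambda S: len(S) ** 2, {1, 2, 3}))   # |S|^2: supermodular, not submodular
```

The second example, |S|^2, has increasing marginals (2|A| + 1), so the check correctly rejects it.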
2.3 Lemmas for Functions with Bounded Curvature

We now give two general lemmas pertaining to functions of bounded curvature that will be useful in our analysis. The proofs, which follow directly from (1.1), are given in the Appendix.

Lemma 2.1. If f : 2^X → R≥0 is a monotone increasing submodular function with total curvature at most c, then Σ_{j∈A} f_{X−j}(j) ≥ (1 − c)·f(A) for all A ⊆ X.

Lemma 2.2. If f : 2^X → R≥0 is a monotone decreasing supermodular function with total curvature at most c, then (1 − c) Σ_{j∈A} (−f_∅(j)) ≤ f(X \ A) for all A ⊆ X.
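Lemma 2.1 is easy to sanity-check numerically on a small instance. The following is our brute-force spot check on a hypothetical coverage function, combining the curvature definition (1.1) with the lemma's inequality over all subsets A.

```python
from itertools import combinations

# Our numerical spot check of Lemma 2.1 on a toy coverage function: with total
# curvature c, the "top" marginals of A sum to at least (1 - c) f(A).

cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
X = frozenset(cover)
f = lambda S: len(set().union(*(cover[j] for j in S))) if S else 0

def top_marginal(j):                      # f_{X-j}(j)
    return f(X) - f(X - {j})

# Total curvature per definition (1.1).
c = max((f({j}) - top_marginal(j)) / f({j}) for j in X)

ok = all(sum(top_marginal(j) for j in A) >= (1 - c) * f(A) - 1e-12
         for r in range(len(X) + 1) for A in map(frozenset, combinations(X, r)))
print(round(c, 2), ok)   # 0.5 True
```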

3 Submodular + Linear Maximization

Our new results for both submodular maximization and supermodular minimization with bounded curvature make use of an algorithm for the following meta-problem: we are given a monotone increasing, nonnegative, submodular function g : 2^X → R≥0, a linear function l : 2^X → R, and a matroid M = (X, I), and must find a base S ∈ B(M) maximizing g(S) + l(S). Note that we do not require l to be nonnegative. Indeed, in the case of supermodular minimization (discussed in Section 6.2), our approach shall require that l be a nonpositive, monotone decreasing function. For j ∈ X, we shall write l(j) and g(j) as a shorthand for l({j}) and g({j}). We note that because l is linear, we have l(A) = Σ_{j∈A} l(j) for all A ⊆ X. Let v̂_g = max_{j∈X} g(j), v̂_l = max_{j∈X} |l(j)|, and v̂ = max(v̂_g, v̂_l). Then, because g is submodular and l is linear, we have both g(A) ≤ n·v̂ and |l(A)| ≤ n·v̂ for every set A ⊆ X. Moreover, given l and g, we can easily compute v̂ in time O(n). Our main technical result is the following, which gives a joint approximation for g and l.

Theorem 3.1. For every ε > 0, there is an algorithm that, given a monotone increasing submodular function g : 2^X → R≥0, a linear function l : 2^X → R, and a matroid M, produces a set S ∈ B(M) in polynomial time satisfying

g(S) + l(S) ≥ (1 − e^{-1})·g(O) + l(O) − O(ε)·v̂

for every O ∈ B(M), with high probability.

In the next two sections, we give two different algorithms satisfying the conditions of Theorem 3.1.

4 A Modified Continuous Greedy Algorithm

The first algorithm we consider is a modification of the continuous greedy algorithm of [5]. We first sketch the algorithm conceptually in the continuous setting, ignoring certain technicalities.

4.1 Overview of the Algorithm

Consider x ∈ [0, 1]^X. For any function f : 2^X → R, the multilinear extension of f is a function F : [0, 1]^X → R given by F(x) = E[f(R_x)], where R_x is a random subset of X in which each element e appears independently with probability x_e.
We let G denote the multilinear extension of the given monotone increasing submodular function g, and L denote the multilinear extension of the given linear function l. Note that due to the linearity of expectation, L(x) = E[l(R_x)] = Σ_{j∈X} x_j·l(j). That is, the multilinear extension L corresponds to the natural linear extension of l. Let P(M) and B(M) be the matroid polytope and matroid base polytope associated with M, and let O be the arbitrary base in B(M) to which we shall compare our solution in Theorem 3.1. Our algorithm is shown in Figure 3.

Modified Continuous Greedy
  Guess the values of λ = l(O) and γ = g(O).
  Initialize x ← 0.
  For time running from t = 0 to t = 1, update x according to dx/dt = v(t), where v(t) ∈ P(M) is a vector satisfying both:
    L(v(t)) ≥ λ
    v(t) · ∇G(x(t)) ≥ γ − G(x(t)).
  Such a vector exists because 1_O is one possible candidate (as in the analysis of [5]).
  Apply pipage rounding to the point x(1) and return the resulting solution.

Figure 3: The modified continuous greedy algorithm

In contrast to the standard continuous greedy algorithm, in the third step we require a direction under which L is at least the value λ = l(O) and which also recovers the residual value γ − G(x(t)). Applying the standard continuous greedy algorithm to (g + l) gives a direction that is larger than the sum of these two values, but this is insufficient for our purposes. Our analysis proceeds separately for L(x) and G(x). First, because L is linear, we obtain

dL/dt = L(dx/dt) = L(v(t)) ≥ λ

for every time t, and hence L(x(1)) ≥ λ = l(O). For the submodular component, we obtain

dG/dt = v(t) · ∇G(x(t)) ≥ γ − G(x(t)),

similar to the analysis in [5]. This leads to a differential equation that gives G(x(t)) ≥ (1 − e^{-t})·γ. The final pipage rounding phase is oblivious to the value of the objective function and so is not affected by the potential negativity of l. We can view G + L as the multilinear extension of g + l and so, as in the standard continuous greedy analysis, pipage rounding produces an integral solution S satisfying

g(S) + l(S) ≥ G(x(1)) + L(x(1)) ≥ (1 − e^{-1})·g(O) + l(O).
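The conceptual loop above can be sketched in code. The following is our simplified sketch of the *standard* continuous greedy for a uniform matroid (cardinality constraint) — not the paper's modified variant with the extra constraint L(v) ≥ λ — with the gradient of G estimated by sampling; the coverage instance and function names are hypothetical.

```python
import random

# Our sketch of standard continuous greedy under a cardinality constraint:
# x moves toward the indicator of the k elements with largest sampled marginal
# gains, an estimate of a direction v maximizing v . ∇G(x). Illustration only.

def continuous_greedy(f, n, k, steps=20, samples=400, seed=0):
    rng = random.Random(seed)
    x = [0.0] * n
    for _ in range(steps):
        gains = [0.0] * n
        for _ in range(samples):
            R = {i for i in range(n) if rng.random() < x[i]}
            base = f(R)
            for i in range(n):
                if i not in R:
                    gains[i] += f(R | {i}) - base   # sampled E[f(R + i) - f(R)]
        best = sorted(range(n), key=lambda i: -gains[i])[:k]
        for i in best:                              # move along a base of the
            x[i] = min(1.0, x[i] + 1.0 / steps)     # uniform matroid
    return x

cover = {0: {"a", "b"}, 1: {"b"}, 2: {"c"}, 3: set()}
f = lambda S: len(set().union(*(cover[i] for i in S))) if S else 0
x = continuous_greedy(f, n=4, k=2)
print([round(v, 2) for v in x])   # mass concentrates on elements 0 and 2
```

On this instance element 1 is redundant once element 0 is (fractionally) present and element 3 covers nothing, so the fractional solution concentrates on the optimal pair {0, 2}.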

4.2 Implementation of the Modified Continuous Greedy Algorithm

Now we discuss the technical details of how the continuous greedy algorithm can be implemented efficiently. There are three main issues that we ignored in our previous discussion: (1) How do we guess the values of l(O) and g(O)? (2) How do we find a suitable direction v(t) in each step of the algorithm? (3) How do we discretize time efficiently? Let us now address them one by one.

Guessing the optimal values: In fact, it is enough to guess the value of l(O); we will optimize over v(t) · ∇G later. Recall that |l(O)| ≤ n·v̂. We discretize the interval [−n·v̂, n·v̂] with O(ε^{-1}) points of the form i·ε·v̂ for −ε^{-1} ≤ i ≤ ε^{-1}, filling the interval [−v̂, v̂], together with O(ε^{-1} n log n) points of the form (1 + ε/n)^i·v̂ and −(1 + ε/n)^i·v̂ for 0 ≤ i ≤ log_{1+ε/n} n, filling the intervals [v̂, n·v̂] and [−n·v̂, −v̂], respectively. We then run the following algorithm using each point as a guess for λ, and return the best solution found. Then if |l(O)| < v̂, we must have l(O) ≥ λ ≥ l(O) − ε·v̂ for some iteration (using one of the guesses in [−v̂, v̂]). Similarly, if |l(O)| ≥ v̂, then for some iteration we have l(O) ≥ λ ≥ l(O) − (ε/n)·|l(O)| ≥ l(O) − ε·v̂ (using one of the guesses in [v̂, n·v̂] or [−n·v̂, −v̂]). For the remainder of our analysis we consider this particular iteration.

Finding a suitable direction: Given our guess of λ and a current solution x(t), our goal is to find a direction v(t) ∈ P(M) such that L(v(t)) ≥ λ and v(t) · ∇G(x(t)) ≥ γ − G(x(t)). As in [5], we must estimate ∇G(x(t)) by random sampling. Then, given an estimate ∇̃G, we solve the linear program

(4.2)    max{v · ∇̃G : v ∈ B(M), L(v) ≥ λ}.

We can do this by the ellipsoid method, for example (or more efficiently using other methods). Following the analysis of [5], we can obtain, in polynomial time, an estimate satisfying v(t) · ∇̃G ≥ v(t) · ∇G(x(t)) − O(ε)·v̂, with high probability. Since L(O) = l(O) ≥ λ, the base O is a feasible solution of (4.2).
Because G(x(t)) is concave along any nonnegative direction, we then have

(4.3)    v · ∇G(x(t)) ≥ g(O) − G(x(t)) − O(ε)·v̂,

just as in the analysis of [5]. (In the applications we consider, l is either nonnegative or nonpositive, and so we need only consider half of the given interval; for simplicity, here we give a general approach that does not depend on the sign of l. In general, we have favored, whenever possible, simplicity in the analysis over obtaining the best runtime bounds.)

Discretizing the algorithm: We discretize time into steps of length δ = ε/n²; let us assume for simplicity that 1/δ is an integer. In each step, we find a direction v(t) as described above and we update x(t + δ) = x(t) + δ·v(t). Clearly, after 1/δ steps we obtain a solution x(1) which is a convex combination of points v(t) ∈ P(M), and therefore a feasible solution. In each step, we had L(v(t)) ≥ λ and so

L(x(1)) = δ Σ_{i=1}^{1/δ} L(v(i·δ)) ≥ λ ≥ l(O) − ε·v̂.

The analysis of the submodular component follows along the lines of [5]. In one time step, we gain

G(x(t + δ)) − G(x(t)) ≥ (1 − nδ)·(x(t + δ) − x(t)) · ∇G(x(t))
  ≥ (1 − nδ)·δ·[g(O) − G(x(t)) − O(ε)·v̂]
  ≥ δ·[g(O) − G(x(t)) − O(ε)·v̂ − (ε/n)·g(O)]
  ≥ δ·[g(O) − G(x(t)) − O(ε)·v̂],

using the bound (4.3). By induction (as in [5]), we obtain G(x(t)) ≥ (1 − e^{-t})·g(O) − O(ε)·v̂, and so

G(x(1)) + L(x(1)) ≥ (1 − e^{-1})·g(O) + l(O) − O(ε)·v̂,

as required.

5 Non-Oblivious Local Search

We now give another proof of Theorem 3.1, using a modification of the local search algorithm of [17]. In contrast to the modified continuous greedy algorithm, our modified local search algorithm does not need to guess the optimal value of l(O), and also does not need to solve the associated continuous optimization problem given in (4.2). However, here the convergence time of the algorithm becomes an issue that must be dealt with.

5.1 Overview of the Algorithm

We begin by presenting a few necessary lemmas and definitions from the analysis of [17]. We shall require the following general property of matroid bases, first proved by Brualdi [3], which can also be found in, e.g., [31, Corollary 39.12a].
Lemma 5.1. Let M be a matroid and let A and B be two bases in B(M). Then, there exists a bijection π : A → B such that A − x + π(x) ∈ B(M) for all x ∈ A.
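Lemma 5.1 can be verified by brute force on a tiny example. The sketch below is ours, for a small partition matroid (whose bases take exactly one element from each part); it searches all bijections between two bases for one whose swaps are all feasible.

```python
from itertools import permutations

# Our brute-force check of Lemma 5.1 on a tiny partition matroid: for bases A
# and B, find a bijection pi with A - x + pi(x) a base for every x in A.

parts = [{0, 1}, {2, 3}, {4, 5}]

def is_base(S):
    return all(len(S & P) == 1 for P in parts)

A, B = [0, 2, 4], [1, 3, 5]
bijections = [pi for pi in permutations(B)
              if all(is_base(set(A) - {a} | {b}) for a, b in zip(A, pi))]
print(bijections)   # here exactly one bijection works: (1, 3, 5)
```

For a partition matroid the exchange A − a + b is a base exactly when b lies in the same part as a, which is why the part-respecting bijection is the unique valid one here.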

Non-Oblivious Local Search
  Let δ = (ε/n)·v̂.
  S ← an arbitrary base S_0 ∈ B(M).
  While there exist a ∈ S and b ∈ X \ S such that S − a + b ∈ B(M) and
    ψ̃(S − a + b) ≥ ψ̃(S) + δ,
  set S ← S − a + b.
  Return S.

Figure 4: The non-oblivious local search algorithm

We can restate Lemma 5.1 as follows: let A = {a_1, ..., a_r} and B be bases of a matroid M of rank r. Then we can index the elements b_i of B so that b_i = π(a_i), and then we have that A − a_i + b_i ∈ B(M) for all 1 ≤ i ≤ r. The resulting collection of sets {A − a_i + b_i}_{i=1}^{r} will define the set of feasible swaps between the bases A and B that we consider when analyzing our local search algorithm. The local search algorithm of [17] maximizes a monotone submodular function g using a simple local search routine that evaluates the quality of the current solution using an auxiliary potential h, derived from g as follows:

h(A) = Σ_{B⊆A} (∫_0^1 e^{p−1} p^{|B|−1} (1 − p)^{|A|−|B|} dp) g(B).

We defer a discussion of issues related to convergence and computing h until the next subsection, and first sketch the main idea of our modified algorithm. We shall make use of the following fact, proved in [17, Lemma 4.4]: for all A, g(A) ≤ h(A) ≤ C·g(A)·ln n, for some constant C. In order to jointly maximize g(S) + l(S), we employ a modified local search algorithm that is guided by the potential ψ, given by

ψ(A) = (1 − e^{-1})·h(A) + l(A).

Our final algorithm is shown in Figure 4. The following lemma shows that if it is impossible to significantly improve ψ(S) by exchanging a single element, then both g(S) and l(S) must have relatively high values.

Lemma 5.2. Let A = {a_1, ..., a_r} and B = {b_1, ..., b_r} be any two bases of a matroid M, and suppose that the elements of B are indexed according to Lemma 5.1, so that A − a_i + b_i ∈ B(M) for all 1 ≤ i ≤ r. Then,

g(A) + l(A) ≥ (1 − e^{-1})·g(B) + l(B) + Σ_{i=1}^r [ψ(A) − ψ(A − a_i + b_i)].

Proof. Filmus and Ward [17, Theorem 5.1, p. 526] show that for any submodular function g, the associated function h satisfies

(5.4)    (e/(e − 1))·g(A) ≥ g(B) + Σ_{i=1}^r [h(A) − h(A − a_i + b_i)].
We note that since l is linear, we have

(5.5)    l(A) = l(B) + Σ_{i=1}^r [l(a_i) − l(b_i)] = l(B) + Σ_{i=1}^r [l(A) − l(A − a_i + b_i)].

Adding (1 − e^{-1}) times (5.4) to (5.5) then completes the proof.

Suppose that S ∈ B(M) is locally optimal for ψ under single-element exchanges, and let O be an arbitrary base of M. Then, local optimality of S implies that ψ(S) − ψ(S − s_i + o_i) ≥ 0 for all i ∈ [r], where the elements s_i of S and o_i of O have been indexed according to Lemma 5.1. Then, Lemma 5.2 gives g(S) + l(S) ≥ (1 − e^{-1})·g(O) + l(O), as required by Theorem 3.1.

5.2 Implementation of the Non-Oblivious Local Search Algorithm

We now show how to obtain a polynomial-time algorithm from Lemma 5.2. We face two technical difficulties: (1) how do we compute ψ efficiently in polynomial time; and (2) how do we ensure that the search for improvements converges to a local optimum in polynomial time? As in the case of the continuous greedy algorithm, we can address these issues by using standard techniques, but we must be careful since l may take negative values. As in that case, we have not attempted to obtain the most efficient possible running time analysis here, focusing instead on simplifying the arguments.

Estimating ψ efficiently: Although the definition of h requires evaluating g on a potentially exponential number of sets, Filmus and Ward show that h can be estimated efficiently using a sampling procedure:

Lemma 5.3. ([17, Lemma 5.1, p. 525]) Let h̃(A) be an estimate of h(A) computed using N = Ω(ε^{-2} ln² n · ln M) samples of g. Then,

Pr[|h̃(A) − h(A)| ≥ ε·h(A)] = O(M^{-1}).

We let ψ̃(A) = (1 − e^{-1})·h̃(A) + l(A) be an estimate of ψ. Set δ = (ε/n)·v̂. We shall ensure that ψ̃(A) differs from ψ(A) by at most

δ = (ε/n)·v̂ ≥ (ε/n²)·g(A) = (ε/(C n² ln n))·C·g(A)·ln n ≥ (ε/(C n² ln n))·h(A).

Applying Lemma 5.3, we can then ensure that

Pr[|ψ̃(A) − ψ(A)| ≥ δ] = O(M^{-1})

by using Ω(ε^{-2} n⁴ ln⁴ n · ln M) samples for each computation of ψ̃. By the union bound, we can ensure that |ψ̃(A) − ψ(A)| ≤ δ holds with high probability for all sets A considered by the algorithm, by setting M appropriately. In particular, if we evaluate ψ̃ on any polynomial number of distinct sets A, it suffices to make M polynomially large, which requires only a polynomial number of samples for each evaluation.

Bounding the convergence time of the algorithm: We initialize our search with an arbitrary base S_0 ∈ B(M), and at each step of the algorithm, we restrict our search to those improvements that yield a significant increase in the value of ψ̃. Specifically, we require that each improvement increases the current value of ψ̃ by at least an additive term δ = (ε/n)·v̂. We now bound the total number of improvements made by the algorithm. We suppose that all values ψ̃(A) computed by the algorithm satisfy ψ(A) − δ ≤ ψ̃(A) ≤ ψ(A) + δ. From the previous discussion, we can ensure that this is indeed the case with high probability. Let O_ψ = arg max_{A∈B(M)} ψ(A). Then, the total number of improvements applied by the algorithm is at most

δ^{-1}·(ψ̃(O_ψ) − ψ̃(S_0)) ≤ δ^{-1}·(ψ(O_ψ) − ψ(S_0) + 2δ)
  ≤ δ^{-1}·((1 − e^{-1})·(h(O_ψ) − h(S_0)) + l(O_ψ) − l(S_0) + 2δ)
  ≤ δ^{-1}·((1 − e^{-1})·h(O_ψ) + |l(O_ψ)| + |l(S_0)| + 2δ)
  ≤ δ^{-1}·((1 − e^{-1})·C·g(O_ψ)·ln n + |l(O_ψ)| + |l(S_0)| + 2δ)
  ≤ δ^{-1}·((1 − e^{-1})·C·v̂·n·ln n + n·v̂ + n·v̂ + 2δ) = O(ε^{-1} n² ln n).

Each improvement step requires O(n²) evaluations of ψ̃.
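Stripped of the sampling machinery, the overall search loop can be sketched as follows. This is our toy version: a uniform matroid, the exact objective g + l in place of the estimated potential ψ̃, and a hypothetical coverage instance; only the δ-improvement rule of the real algorithm is kept.

```python
# Our toy version of exchange local search over the bases of a uniform matroid,
# guided directly by g + l rather than the paper's estimated potential, but
# keeping the rule that each swap must improve by at least an additive delta.

def local_search(obj, X, k, delta=1e-9):
    S = set(sorted(X)[:k])                    # arbitrary initial base S_0
    improved = True
    while improved:
        improved = False
        for a, b in ((a, b) for a in list(S) for b in X - S):
            T = S - {a} | {b}                 # single-element exchange, |T| = k
            if obj(T) >= obj(S) + delta:
                S, improved = T, True
                break                         # rescan from the new base
    return S

cover = {0: {"a", "b"}, 1: {"b"}, 2: {"c", "d"}, 3: {"d"}}
g = lambda S: len(set().union(*(cover[i] for i in S))) if S else 0
l = lambda S: -0.1 * len(S & {1, 3})          # a nonpositive linear part
obj = lambda S: g(S) + l(S)

print(sorted(local_search(obj, set(cover), k=2)))   # [0, 2]
```

Starting from the base {0, 1}, two swaps lead to {0, 2}, which is locally (and here globally) optimal for g + l.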
From the discussion in the previous section, setting M sufficiently high ensures that all of the estimates made for the first O(ε^{-1} n² ln n) iterations will satisfy our assumptions with high probability, and so the algorithm will converge in polynomial time. In order to obtain a deterministic bound on the running time of the algorithm, we simply terminate our search if it has not converged in O(ε^{-1} n² ln n) steps and return the current solution. Then, when the resulting algorithm terminates, with high probability we indeed have ψ̃(S) ≥ ψ̃(S − s_i + o_i) − δ for every i ∈ [r], and so

Σ_{i=1}^r [ψ(S) − ψ(S − s_i + o_i)] ≥ −r·(δ + 2δ) ≥ −3ε·v̂.

From Lemma 5.2, the set S produced by the algorithm then satisfies

g(S) + l(S) ≥ (1 − e^{-1})·g(O) + l(O) − O(ε)·v̂,

as required by Theorem 3.1.

6 Submodular Maximization and Supermodular Minimization

We now return to the problems of submodular maximization and supermodular minimization with bounded curvature. We reduce both problems to the general setting introduced in Section 3. In both cases, we suppose that we are seeking to optimize a function f : 2^X → R≥0 over a given matroid M = (X, I), and we let O denote an optimal base of M (i.e., a base of M that either maximizes or minimizes f, according to the setting).

6.1 Submodular Maximization

Suppose that f is a monotone increasing submodular function with curvature at most c ∈ [0, 1], and we seek to maximize f over a matroid M.

Theorem 6.1. For every ε > 0 and c ∈ [0, 1], there is an algorithm that, given a monotone increasing submodular function f : 2^X → R≥0 of curvature c and a matroid M = (X, I), produces a set S ∈ I in polynomial time satisfying f(S) ≥ (1 − c/e − O(ε))·f(O) for every O ∈ I, with high probability.

Proof. Define the functions:

l(A) = Σ_{j∈A} f_{X−j}(j)
g(A) = f(A) − l(A).

Then, l is linear and g is submodular, monotone increasing, and nonnegative (as verified in Lemma A.1 of the appendix). Moreover, because f has curvature at most c, Lemma 2.1 implies that for any set A ⊆ X, l(A) = Σ_{j∈A} f_{X−j}(j) ≥ (1 − c)·f(A). In order to apply Theorem 3.1, we must bound the term v̂. By optimality of O and nonnegativity of l and g, we have v̂ ≤ g(O) + l(O) = f(O). From Theorem 3.1, we can find a solution S satisfying:

f(S) = g(S) + l(S) ≥ (1 − e^{-1})·g(O) + l(O) − O(ε)·f(O)
  = (1 − e^{-1})·f(O) + e^{-1}·l(O) − O(ε)·f(O)
  ≥ (1 − e^{-1})·f(O) + (1 − c)·e^{-1}·f(O) − O(ε)·f(O)
  = (1 − c·e^{-1} − O(ε))·f(O).

6.2 Supermodular Minimization

Suppose that f is a monotone decreasing supermodular function with curvature at most c ∈ [0, 1), and we seek to minimize f over a matroid M.

Theorem 6.2. For every ε > 0 and c ∈ [0, 1), there is an algorithm that, given a monotone decreasing supermodular function f : 2^X → R≥0 of curvature c and a matroid M = (X, I), produces a set S ∈ I in polynomial time satisfying

f(S) ≤ (1 + c/((1 − c)·e) + O(ε)/(1 − c))·f(O)

for every O ∈ I, with high probability.

Proof. Define the linear and submodular functions:

l(A) = Σ_{j∈A} f_∅(j)
g(A) = −l(A) − f(X \ A).

Because f is monotone decreasing, we have f_∅(j) ≤ 0 and so l(A) ≤ 0 for all A ⊆ X. Thus, l is a nonpositive, decreasing linear function. However, as we verify in Lemma A.2 of the appendix, g is submodular, monotone increasing, and nonnegative. We shall consider the problem of maximizing g(S) + l(S) = −f(X \ S) in the dual matroid M*, whose bases correspond to complements of bases of M. We compare our solution S for this problem to the base O* = X \ O of M*. Again, in order to apply Theorem 3.1, we must bound the term v̂. Here, because l(A) is nonpositive, we cannot bound v̂ directly as in the previous section. Rather, we proceed by partial enumeration. Let ê = arg max_{j∈O*} max(g(j), |l(j)|). We iterate through all possible guesses e ∈ X for ê, and for each such e consider v̂_e = max(g(e), |l(e)|). We set X_e to be the set {j ∈ X : g(j) ≤ v̂_e and |l(j)| ≤ v̂_e}, and consider the matroid M*_e = M*|X_e, obtained by restricting M* to the ground set X_e.
For each e satisfying r_{M*_e}(X_e) = r_{M*}(X), we apply our algorithm to the problem max{g(A) + l(A) : A ∈ M*_e}, and return the best solution S obtained. Note that since r_{M*_e}(X_e) = r_{M*}(X), the set S is also a base of M* and so X \ S is a base of M. Consider the iteration in which we correctly guess e = ê. In the corresponding restricted instance we have g({j}) ≤ ˆv_e ≤ ˆv and |l({j})| ≤ ˆv_e ≤ ˆv for all j ∈ X_e. Additionally, O′ ⊆ X_e and so O′ ∈ B(M*_e), as required by our analysis. Finally, from the definition of g and l, we have f(O) = −l(O′) − g(O′). Since ê ∈ O′, and l is nonpositive while f is nonnegative,

ˆv ≤ g(O′) + |l(O′)| = −l(O′) − f(O) − l(O′) ≤ −2l(O′).

Therefore, by Theorem 3.1, the base S of M* returned by the algorithm satisfies:

g(S) + l(S) ≥ (1 − 1/e) g(O′) + l(O′) + O(ɛ) l(O′).

Finally, since f is supermodular with curvature at most c, Lemma 2.2 implies that for all A ⊆ X,

|l(A)| = −Σ_{j∈A} f_∅(j) ≤ f(X \ A)/(1 − c).

Thus, we have

f(X \ S) = −g(S) − l(S)
  ≤ −(1 − 1/e) g(O′) − l(O′) − O(ɛ) l(O′)
  = (1 − 1/e) f(O) − (1/e + O(ɛ)) l(O′)
  ≤ (1 − 1/e) f(O) + (1/e + O(ɛ)) · (1/(1 − c)) f(O)
  = (1 + c/((1 − c)e) + O(ɛ)) f(O).

We note that because the error term depends on c, our result requires that c is bounded away from 1 by a constant.

7 Inapproximability Results

We now show that our approximation guarantees are the best achievable using only a polynomial number of function evaluations, even in the special case that M is a uniform matroid (i.e., a cardinality constraint). Nemhauser and Wolsey [29] considered the problem of finding a set S of cardinality at most r that maximizes a monotone submodular function. They give a class of functions for which obtaining a (1 − 1/e + ɛ)-approximation for any constant ɛ > 0 requires a superpolynomial number of function evaluations. Our analysis uses the following additional property, satisfied by

every function f in their class: let p = max_{i∈X} f_∅(i), and let O be a set of size r on which f takes its maximum value. Then, f(O) = rp.

Theorem 7.1. For any constant δ > 0 and c ∈ (0, 1), there is no (1 − c/e + δ)-approximation algorithm for the problem max{ˆf(S) : |S| ≤ r}, where ˆf is a monotone increasing submodular function with curvature at most c, that evaluates ˆf on only a polynomial number of sets.

Proof. Let f be a function in the family given by [29] for the cardinality constraint r, and let O be a set of size r on which f takes its maximum value. Consider the function:

ˆf(A) = c f(A) + (1 − c)|A| p.

In Lemma A.3 of the appendix, we show that ˆf is monotone increasing, submodular, and nonnegative with curvature at most c. We consider the problem max{ˆf(S) : |S| ≤ r}. Let α = (1 − c/e + δ), and suppose that some algorithm returns a solution S satisfying ˆf(S) ≥ α ˆf(O), evaluating ˆf on only a polynomial number of sets. Because ˆf is monotone increasing, we assume without loss of generality that |S| = r. Then,

c f(S) + (1 − c) rp ≥ α ˆf(O) = α c f(O) + α(1 − c) rp = α f(O),

and so

f(S) ≥ (α f(O) − (1 − c) rp)/c = ((α − (1 − c))/c) f(O) = (1 − 1/e + δ/c) f(O) ≥ (1 − 1/e + δ) f(O).

Because each evaluation of ˆf requires only a single evaluation of f, this contradicts the negative result of [29].

Theorem 7.2. For any constant δ > 0 and c ∈ (0, 1), there is no (1 + c/((1 − c)e) − δ)-approximation algorithm for the problem min{ˆf(S) : |S| ≤ r}, where ˆf is a monotone decreasing supermodular function with curvature at most c, that evaluates ˆf on only a polynomial number of sets.

Proof. Again, let f be a function in the family given by [29] for the cardinality constraint r. Let O be a set of size r on which f takes its maximum value, and recall that f(O) = rp, where p = max_{i∈X} f_∅(i). Consider the function:

ˆf(A) = p|X \ A| − c f(X \ A).

In Lemma A.4 of the appendix we show that ˆf is monotone decreasing, supermodular, and nonnegative with curvature at most c. We consider the problem min{ˆf(A) : |A| ≤ n − r}.
Let α = (1 + c/((1 − c)e) − δ), and suppose that some algorithm returns a solution A satisfying ˆf(A) ≤ α ˆf(X \ O), evaluating ˆf on only a polynomial number of sets. Suppose we run the algorithm for minimizing ˆf and return the set S = X \ A. Because ˆf is monotone decreasing, we assume without loss of generality that |A| = n − r and so |S| = r. Then,

rp − c f(S) = ˆf(A) ≤ α ˆf(X \ O) = α(rp − c f(O)) = α(1 − c) rp = α(1 − c) f(O),

and so

f(S) ≥ (rp − α(1 − c) rp)/c = ((1 − α(1 − c))/c) f(O) = (1 − 1/e + ((1 − c)/c) δ) f(O) ≥ (1 − 1/e + (1 − c)δ) f(O).

Again, since each evaluation of ˆf requires only one evaluation of f, this contradicts the negative result of [29].

8 Application: the Column-Subset Selection Problem

Let A be an m × n real matrix. We denote the columns of A by c_1, ..., c_n; i.e., for x ∈ R^n, Ax = Σ_{i=1}^n x_i c_i. The (squared) Frobenius norm of A is defined as

‖A‖²_F = Σ_{i,j} a²_{ij} = Σ_i ‖c_i‖²,

where here, and throughout this section, we use ‖·‖ to denote the standard l_2 vector norm.
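The quantity studied in this section — the residual ‖A − A(S)‖²_F left after projecting every column of A onto the span of a selected column subset S — can be evaluated with a short numpy sketch (illustrative only; `f_A` is our name for it, and the identity checked matches the definition given below):

```python
import numpy as np

def f_A(A, S):
    """Residual ||A - A(S)||_F^2: project each column of A onto the
    span of the columns indexed by S (S may be empty)."""
    if not S:
        return float(np.sum(A * A))
    B = A[:, sorted(S)]
    P = B @ np.linalg.pinv(B)          # orthogonal projector onto span(B)
    R = A - P @ A                      # residual columns c_i - proj_S(c_i)
    return float(np.sum(R * R))

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 4))
# Selecting every column of a full-column-rank matrix leaves no residual.
full = f_A(M, {0, 1, 2, 3})
```

Since projections only shrink residuals, `f_A` is monotone decreasing in S, which is the behavior exploited below.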

For a matrix A with independent columns, the condition number is defined as

κ(A) = sup_{‖x‖=1} ‖Ax‖ / inf_{‖x‖=1} ‖Ax‖.

If the columns of A are dependent, then κ(A) = ∞ (there is a nonzero vector x such that Ax = 0). Given a matrix A with columns c_1, ..., c_n, and a subset S ⊆ [n], we denote by

proj_S(x) = argmin_{y ∈ span({c_i : i∈S})} ‖x − y‖

the projection of x onto the subspace spanned by the respective columns of A. Given S ⊆ [n], it is easy to see that the matrix A(S) whose columns lie in the span of {c_i : i ∈ S} that is closest to A in squared Frobenius norm is A(S) = (proj_S(c_1), proj_S(c_2), ..., proj_S(c_n)). The distance between the two matrices is thus

‖A − A(S)‖²_F = Σ_i ‖c_i − proj_S(c_i)‖².

We define f_A : 2^[n] → R to be this quantity as a function of S:

f_A(S) = Σ_i ‖c_i − proj_S(c_i)‖² = Σ_i (‖c_i‖² − ‖proj_S(c_i)‖²),

where the final equality follows from the fact that proj_S(c_i) and c_i − proj_S(c_i) are orthogonal. In the following lemma, we show that the function ‖proj_S(c_i)‖² is a monotone increasing submodular function of S. It then follows that f_A is a monotone decreasing supermodular function.

Lemma 8.1. Let {c_i}_{i∈[n]} be a set of vectors in R^m, and let v be an arbitrary vector in R^m. Then, the function f(S) = ‖proj_S(v)‖² is monotone increasing and submodular.

Proof. Monotonicity follows from the standard properties of projection. To prove submodularity, let S ⊆ [n], and let y and x be two elements of [n] \ S. It suffices to show that for any v:

‖proj_{S∪{x,y}}(v)‖² − ‖proj_{S∪{y}}(v)‖² ≤ ‖proj_{S∪{x}}(v)‖² − ‖proj_S(v)‖².

To simplify our notation, let x = c_x and y = c_y. Note that if x ∈ span(S), then both sides of the inequality are 0, and if y ∈ span(S) both sides are equal. Suppose, then, that neither x nor y is in span(S), and let x′ = x − proj_S(x). Then,

‖proj_{S∪{x}}(v)‖² = ‖proj_S(v)‖² + ⟨x′, v⟩²/‖x′‖²,    so    ‖proj_{S∪{x}}(v)‖² − ‖proj_S(v)‖² = ⟨x′, v⟩²/‖x′‖².

Now, let y″ = proj_{S∪{x}}(y) and y′ = y − y″. Then, we have

‖proj_{S∪{x,y}}(v)‖² = ‖proj_S(v)‖² + ⟨x′, v⟩²/‖x′‖² + ⟨y′, v⟩²/‖y′‖².

Now, set the quantity q = 0 if y″ ∈ span(S), and q = ⟨y″ − proj_S(y″), v⟩/‖y″ − proj_S(y″)‖ otherwise.
Then,

‖proj_{S∪{y}}(v)‖² ≥ ‖proj_S(v)‖² + q² + ⟨y′, v⟩²/‖y′‖².

Hence,

‖proj_{S∪{x,y}}(v)‖² − ‖proj_{S∪{y}}(v)‖² ≤ ⟨x′, v⟩²/‖x′‖² − q² ≤ ⟨x′, v⟩²/‖x′‖² = ‖proj_{S∪{x}}(v)‖² − ‖proj_S(v)‖².

Given a matrix A ∈ R^{m×n} and an integer k, the column-subset selection problem (CSSP) is to select a subset S of k columns of A so as to minimize f_A(S). It follows from Lemma 8.1 that CSSP is a special case of supermodular minimization subject to a uniform matroid constraint. We now show that the curvature of f_A is related to the condition number of A.

Lemma 8.2. For any non-singular matrix A, the curvature of the associated set function f_A is c(f_A) ≤ 1 − κ⁻²(A).

Proof. Consider any i ∈ [n]. We want to prove that f^A_∅(i)/f^A_{[n]−i}(i) ≤ κ²(A). Recall that the marginal values of f_A are negative, but only the ratio matters, so we can consider the respective absolute values. First, let us analyze f^A_∅(i). We have

|f^A_∅(i)| = f_A(∅) − f_A({i}) = Σ_j (‖c_j‖² − ‖c_j − proj_{{i}}(c_j)‖²) = Σ_j ‖proj_{{i}}(c_j)‖²,

using the fact that proj_{{i}}(c_j) is orthogonal to c_j − proj_{{i}}(c_j). Our goal is to show that if this quantity is large, then there is a unit vector x such that ‖Ax‖

is large. In particular, let ĉ_i = c_i/‖c_i‖, and define p_j = ‖proj_{{i}}(c_j)‖ and x_j = ±p_j/√(Σ_{l=1}^n p_l²), with signs chosen so that x_j⟨ĉ_i, c_j⟩ ≥ 0. We have ‖x‖² = Σ_{j=1}^n x_j² = 1. Multiplying by matrix A, we obtain Ax = Σ_{j=1}^n x_j c_j. We can estimate ‖Ax‖ as follows:

‖Ax‖ ≥ ⟨ĉ_i, Ax⟩ = Σ_{j=1}^n x_j ⟨ĉ_i, c_j⟩ = Σ_{j=1}^n (p_j/√(Σ_l p_l²)) ‖proj_{{i}}(c_j)‖ = Σ_{j=1}^n p_j²/√(Σ_l p_l²).

By the Cauchy–Schwarz inequality, this implies that

‖Ax‖ ≥ √(Σ_{j=1}^n p_j²) = √|f^A_∅(i)|.

Next, let us analyze f^A_{[n]−i}(i). Since f_A([n]) = 0, we have

|f^A_{[n]−i}(i)| = f_A([n] − i) = Σ_j ‖c_j − proj_{[n]−i}(c_j)‖² = ‖c_i − proj_{[n]−i}(c_i)‖²,

since for j ≠ i, we have c_j = proj_{[n]−i}(c_j). We claim that if ‖c_i − proj_{[n]−i}(c_i)‖ is small, then there is a unit vector x′ such that ‖Ax′‖ is small. To this purpose, write proj_{[n]−i}(c_i) as a linear combination of the vectors {c_j : j ≠ i}:

proj_{[n]−i}(c_i) = Σ_{j≠i} y_j c_j.

Finally, we define y_i = −1, and normalize to obtain x′ = y/‖y‖. We get the following:

‖Ax′‖ = ‖Ay‖/‖y‖ = ‖Σ_{j=1}^n y_j c_j‖/‖y‖ = ‖proj_{[n]−i}(c_i) − c_i‖/‖y‖.

Since ‖y‖ ≥ 1, and ‖proj_{[n]−i}(c_i) − c_i‖ = √|f^A_{[n]−i}(i)|, we obtain ‖Ax′‖ ≤ √|f^A_{[n]−i}(i)|.

In summary, we have given two unit vectors x, x′ with ‖Ax‖/‖Ax′‖ ≥ √(|f^A_∅(i)|/|f^A_{[n]−i}(i)|). This proves that κ²(A) ≥ |f^A_∅(i)|/|f^A_{[n]−i}(i)| for each i ∈ [n].

Thus, applying the algorithmic results of Sections 4 and 5, we obtain algorithms for the column-subset selection problem with approximation factor

1 + (κ(A)² − 1)/e + κ(A)² · O(ɛ) = O(κ²).

The following lemma shows that Lemma 8.2 is asymptotically tight.

Lemma 8.3. There exists a matrix A with condition number κ for which the associated function f_A has curvature 1 − O(1/κ²).

Proof. Let us denote by dist_S(x) the distance from x to the subspace spanned by the columns corresponding to S:

dist_S(x) = ‖x − proj_S(x)‖ = min_{y ∈ span({c_i : i∈S})} ‖x − y‖.

For some ɛ > 0, consider A = (c_1, ..., c_n) where c_1 = e_1 and c_j = ɛe_1 + e_j for j ≥ 2. (A similar example was used in [2] for a lower bound on column-subset approximation.) Here, e_i is the i-th canonical basis vector in R^n. We claim that the condition number of A is κ = O(max{1, ɛ²(n−1)}), while the curvature of f_A is 1 − O(1/max{1, ɛ⁴(n−1)²}) = 1 − O(1/κ²). To bound the condition number, consider a unit vector x.
We have

Ax = (x_1 + ɛ Σ_{i=2}^n x_i, x_2, x_3, ..., x_n)

and

‖Ax‖² = (x_1 + ɛ Σ_{i=2}^n x_i)² + Σ_{i=2}^n x_i².

We need a lower bound and an upper bound on ‖Ax‖, assuming that ‖x‖ = 1. On the one hand, we have

‖Ax‖² ≤ 1 + (x_1 + ɛ Σ_{i=2}^n x_i)² ≤ 1 + (1 + ɛ√(n−1))² = O(max{1, ɛ²(n−1)}).

On the other hand, to get a lower bound: if |x_1| ≤ 1/2, then ‖Ax‖² ≥ Σ_{i=2}^n x_i² = 1 − x_1² ≥ 3/4. If |x_1| > 1/2, then either |Σ_{i=2}^n x_i| ≤ 1/(4ɛ), in which case

‖Ax‖² ≥ (x_1 + ɛ Σ_{i=2}^n x_i)² ≥ 1/16,

or |Σ_{i=2}^n x_i| > 1/(4ɛ), in which case by convexity we get

Σ_{i=2}^n x_i² ≥ 1/(16ɛ²(n−1)).
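For small n these estimates are easy to confirm numerically. The following sketch (not from the paper) builds the example matrix, computes its condition number by SVD, and evaluates the two marginals of f_A used in the curvature lower bound via least-squares projections:

```python
import numpy as np

def residual(A, S):                 # ||A - A(S)||_F^2 for column subset S
    if not S:
        return float(np.sum(A * A))
    B = A[:, sorted(S)]
    R = A - B @ np.linalg.lstsq(B, A, rcond=None)[0]
    return float(np.sum(R * R))

n, eps = 6, 0.5
A = np.eye(n)                       # start from columns e_1, ..., e_n
A[0, 1:] = eps                      # c_j = eps*e_1 + e_j for j >= 2

sv = np.linalg.svd(A, compute_uv=False)
kappa = sv[0] / sv[-1]              # condition number

ground = set(range(n))
top = residual(A, ground - {0}) - residual(A, ground)   # |f^A_{[n]-1}(1)|
bot = residual(A, set()) - residual(A, {0})             # |f^A_0(1)| = 1 + eps^2 (n-1)
curv_lb = 1 - top / bot             # the curvature of f_A is at least this

# Consistent with the analysis: curvature >= 1 - 1/(1 + eps^2 (n-1)).
assert curv_lb >= 1 - 1.0 / (1 + eps**2 * (n - 1))
```

Analytically, `top` equals 1/(1 + ɛ²(n−1)) and `bot` equals 1 + ɛ²(n−1) for this matrix, so the computed lower bound matches 1 − 1/(1 + ɛ²(n−1))² here.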

So, in all cases ‖Ax‖² = Ω(1/max{1, ɛ²(n−1)}). This means that the condition number of A is κ = O(max{1, ɛ²(n−1)}). To lower-bound the curvature of f_A, consider the first column and let us estimate f^A_∅(1) and f^A_{[n]\{1}}(1). We have

|f^A_∅(1)| = ‖c_1‖² + Σ_{j=2}^n ‖proj_{{1}}(c_j)‖² = 1 + ɛ²(n−1).

On the other side,

|f^A_{[n]\{1}}(1)| = ‖c_1 − proj_{[n]\{1}}(c_1)‖² = (dist_{[n]\{1}}(c_1))².

We exhibit a linear combination of the columns c_2, ..., c_n which is close to c_1: let y = (1/(ɛ(n−1))) Σ_{j=2}^n c_j. We obtain

dist_{[n]\{1}}(c_1) ≤ ‖c_1 − y‖ = ‖(1/(ɛ(n−1)))(0, 1, 1, ..., 1)‖ = 1/(ɛ√(n−1)).

Alternatively, we can also pick y = 0, which shows that dist_{[n]\{1}}(c_1) ≤ 1. So we have

|f^A_{[n]\{1}}(1)| = (dist_{[n]\{1}}(c_1))² ≤ min{1, 1/(ɛ²(n−1))} = 1/max{1, ɛ²(n−1)}.

We conclude that the curvature of f_A is at least

1 − 1/max{1, ɛ⁴(n−1)²} = 1 − O(1/κ²).

Acknowledgment

We thank Christos Boutsidis for suggesting a connection between curvature and condition number.

References

[1] Alexander Ageev and Maxim Sviridenko. Pipage rounding: A new method of constructing algorithms with proven performance guarantee. J. Combinatorial Optimization, 8(3), 2004.
[2] Christos Boutsidis, Petros Drineas, and Malik Magdon-Ismail. Near-optimal column-based matrix reconstruction. SIAM J. Comput., 43(2):687–717, 2014.
[3] Richard A. Brualdi. Comments on bases in dependence structures. Bull. of the Australian Math. Soc., 1(02):161–167, 1969.
[4] Gruia Calinescu, Chandra Chekuri, Martin Pál, and Jan Vondrák. Maximizing a submodular set function subject to a matroid constraint (extended abstract). In Proc. 12th IPCO, pages 182–196, 2007.
[5] Gruia Calinescu, Chandra Chekuri, Martin Pál, and Jan Vondrák. Maximizing a submodular set function subject to a matroid constraint. SIAM J. Comput., 40(6), 2011.
[6] Chandra Chekuri, Jan Vondrák, and Rico Zenklusen. Dependent randomized rounding via exchange properties of combinatorial structures. In Proc. 51st FOCS, 2010.
[7] Chandra Chekuri, Jan Vondrák, and Rico Zenklusen. Multi-budgeted matchings and matroid intersection via dependent rounding. In Proc. 22nd SODA, 2011.
[8] Chandra Chekuri, Jan Vondrák, and Rico Zenklusen.
Submodular function maximization via the multilinear relaxation and contention resolution schemes. In Proc. 43rd STOC, 2011.
[9] Michele Conforti and Gérard Cornuéjols. Submodular set functions, matroids and the greedy algorithm: Tight worst-case bounds and some generalizations of the Rado–Edmonds theorem. Discrete Applied Mathematics, 7(3):251–274, 1984.
[10] Amit Deshpande and Luis Rademacher. Efficient volume sampling for row/column subset selection. In Proc. 51st FOCS, 2010.
[11] Amit Deshpande and Santosh Vempala. Adaptive sampling and fast low-rank matrix approximation. In Proc. 9th APPROX. Springer, 2006.
[12] S. Dobzinski, N. Nisan, and M. Schapira. Approximation algorithms for combinatorial auctions with complement-free bidders. In Proc. 37th STOC, pages 610–618, 2005.
[13] Shahar Dobzinski and Jan Vondrák. From query complexity to computational complexity. In Proc. 44th STOC, pages 1107–1116, 2012.
[14] Jack Edmonds. Matroids and the greedy algorithm. Mathematical Programming, 1(1):127–136, 1971.
[15] Uriel Feige. A threshold of ln n for approximating set cover. J. ACM, 45:634–652, 1998.
[16] Yuval Filmus and Justin Ward. A tight combinatorial algorithm for submodular maximization subject to a matroid constraint. In Proc. 53rd FOCS, 2012.
[17] Yuval Filmus and Justin Ward. Monotone submodular maximization over a matroid via non-oblivious local search. SIAM J. Comput., 43(2):514–542, 2014.
[18] M.L. Fisher, G.L. Nemhauser, and L.A. Wolsey. An analysis of approximations for maximizing submodular set functions II. Mathematical Programming Studies, 8:73–87, 1978.
[19] Victor P. Il'ev. An approximation guarantee of the greedy descent algorithm for minimizing a supermodular set function. Discrete Applied Mathematics, 114(1-3):131–146, October 2001.
[20] A.K. Kelmans. Multiplicative submodularity of a matrix's principal minor as a function of the set of its rows. Discrete Mathematics, 44(1):113–116, 1983.
[21] David Kempe, Jon M. Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In Proc. 9th KDD, pages 137–146, 2003.

[22] C.W. Ko, Jon Lee, and Maurice Queyranne. An exact algorithm for maximum entropy sampling. Operations Research, 43(4):684–691, 1996.
[23] A. Krause and C. Guestrin. Submodularity and its applications in optimized information gathering. ACM Trans. on Intelligent Systems and Technology, 2(4):32, 2011.
[24] A. Krause, C. Guestrin, A. Gupta, and J. Kleinberg. Near-optimal sensor placements: maximizing information while minimizing communication cost. In Proc. 5th IPSN, pages 2–10, 2006.
[25] A. Krause, A. Singh, and C. Guestrin. Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. J. Machine Learning Research, 9, 2008.
[26] Andreas Krause, Ram Rajagopal, Anupam Gupta, and Carlos Guestrin. Simultaneous placement and scheduling of sensors. In Proc. 8th IPSN, pages 181–192, 2009.
[27] Jon Lee. Maximum entropy sampling. Encyclopedia of Environmetrics, 3, 2002.
[28] B. Lehmann, D.J. Lehmann, and N. Nisan. Combinatorial auctions with decreasing marginal utilities. Games and Economic Behavior, 55, 2006.
[29] G.L. Nemhauser and L.A. Wolsey. Best algorithms for approximating the maximum of a submodular set function. Mathematics of Operations Research, 3(3):177–188, 1978.
[30] G.L. Nemhauser, L.A. Wolsey, and M.L. Fisher. An analysis of approximations for maximizing submodular set functions I. Mathematical Programming, 14(1), 1978.
[31] Alexander Schrijver. Combinatorial Optimization: Polyhedra and Efficiency. Springer, 2003.
[32] Jan Vondrák. Optimal approximation for the submodular welfare problem in the value oracle model. In Proc. 40th STOC, pages 67–74, 2008.
[33] Jan Vondrák. Symmetry and approximability of submodular maximization problems. In Proc. 50th FOCS, 2009.
[34] Jan Vondrák. Submodularity and curvature: the optimal algorithm. In RIMS Kokyuroku Bessatsu, volume B23, Kyoto, 2010.

A Proofs and Claims Omitted from the Main Body

Lemma 2.1. If f : 2^X → R≥0 is a monotone increasing submodular function with total curvature at most c, then

Σ_{j∈A} f_{X−j}(j) ≥ (1 − c) f(A)

for all A ⊆ X.

Proof.
We order the elements of X arbitrarily, and let A_j be the set containing all those elements of A that precede the element j. Then, Σ_{j∈A} f_{A_j}(j) = f(A) − f(∅). From (1.1), we have f_{X−j}(j)/f_∅(j) ≥ 1 − c which, since f_∅(j) ≥ 0, is equivalent to f_{X−j}(j) ≥ (1 − c) f_∅(j), for each j ∈ A. Because f is submodular, we have f_∅(j) ≥ f_{A_j}(j) for all j, and so

Σ_{j∈A} f_{X−j}(j) ≥ (1 − c) Σ_{j∈A} f_∅(j) ≥ (1 − c) Σ_{j∈A} f_{A_j}(j) = (1 − c)[f(A) − f(∅)] = (1 − c) f(A).

Lemma 2.2. If f : 2^X → R≥0 is a monotone decreasing supermodular function with total curvature at most c, then

−(1 − c) Σ_{j∈A} f_∅(j) ≤ f(X \ A)

for all A ⊆ X.

Proof. Order A arbitrarily, and let A_j be the set of all elements in A that precede element j, including j itself. Then, Σ_{j∈A} f_{X\A_j}(j) = f(X) − f(X \ A). From (1.1), we have f_{X−j}(j)/f_∅(j) ≥ 1 − c, which, since f_∅(j) ≤ 0, is equivalent to f_{X−j}(j) ≤ (1 − c) f_∅(j). Then, since f is supermodular, we have f_{X\A_j}(j) ≤ f_{X−j}(j) for all j ∈ A, and so

−(1 − c) Σ_{j∈A} f_∅(j) ≤ −Σ_{j∈A} f_{X−j}(j) ≤ −Σ_{j∈A} f_{X\A_j}(j) = f(X \ A) − f(X) ≤ f(X \ A).

Lemma A.1. Let f : 2^X → R≥0 be a monotone increasing submodular function and define l(A) = Σ_{j∈A} f_{X−j}(j) and g(A) = f(A) − l(A). Then, g is submodular, monotone increasing, and nonnegative.

Proof. The function g is the sum of a submodular function f and a linear function −l, and so must be submodular. For any set A ⊆ X and element j ∉ A, g_A(j) = f_A(j) − f_{X−j}(j) ≥ 0 since f is submodular. Thus, g is monotone increasing. Finally, we note that g(∅) = f(∅) − l(∅) = f(∅) ≥ 0 and so g must be nonnegative.

Lemma A.2. Let f : 2^X → R≥0 be a monotone decreasing supermodular function and define l(A) = Σ_{j∈A} f_∅(j) and g(A) = −l(A) − f(X \ A). Then, g is submodular, monotone increasing, and nonnegative.
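As a quick sanity check of Lemma 2.2, the sketch below evaluates the inequality exhaustively for a toy monotone decreasing supermodular function f(A) = (n − |A|)² (an illustrative example, not from the paper; note f(X) = 0):

```python
from itertools import combinations

n = 5
X = frozenset(range(n))

def f(A):                      # monotone decreasing, supermodular, f(X) = 0
    return (n - len(A)) ** 2

def marginal(A, j):            # f_A(j) = f(A + j) - f(A), nonpositive here
    return f(set(A) | {j}) - f(A)

# Curvature of a decreasing supermodular function:
# c = 1 - min_j f_{X-j}(j) / f_0(j).
c = 1 - min(marginal(X - {j}, j) / marginal(set(), j) for j in X)

for r in range(n + 1):
    for A in combinations(range(n), r):
        lhs = -(1 - c) * sum(marginal(set(), j) for j in A)
        assert lhs <= f(X - set(A)) + 1e-9      # Lemma 2.2
```

For this f the bound is tight on singletons: both sides equal 1 when |A| = 1.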


More information

Discrete Bessel functions and partial difference equations

Discrete Bessel functions and partial difference equations Disrete Bessel funtions and partial differene equations Antonín Slavík Charles University, Faulty of Mathematis and Physis, Sokolovská 83, 186 75 Praha 8, Czeh Republi E-mail: slavik@karlin.mff.uni.z Abstrat

More information

On Component Order Edge Reliability and the Existence of Uniformly Most Reliable Unicycles

On Component Order Edge Reliability and the Existence of Uniformly Most Reliable Unicycles Daniel Gross, Lakshmi Iswara, L. William Kazmierzak, Kristi Luttrell, John T. Saoman, Charles Suffel On Component Order Edge Reliability and the Existene of Uniformly Most Reliable Uniyles DANIEL GROSS

More information

Complementarities in Spectrum Markets

Complementarities in Spectrum Markets Complementarities in Spetrum Markets Hang Zhou, Randall A. Berry, Mihael L. Honig and Rakesh Vohra EECS Department Northwestern University, Evanston, IL 6008 {hang.zhou, rberry, mh}@ees.northwestern.edu

More information

A Unified View on Multi-class Support Vector Classification Supplement

A Unified View on Multi-class Support Vector Classification Supplement Journal of Mahine Learning Researh??) Submitted 7/15; Published?/?? A Unified View on Multi-lass Support Vetor Classifiation Supplement Ürün Doğan Mirosoft Researh Tobias Glasmahers Institut für Neuroinformatik

More information

Simplification of Network Dynamics in Large Systems

Simplification of Network Dynamics in Large Systems Simplifiation of Network Dynamis in Large Systems Xiaojun Lin and Ness B. Shroff Shool of Eletrial and Computer Engineering Purdue University, West Lafayette, IN 47906, U.S.A. Email: {linx, shroff}@en.purdue.edu

More information

the following action R of T on T n+1 : for each θ T, R θ : T n+1 T n+1 is defined by stated, we assume that all the curves in this paper are defined

the following action R of T on T n+1 : for each θ T, R θ : T n+1 T n+1 is defined by stated, we assume that all the curves in this paper are defined How should a snake turn on ie: A ase study of the asymptoti isoholonomi problem Jianghai Hu, Slobodan N. Simić, and Shankar Sastry Department of Eletrial Engineering and Computer Sienes University of California

More information

Frugality Ratios And Improved Truthful Mechanisms for Vertex Cover

Frugality Ratios And Improved Truthful Mechanisms for Vertex Cover Frugality Ratios And Improved Truthful Mehanisms for Vertex Cover Edith Elkind Hebrew University of Jerusalem, Israel, and University of Southampton, Southampton, SO17 1BJ, U.K. Leslie Ann Goldberg University

More information

Optimization of Submodular Functions Tutorial - lecture I

Optimization of Submodular Functions Tutorial - lecture I Optimization of Submodular Functions Tutorial - lecture I Jan Vondrák 1 1 IBM Almaden Research Center San Jose, CA Jan Vondrák (IBM Almaden) Submodular Optimization Tutorial 1 / 1 Lecture I: outline 1

More information

LECTURE NOTES FOR , FALL 2004

LECTURE NOTES FOR , FALL 2004 LECTURE NOTES FOR 18.155, FALL 2004 83 12. Cone support and wavefront set In disussing the singular support of a tempered distibution above, notie that singsupp(u) = only implies that u C (R n ), not as

More information

Bilinear Formulated Multiple Kernel Learning for Multi-class Classification Problem

Bilinear Formulated Multiple Kernel Learning for Multi-class Classification Problem Bilinear Formulated Multiple Kernel Learning for Multi-lass Classifiation Problem Takumi Kobayashi and Nobuyuki Otsu National Institute of Advaned Industrial Siene and Tehnology, -- Umezono, Tsukuba, Japan

More information

Frequency hopping does not increase anti-jamming resilience of wireless channels

Frequency hopping does not increase anti-jamming resilience of wireless channels Frequeny hopping does not inrease anti-jamming resiliene of wireless hannels Moritz Wiese and Panos Papadimitratos Networed Systems Seurity Group KTH Royal Institute of Tehnology, Stoholm, Sweden {moritzw,

More information

c-perfect Hashing Schemes for Binary Trees, with Applications to Parallel Memories

c-perfect Hashing Schemes for Binary Trees, with Applications to Parallel Memories -Perfet Hashing Shemes for Binary Trees, with Appliations to Parallel Memories (Extended Abstrat Gennaro Cordaso 1, Alberto Negro 1, Vittorio Sarano 1, and Arnold L.Rosenberg 2 1 Dipartimento di Informatia

More information

Volume 29, Issue 3. On the definition of nonessentiality. Udo Ebert University of Oldenburg

Volume 29, Issue 3. On the definition of nonessentiality. Udo Ebert University of Oldenburg Volume 9, Issue 3 On the definition of nonessentiality Udo Ebert University of Oldenburg Abstrat Nonessentiality of a good is often used in welfare eonomis, ost-benefit analysis and applied work. Various

More information

arxiv: v2 [cs.dm] 4 May 2018

arxiv: v2 [cs.dm] 4 May 2018 Disrete Morse theory for the ollapsibility of supremum setions Balthazar Bauer INRIA, DIENS, PSL researh, CNRS, Paris, Frane Luas Isenmann LIRMM, Université de Montpellier, CNRS, Montpellier, Frane arxiv:1803.09577v2

More information

(q) -convergence. Comenius University, Bratislava, Slovakia

(q) -convergence.   Comenius University, Bratislava, Slovakia Annales Mathematiae et Informatiae 38 (2011) pp. 27 36 http://ami.ektf.hu On I (q) -onvergene J. Gogola a, M. Mačaj b, T. Visnyai b a University of Eonomis, Bratislava, Slovakia e-mail: gogola@euba.sk

More information

On the Complexity of the Weighted Fused Lasso

On the Complexity of the Weighted Fused Lasso ON THE COMPLEXITY OF THE WEIGHTED FUSED LASSO On the Compleity of the Weighted Fused Lasso José Bento jose.bento@b.edu Ralph Furmaniak rf@am.org Surjyendu Ray rays@b.edu Abstrat The solution path of the

More information

The Power of Local Search: Maximum Coverage over a Matroid

The Power of Local Search: Maximum Coverage over a Matroid The Power of Local Search: Maximum Coverage over a Matroid Yuval Filmus 1,2 and Justin Ward 1 1 Department of Computer Science, University of Toronto {yuvalf,jward}@cs.toronto.edu 2 Supported by NSERC

More information

A new initial search direction for nonlinear conjugate gradient method

A new initial search direction for nonlinear conjugate gradient method International Journal of Mathematis Researh. ISSN 0976-5840 Volume 6, Number 2 (2014), pp. 183 190 International Researh Publiation House http://www.irphouse.om A new initial searh diretion for nonlinear

More information

On the Licensing of Innovations under Strategic Delegation

On the Licensing of Innovations under Strategic Delegation On the Liensing of Innovations under Strategi Delegation Judy Hsu Institute of Finanial Management Nanhua University Taiwan and X. Henry Wang Department of Eonomis University of Missouri USA Abstrat This

More information

No Time to Observe: Adaptive Influence Maximization with Partial Feedback

No Time to Observe: Adaptive Influence Maximization with Partial Feedback Proeedings of the Twenty-Sixth International Joint Conferene on Artifiial Intelligene IJCAI-17 No Time to Observe: Adaptive Influene Maximization with Partial Feedbak Jing Yuan Department of Computer Siene

More information

Counting Idempotent Relations

Counting Idempotent Relations Counting Idempotent Relations Beriht-Nr. 2008-15 Florian Kammüller ISSN 1436-9915 2 Abstrat This artile introdues and motivates idempotent relations. It summarizes haraterizations of idempotents and their

More information

A Spatiotemporal Approach to Passive Sound Source Localization

A Spatiotemporal Approach to Passive Sound Source Localization A Spatiotemporal Approah Passive Sound Soure Loalization Pasi Pertilä, Mikko Parviainen, Teemu Korhonen and Ari Visa Institute of Signal Proessing Tampere University of Tehnology, P.O.Box 553, FIN-330,

More information

A (k + 3)/2-approximation algorithm for monotone submodular k-set packing and general k-exchange systems

A (k + 3)/2-approximation algorithm for monotone submodular k-set packing and general k-exchange systems A (k + 3)/-approximation algorithm for monotone submodular k-set packing and general k-exchange systems Justin Ward Department of Computer Science, University of Toronto Toronto, Canada jward@cs.toronto.edu

More information

Understanding Elementary Landscapes

Understanding Elementary Landscapes Understanding Elementary Landsapes L. Darrell Whitley Andrew M. Sutton Adele E. Howe Department of Computer Siene Colorado State University Fort Collins, CO 853 {whitley,sutton,howe}@s.olostate.edu ABSTRACT

More information

Integration of the Finite Toda Lattice with Complex-Valued Initial Data

Integration of the Finite Toda Lattice with Complex-Valued Initial Data Integration of the Finite Toda Lattie with Complex-Valued Initial Data Aydin Huseynov* and Gusein Sh Guseinov** *Institute of Mathematis and Mehanis, Azerbaijan National Aademy of Sienes, AZ4 Baku, Azerbaijan

More information

Transformation to approximate independence for locally stationary Gaussian processes

Transformation to approximate independence for locally stationary Gaussian processes ransformation to approximate independene for loally stationary Gaussian proesses Joseph Guinness, Mihael L. Stein We provide new approximations for the likelihood of a time series under the loally stationary

More information

Where as discussed previously we interpret solutions to this partial differential equation in the weak sense: b

Where as discussed previously we interpret solutions to this partial differential equation in the weak sense: b Consider the pure initial value problem for a homogeneous system of onservation laws with no soure terms in one spae dimension: Where as disussed previously we interpret solutions to this partial differential

More information

Modeling of Threading Dislocation Density Reduction in Heteroepitaxial Layers

Modeling of Threading Dislocation Density Reduction in Heteroepitaxial Layers A. E. Romanov et al.: Threading Disloation Density Redution in Layers (II) 33 phys. stat. sol. (b) 99, 33 (997) Subjet lassifiation: 6.72.C; 68.55.Ln; S5.; S5.2; S7.; S7.2 Modeling of Threading Disloation

More information

Word of Mass: The Relationship between Mass Media and Word-of-Mouth

Word of Mass: The Relationship between Mass Media and Word-of-Mouth Word of Mass: The Relationship between Mass Media and Word-of-Mouth Roman Chuhay Preliminary version Marh 6, 015 Abstrat This paper studies the optimal priing and advertising strategies of a firm in the

More information

arxiv:math/ v1 [math.ca] 27 Nov 2003

arxiv:math/ v1 [math.ca] 27 Nov 2003 arxiv:math/011510v1 [math.ca] 27 Nov 200 Counting Integral Lamé Equations by Means of Dessins d Enfants Sander Dahmen November 27, 200 Abstrat We obtain an expliit formula for the number of Lamé equations

More information

Computer Science 786S - Statistical Methods in Natural Language Processing and Data Analysis Page 1

Computer Science 786S - Statistical Methods in Natural Language Processing and Data Analysis Page 1 Computer Siene 786S - Statistial Methods in Natural Language Proessing and Data Analysis Page 1 Hypothesis Testing A statistial hypothesis is a statement about the nature of the distribution of a random

More information

Simplified Buckling Analysis of Skeletal Structures

Simplified Buckling Analysis of Skeletal Structures Simplified Bukling Analysis of Skeletal Strutures B.A. Izzuddin 1 ABSRAC A simplified approah is proposed for bukling analysis of skeletal strutures, whih employs a rotational spring analogy for the formulation

More information

EDGE-DISJOINT CLIQUES IN GRAPHS WITH HIGH MINIMUM DEGREE

EDGE-DISJOINT CLIQUES IN GRAPHS WITH HIGH MINIMUM DEGREE EDGE-DISJOINT CLIQUES IN GRAPHS WITH HIGH MINIMUM DEGREE RAPHAEL YUSTER Abstrat For a graph G and a fixed integer k 3, let ν k G) denote the maximum number of pairwise edge-disjoint opies of K k in G For

More information

11.1 Polynomial Least-Squares Curve Fit

11.1 Polynomial Least-Squares Curve Fit 11.1 Polynomial Least-Squares Curve Fit A. Purpose This subroutine determines a univariate polynomial that fits a given disrete set of data in the sense of minimizing the weighted sum of squares of residuals.

More information

Danielle Maddix AA238 Final Project December 9, 2016

Danielle Maddix AA238 Final Project December 9, 2016 Struture and Parameter Learning in Bayesian Networks with Appliations to Prediting Breast Caner Tumor Malignany in a Lower Dimension Feature Spae Danielle Maddix AA238 Final Projet Deember 9, 2016 Abstrat

More information

The Power of Local Search: Maximum Coverage over a Matroid

The Power of Local Search: Maximum Coverage over a Matroid The Power of Local Search: Maximum Coverage over a Matroid Yuval Filmus,2 and Justin Ward Department of Computer Science, University of Toronto {yuvalf,jward}@cs.toronto.edu 2 Supported by NSERC Abstract

More information

Feature Selection by Independent Component Analysis and Mutual Information Maximization in EEG Signal Classification

Feature Selection by Independent Component Analysis and Mutual Information Maximization in EEG Signal Classification Feature Seletion by Independent Component Analysis and Mutual Information Maximization in EEG Signal Classifiation Tian Lan, Deniz Erdogmus, Andre Adami, Mihael Pavel BME Department, Oregon Health & Siene

More information

1 sin 2 r = 1 n 2 sin 2 i

1 sin 2 r = 1 n 2 sin 2 i Physis 505 Fall 005 Homework Assignment #11 Solutions Textbook problems: Ch. 7: 7.3, 7.5, 7.8, 7.16 7.3 Two plane semi-infinite slabs of the same uniform, isotropi, nonpermeable, lossless dieletri with

More information

Coding for Random Projections and Approximate Near Neighbor Search

Coding for Random Projections and Approximate Near Neighbor Search Coding for Random Projetions and Approximate Near Neighbor Searh Ping Li Department of Statistis & Biostatistis Department of Computer Siene Rutgers University Pisataay, NJ 8854, USA pingli@stat.rutgers.edu

More information

Time Domain Method of Moments

Time Domain Method of Moments Time Domain Method of Moments Massahusetts Institute of Tehnology 6.635 leture notes 1 Introdution The Method of Moments (MoM) introdued in the previous leture is widely used for solving integral equations

More information

HILLE-KNESER TYPE CRITERIA FOR SECOND-ORDER DYNAMIC EQUATIONS ON TIME SCALES

HILLE-KNESER TYPE CRITERIA FOR SECOND-ORDER DYNAMIC EQUATIONS ON TIME SCALES HILLE-KNESER TYPE CRITERIA FOR SECOND-ORDER DYNAMIC EQUATIONS ON TIME SCALES L ERBE, A PETERSON AND S H SAKER Abstrat In this paper, we onsider the pair of seond-order dynami equations rt)x ) ) + pt)x

More information

CORC Report TR : Short Version Optimal Procurement Mechanisms for Divisible Goods with Capacitated Suppliers

CORC Report TR : Short Version Optimal Procurement Mechanisms for Divisible Goods with Capacitated Suppliers CORC Report TR-2006-01: Short Version Optimal Prourement Mehanisms for Divisible Goods with Capaitated Suppliers Garud Iyengar Anuj Kumar First version: June 30, 2006 This version: August 31, 2007 Abstrat

More information

KRANNERT GRADUATE SCHOOL OF MANAGEMENT

KRANNERT GRADUATE SCHOOL OF MANAGEMENT KRANNERT GRADUATE SCHOOL OF MANAGEMENT Purdue University West Lafayette, Indiana A Comment on David and Goliath: An Analysis on Asymmetri Mixed-Strategy Games and Experimental Evidene by Emmanuel Dehenaux

More information

Tests of fit for symmetric variance gamma distributions

Tests of fit for symmetric variance gamma distributions Tests of fit for symmetri variane gamma distributions Fragiadakis Kostas UADPhilEon, National and Kapodistrian University of Athens, 4 Euripidou Street, 05 59 Athens, Greee. Keywords: Variane Gamma Distribution,

More information

SURFACE WAVES OF NON-RAYLEIGH TYPE

SURFACE WAVES OF NON-RAYLEIGH TYPE SURFACE WAVES OF NON-RAYLEIGH TYPE by SERGEY V. KUZNETSOV Institute for Problems in Mehanis Prosp. Vernadskogo, 0, Mosow, 75 Russia e-mail: sv@kuznetsov.msk.ru Abstrat. Existene of surfae waves of non-rayleigh

More information

Aharonov-Bohm effect. Dan Solomon.

Aharonov-Bohm effect. Dan Solomon. Aharonov-Bohm effet. Dan Solomon. In the figure the magneti field is onfined to a solenoid of radius r 0 and is direted in the z- diretion, out of the paper. The solenoid is surrounded by a barrier that

More information

A variant of Coppersmith s Algorithm with Improved Complexity and Efficient Exhaustive Search

A variant of Coppersmith s Algorithm with Improved Complexity and Efficient Exhaustive Search A variant of Coppersmith s Algorithm with Improved Complexity and Effiient Exhaustive Searh Jean-Sébastien Coron 1, Jean-Charles Faugère 2, Guénaël Renault 2, and Rina Zeitoun 2,3 1 University of Luxembourg

More information

Moments and Wavelets in Signal Estimation

Moments and Wavelets in Signal Estimation Moments and Wavelets in Signal Estimation Edward J. Wegman 1 Center for Computational Statistis George Mason University Hung T. Le 2 International usiness Mahines Abstrat: The problem of generalized nonparametri

More information

THE NUMBER FIELD SIEVE FOR INTEGERS OF LOW WEIGHT

THE NUMBER FIELD SIEVE FOR INTEGERS OF LOW WEIGHT MATHEMATICS OF COMPUTATION Volume 79, Number 269, January 2010, Pages 583 602 S 0025-5718(09)02198-X Artile eletronially published on July 27, 2009 THE NUMBER FIELD SIEVE FOR INTEGERS OF LOW WEIGHT OLIVER

More information

Optimization of Statistical Decisions for Age Replacement Problems via a New Pivotal Quantity Averaging Approach

Optimization of Statistical Decisions for Age Replacement Problems via a New Pivotal Quantity Averaging Approach Amerian Journal of heoretial and Applied tatistis 6; 5(-): -8 Published online January 7, 6 (http://www.sienepublishinggroup.om/j/ajtas) doi:.648/j.ajtas.s.65.4 IN: 36-8999 (Print); IN: 36-96 (Online)

More information

Weighted K-Nearest Neighbor Revisited

Weighted K-Nearest Neighbor Revisited Weighted -Nearest Neighbor Revisited M. Biego University of Verona Verona, Italy Email: manuele.biego@univr.it M. Loog Delft University of Tehnology Delft, The Netherlands Email: m.loog@tudelft.nl Abstrat

More information