Mixed Robust/Average Submodular Partitioning


Kai Wei¹  Rishabh Iyer¹  Shengjie Wang²  Wenruo Bai¹  Jeff Bilmes¹
¹Department of Electrical Engineering, University of Washington
²Department of Computer Science, University of Washington
{kaiwei, rkiyer, wangsj, wrbai, bilmes}@u.washington.edu

Abstract

We investigate two novel mixed robust/average-case submodular data partitioning problems that we collectively call Submodular Partitioning. These problems generalize purely robust instances of the problem, namely max-min submodular fair allocation (SFA) [8] and min-max submodular load balancing (SLB) [15], and also average-case instances, namely the submodular welfare problem (SWP) [16] and submodular multiway partition (SMP) [2]. While the robust versions have been studied in the theory community [7, 8, 11, 15, 16], existing work has focused on tight approximation guarantees, and the resulting algorithms are generally not scalable to large real-world applications. This is in contrast to the average case, where most of the algorithms are scalable. In the present paper, we bridge this gap by proposing several new algorithms (including greedy and relaxation algorithms) that not only scale to large datasets but also achieve theoretical approximation guarantees comparable to the state of the art. We moreover provide new scalable algorithms that apply to additive combinations of the robust and average-case objectives.

1 Introduction

This paper studies two new mixed robust/average-case partitioning problems of the following form:

Problem 1: $\max_{\pi \in \Pi} \Big[ \bar\lambda \min_i f_i(A_i^\pi) + \frac{\lambda}{m} \sum_{j=1}^m f_j(A_j^\pi) \Big]$,    Problem 2: $\min_{\pi \in \Pi} \Big[ \bar\lambda \max_i f_i(A_i^\pi) + \frac{\lambda}{m} \sum_{j=1}^m f_j(A_j^\pi) \Big]$,

where $0 \le \lambda \le 1$, $\bar\lambda \triangleq 1 - \lambda$, the set of sets $\pi = (A_1^\pi, A_2^\pi, \dots, A_m^\pi)$ is a partition of a finite set $V$ (i.e., $\bigcup_i A_i^\pi = V$ and $A_i^\pi \cap A_j^\pi = \emptyset$ for all $i \neq j$), and $\Pi$ refers to the set of all partitions of $V$ into $m$ blocks. The parameter λ controls the objective: λ = 1 is the average case, λ = 0 is the robust case, and 0 < λ < 1 is a mixed case. In general, Problems 1 and 2 are hopelessly intractable, even to approximate, but we assume that $f_1, f_2, \dots, f_m$ are all monotone non-decreasing (i.e., $f_i(S) \le f_i(T)$ whenever $S \subseteq T$), normalized ($f_i(\emptyset) = 0$), and submodular [6] (i.e., $f_i(S) + f_i(T) \ge f_i(S \cup T) + f_i(S \cap T)$ for all $S, T \subseteq V$). These assumptions allow us to develop fast, simple, and scalable algorithms that have approximation guarantees, as is done in this paper. These assumptions, moreover, allow us to retain the naturalness and applicability of Problems 1 and 2 to a wide variety of practical problems. Submodularity is a natural property in many real-world ML applications [13, 10, 12, 17]. When minimizing, submodularity naturally models notions of interacting costs and complexity, while when maximizing it readily models notions of diversity, summarization quality, and information. Hence, Problem 1 asks for a partition whose blocks each (and that collectively) are a good, say, summary of the whole. Problem 2, on the other hand, asks for a partition whose blocks each (and that collectively) are internally homogeneous (as is typical in clustering). Taken together, we call Problems 1 and 2 Submodular Partitioning. We further categorize these problems depending on whether the $f_i$'s are identical to each other (homogeneous) or not (heterogeneous).¹

¹Similar sub-categorizations have been called the uniform vs. the non-uniform case in the past [15, 7].
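To make the two objectives concrete, the following minimal sketch (ours, not part of the paper) evaluates both mixed objectives for a candidate partition. The neighborhood-coverage functions and all names (`mixed_objectives`, `nbrs`, `cover`) are illustrative assumptions, standing in for arbitrary normalized monotone submodular $f_i$'s:

```python
def mixed_objectives(partition, fns, lam):
    """Evaluate the Problem 1 and Problem 2 objectives for a partition.

    partition: list of m disjoint blocks (sets) covering the ground set V
    fns: list of m normalized monotone submodular set functions f_i
    lam: trade-off weight lambda in [0, 1]; lambda-bar = 1 - lam
    """
    m = len(fns)
    vals = [f(A) for f, A in zip(fns, partition)]
    avg = sum(vals) / m                       # (1/m) * sum_j f_j(A_j)
    p1 = (1 - lam) * min(vals) + lam * avg    # Problem 1 objective (to maximize)
    p2 = (1 - lam) * max(vals) + lam * avg    # Problem 2 objective (to minimize)
    return p1, p2

# Toy coverage functions f(A) = |union of item neighborhoods| (hypothetical data):
nbrs = {0: {0, 1}, 1: {1, 2}, 2: {2, 3}, 3: {3, 4}, 4: {0, 4}, 5: {1, 3}}
cover = lambda A: len(set().union(*(nbrs[j] for j in A))) if A else 0
print(mixed_objectives([{0, 1, 2}, {3, 4, 5}], [cover, cover], lam=0.5))
```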

Problem 1 (Max-(Min+Avg)) | Approximation factor
λ = 0, MATCHING [8] | $1/(n - m + 1)$
λ = 0, ELLIPSOID [7] | $O(\sqrt{m}\, n^{1/4} \log n \log^{3/2} m)$
λ = 0, BINSRCH [11] | $1/(2m - 1)$
λ = 1, GREEDWELFARE [5] | $1/2$
λ = 0, GREEDSAT* | $(1/2 - \delta,\ \frac{\delta}{1/2 + \delta})$
0 ≤ λ ≤ 1, GENERALGREEDSAT* | $\lambda/2$
λ = 0, Hardness | $1/2$ [8]
λ = 1, Hardness | $1 - 1/e$ [16]

Problem 2 (Min-(Max+Avg)) | Approximation factor
λ = 0, SAMPLING† [15] | $O(\sqrt{n/\log n})$
λ = 0, ELLIPSOID [7] | $O(\sqrt{n} \log n)$
λ = 1, GREEDSPLIT† [18, 14] | $2$
λ = 1, RELAX [2] | $O(\log n)$
λ = 0, LOVÁSZROUND* | $m$
0 ≤ λ ≤ 1, GENERALLOVÁSZROUND* | $m$
λ = 0, Hardness | $m$*
λ = 1, Hardness | $2 - 2/m$ [4]

Table 1: Summary of our contributions and existing work on Problems 1 and 2. Results obtained in this paper are marked with *; methods that apply only to the homogeneous setting are marked with †. See text for details.

Previous Work: Special cases of Problems 1 and 2 have appeared previously. Problem 1 with λ = 0 is called submodular fair allocation (SFA), and Problem 2 with λ = 0 is called submodular load balancing (SLB), robust optimization problems both of which have been studied previously. For SLB, even in the homogeneous setting, [15] show that the problem is information-theoretically hard to approximate to within $o(\sqrt{n / \log n})$. They give a sampling-based algorithm achieving $O(\sqrt{n / \log n})$ for the homogeneous setting. However, the sampling-based algorithm is neither practical nor scalable, since it involves solving, in the worst case, $O(n^3 \log n)$ instances of submodular function minimization. Another approach approximates each submodular function by its ellipsoid approximation (again non-scalable) and reduces SLB to its modular version (minimum makespan scheduling), leading to an approximation factor of $O(\sqrt{n} \log n)$ [7]. SFA, on the other hand, has been studied mostly in the heterogeneous setting. When the $f_i$'s are all modular, the tightest algorithm so far iteratively rounds an LP solution, achieving an $O(1/(\sqrt{m} \log^3 m))$ approximation [1], whereas the problem is NP-hard to $(1/2 + \epsilon)$-approximate for any $\epsilon > 0$ [8]. When the $f_i$'s are submodular, [8] gives a matching-based algorithm with a factor $1/(n - m + 1)$ approximation, which performs poorly when $m \ll n$. [11] proposes a binary search algorithm yielding an improved factor of $1/(2m - 1)$. Similarly to SLB, [7] applies the same ellipsoid approximation techniques, leading to a factor of $O(\sqrt{m}\, n^{1/4} \log n \log^{3/2} m)$. These approaches are theoretically interesting, but they do not scale to large problems. Problems 1 and 2 with λ = 1 have also been studied previously. Problem 2 becomes the submodular multiway partition (SMP), for which one can obtain a relaxation-based 2-approximation in the homogeneous case [2]. In the heterogeneous case, the guarantee is $O(\log n)$ [3]. Similarly, [18, 14] propose a greedy splitting 2-approximation algorithm for the homogeneous setting. Problem 1 becomes the submodular welfare problem [16], for which a scalable greedy algorithm achieves a 1/2-approximation [5]. Unlike in the worst case (λ = 0), many of the algorithms proposed for these problems are scalable. The general case (0 < λ < 1) of Problems 1 and 2 differs from either of these extreme cases, since we wish for a partitioning that is simultaneously robust and good on average, and controlling λ allows one to trade off between the two.

Our contributions: In contrast to Problems 1 and 2 in the average case (i.e., λ = 1), existing algorithms for the worst case (λ = 0) are not scalable. This paper closes this gap by proposing new classes of algorithmic frameworks to solve SFA and SLB: (1) greedy algorithms; and (2) a Lovász extension based relaxation algorithm. For SFA under the heterogeneous setting, we propose a saturate greedy algorithm (GREEDSAT) that iteratively solves instances of the submodular welfare problem.
We show GREEDSAT has a bi-criterion guarantee of $(1/2 - \delta, \frac{\delta}{1/2 + \delta})$, which ensures that at least $\lceil (1/2 - \delta) m \rceil$ blocks receive utility at least $\frac{\delta}{1/2 + \delta} \mathrm{OPT}$ for any $0 < \delta < 1/2$. For SLB, we first generalize the hardness result in [15] and show that it is hard to approximate better than $m$ for any $m = o(\sqrt{n / \log n})$, even in the homogeneous setting. We then give a Lovász extension based relaxation algorithm (LOVÁSZROUND) yielding a tight factor of $m$ for the heterogeneous setting. As far as we know, this is the first algorithm achieving a factor of $m$ for SLB in this setting. Next, we show algorithms that handle the generalizations of SFA and SLB to Problems 1 and 2. In particular, we generalize GREEDSAT, leading to GENERALGREEDSAT, to solve Problem 1. GENERALGREEDSAT provides a guarantee that smoothly interpolates in terms of λ between the bi-criterion factor of GREEDSAT in the case λ = 0 and the constant factor of 1/2 of the greedy algorithm in the case λ = 1. For Problem 2, we introduce GENERALLOVÁSZROUND, which generalizes LOVÁSZROUND and obtains an $m$-approximation for general λ. The theoretical contributions and the existing work for Problems 1 and 2 are summarized in Table 1.

2 Robust Submodular Partitioning (Problems 1 and 2 when λ = 0)

Notation: we define $f(j \mid S) \triangleq f(S \cup \{j\}) - f(S)$ as the gain of $j \in V$ in the context of $S \subseteq V$. We assume w.l.o.g. that the ground set is $V = \{1, 2, \dots, n\}$.
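Both GREEDSAT (introduced below) and its generalization repeatedly call a subroutine for the submodular welfare problem. The following is a minimal sketch (ours, with our own function name `greedy_welfare`) of the 1/2-approximate greedy of [5], written with the gain notation just defined:

```python
def greedy_welfare(fns, V):
    """Greedy for SWP: approximately maximize sum_i f_i(A_i) over partitions.

    Each item j in V (taken in a fixed order) is assigned to the block whose
    marginal gain f_i(j | A_i) = f_i(A_i + {j}) - f_i(A_i) is largest.
    Achieves a 1/2-approximation for monotone submodular f_i [5].
    """
    blocks = [set() for _ in fns]
    for j in V:
        gains = [f(A | {j}) - f(A) for f, A in zip(fns, blocks)]
        blocks[gains.index(max(gains))].add(j)
    return blocks
```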

GREEDSAT for SFA (Problem 1 with λ = 0): We give a greedy style algorithm, Saturate Greedy, to solve SFA (GREEDSAT, see Alg. 1). Similar in flavor to the one proposed in [12], GREEDSAT defines an intermediate objective $\bar F^c(\pi) = \sum_{i=1}^m \bar f_i^c(A_i^\pi)$, where $\bar f_i^c(A) = \frac{1}{m} \min\{f_i(A), c\}$ (Line 2). The parameter $c$ controls the saturation in each block. Each $\bar f_i^c$ satisfies submodularity. Unlike SFA, the combinatorial optimization problem $\max_{\pi \in \Pi} \bar F^c(\pi)$ (Line 6) is much easier and is an instance of the submodular welfare problem (SWP) [16].

Algorithm 1: GREEDSAT
1: Input: $\{f_i\}_{i=1}^m$, $m$, $V$, $\alpha$.
2: Let $\bar F^c(\pi) = \frac{1}{m} \sum_{i=1}^m \min\{f_i(A_i^\pi), c\}$.
3: Let $c_{\min} = 0$, $c_{\max} = \min_i f_i(V)$.
4: while $c_{\max} - c_{\min} \ge \epsilon$ do
5:   $c = \frac{1}{2}(c_{\max} + c_{\min})$
6:   $\hat\pi^c \in \operatorname{argmax}_{\pi \in \Pi} \bar F^c(\pi)$
7:   if $\bar F^c(\hat\pi^c) < \alpha c$ then
8:     $c_{\max} = c$
9:   else
10:    $c_{\min} = c$; $\hat\pi \leftarrow \hat\pi^c$
11:  end if
12: end while
13: Output: $\hat\pi$.

In this work, we solve Line 6 by the efficient greedy algorithm described in [5], which has a factor of 1/2. It should be clear that one could also implement the computationally more expensive multilinear relaxation algorithm given in [16] to solve it with the tight factor $(1 - 1/e)$. Setting the input argument α = 1/2, the approximation factor for Line 6, the essential idea of GREEDSAT is to perform a binary search over the parameter $c$ to find the largest $c$ such that the returned solution $\hat\pi^c$ for the instance of SWP satisfies $\bar F^c(\hat\pi^c) \ge \alpha c$. GREEDSAT terminates after solving $O(\log(\frac{\min_i f_i(V)}{\epsilon}))$ instances of SWP. Theorem 2.1 gives a bi-criterion optimality guarantee.

Theorem 2.1. Given $\epsilon > 0$, $0 \le \alpha \le 1$, and any $0 < \delta < \alpha$, GREEDSAT finds a partition such that at least $\lceil (\alpha - \delta) m \rceil$ blocks receive utility at least $\frac{\delta}{1 - \alpha + \delta} (\max_{\pi \in \Pi} \min_i f_i(A_i^\pi) - \epsilon)$.

For any $0 < \delta < \alpha$, Theorem 2.1 ensures that the top $\lceil (\alpha - \delta) m \rceil$ valued blocks in the partition returned by GREEDSAT are $(\frac{\delta}{1 - \alpha + \delta} - \epsilon)$-optimal. δ controls the trade-off between the number of top valued blocks to bound and the performance guarantee attained for these blocks: the smaller δ is, the more top blocks are bounded, but with a weaker guarantee. We set the input argument α = 1/2, the worst-case performance guarantee for solving SWP, so that the above theoretical analysis follows. However, the worst-case factor is often achieved only by very contrived examples of submodular functions; for the ones used in practice, the greedy algorithm often leads to near-optimal solutions [12]. Setting α as the actual performance guarantee for SWP (often very close to 1), the bound can be significantly improved. In practice, we suggest setting α = 1.
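A compact rendering (ours, not the authors' implementation) of Alg. 1's binary search, using the hypothetical `greedy_welfare` sketch above as the Line 6 oracle; the tolerance and tie-breaking details are our assumptions:

```python
def greedsat(fns, V, alpha=0.5, eps=1e-3):
    """Sketch of Alg. 1: binary search over the saturation threshold c."""
    m = len(fns)
    c_min, c_max = 0.0, min(f(set(V)) for f in fns)
    best = [set() for _ in fns]
    while c_max - c_min >= eps:
        c = 0.5 * (c_max + c_min)
        # Saturated objective: bar f_i^c(A) = (1/m) * min{f_i(A), c}  (Line 2)
        sat = [lambda A, f=f, c=c: min(f(A), c) / m for f in fns]
        pi = greedy_welfare(sat, V)  # Line 6: solve the SWP instance greedily
        if sum(g(A) for g, A in zip(sat, pi)) < alpha * c:
            c_max = c                # saturation level c not achievable: lower it
        else:
            c_min, best = c, pi      # achievable: keep the partition, raise c
    return best
```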
LOVÁSZROUND for SLB (Problem 2 with λ = 0): In this section, we investigate the problem of SLB. The existing hardness result in [15] is $\sqrt{n / \log n}$, which is independent of $m$ and implicitly assumes that $m = \Theta(\sqrt{n / \log n})$. However, in the applications of SLB, $m$ often depends on $n$ and is often such that $m \ll n$. We offer a hardness analysis in terms of $m$ in the following theorem.

Theorem 2.2. For any $\epsilon > 0$, SLB cannot be approximated to a factor of $m(1 - \epsilon)$ for any $m = o(\sqrt{n / \log n})$ with a polynomial number of queries, even in the homogeneous setting.

For the rest of the paper, we assume $m = o(\sqrt{n / \log n})$ for SLB, unless stated otherwise. Theorem 2.2 implies that SLB is hard to approximate better than $m$. However, an arbitrary partition $\pi \in \Pi$ already achieves the best approximation factor of $m$ that one can hope for in the homogeneous setting, since $\max_i f(A_i^\pi) \le f(V) \le \sum_i f(A_i^\pi) \le m \max_i f(A_i^\pi)$ for any $\pi \in \Pi$. Therefore, we consider only the heterogeneous setting for SLB, for which we propose a tight algorithm, LOVÁSZROUND (see Alg. 2). The algorithm proceeds as follows: (1) apply the Lovász extension of submodular functions to relax SLB to a convex program, which is exactly solved for a fractional solution (Line 2); (2) map the fractional solution to a partition using the θ-rounding technique proposed in [9] (Lines 3-6).

Algorithm 2: LOVÁSZROUND
1: Input: $\{f_i\}_{i=1}^m$, $\{\hat f_i\}_{i=1}^m$, $m$, $V$.
2: Solve for $\{x_i^*\}_{i=1}^m$ via convex relaxation.
3: Rounding: let $A_1 = \emptyset, \dots, A_m = \emptyset$.
4: for $j = 1, \dots, n$ do
5:   $\hat i \in \operatorname{argmax}_i x_i^*(j)$; $A_{\hat i} \leftarrow A_{\hat i} \cup \{j\}$
6: end for
7: Output $\{A_i\}_{i=1}^m$.

The Lovász extension, which naturally connects a submodular function $f$ with its convex relaxation $\hat f$, is defined as follows: given any $x \in [0, 1]^n$, we obtain a permutation $\sigma_x$ by ordering the elements of $x$ in non-increasing order, and thereby a chain of sets $S_0^{\sigma_x} \subseteq \dots \subseteq S_n^{\sigma_x}$ with $S_j^{\sigma_x} = \{\sigma_x(1), \dots, \sigma_x(j)\}$ for $j = 1, \dots, n$. The Lovász extension $\hat f$ of $f$ is the weighted sum of the ordered entries of $x$: $\hat f(x) = \sum_{j=1}^n x(\sigma_x(j)) \big( f(S_j^{\sigma_x}) - f(S_{j-1}^{\sigma_x}) \big)$.

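For reference, a short sketch (ours) of evaluating $\hat f$ at a point by the sorting formula above; `lovasz_extension` is our name, and `f` stands for any normalized set function on $\{0, \dots, n-1\}$:

```python
import numpy as np

def lovasz_extension(f, x):
    """Evaluate the Lovász extension f_hat(x) for x in [0,1]^n.

    Sorts x in non-increasing order (the permutation sigma_x) and sums
    x(sigma_x(j)) * (f(S_j) - f(S_{j-1})) along the induced chain of sets.
    """
    x = np.asarray(x, dtype=float)
    order = np.argsort(-x)            # sigma_x: indices by decreasing value
    S, prev, total = set(), 0.0, 0.0  # S_0 = empty set, f(S_0) = 0
    for j in order:
        S.add(int(j))                 # S_j = S_{j-1} + {sigma_x(j)}
        cur = f(S)
        total += x[j] * (cur - prev)
        prev = cur
    return total

# Sanity check: on an indicator vector the extension recovers f itself,
# i.e., lovasz_extension(f, 1_A) == f(A) for normalized f.
```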
Given the convexity of the $\hat f_i$'s, SLB is relaxed to the following convex program:

$\min_{x_1, \dots, x_m \in [0,1]^n} \max_i \hat f_i(x_i), \quad \text{s.t. } \sum_{i=1}^m x_i(j) \ge 1, \text{ for } j = 1, \dots, n \qquad (1)$

Denoting the optimal solution of Eqn. (1) as $\{x_1^*, \dots, x_m^*\}$, the θ-rounding step simply maps each item $j \in V$ to a block $\hat i \in \operatorname{argmax}_i x_i^*(j)$. We obtain the bound for LOVÁSZROUND as follows:

Theorem 2.3. LOVÁSZROUND achieves a worst-case approximation factor of $m$.

We remark that, to the best of our knowledge, LOVÁSZROUND is the first algorithm for SLB that is tight and that gives an approximation in terms of $m$ for the heterogeneous setting.

3 General Submodular Partitioning (Problems 1 and 2 when 0 < λ < 1)

Lastly, we study Problem 1 and Problem 2 in the most general case, i.e., 0 < λ < 1. We use the proposed algorithms for SFA (Problem 1 with λ = 0) and SLB (Problem 2 with λ = 0) as building blocks to design algorithms for the general scenario (0 < λ < 1). We first generalize GREEDSAT, leading to GENERALGREEDSAT, for Problem 1. For Problem 2, we generalize LOVÁSZROUND to obtain a Lovász extension based algorithm. Almost the same as GREEDSAT, in GENERALGREEDSAT we define an intermediate objective $\bar F_\lambda^c(\pi) = \frac{1}{m} \sum_{i=1}^m \min\{\bar\lambda f_i(A_i^\pi) + \frac{\lambda}{m} \sum_{j=1}^m f_j(A_j^\pi),\ c\}$. Following the same algorithmic design as GREEDSAT, GENERALGREEDSAT differs from GREEDSAT only in Line 6, where the submodular welfare problem is defined on the intermediate objective $\bar F_\lambda^c(\pi)$. Letting α be the guarantee for solving the submodular welfare problem, GENERALGREEDSAT approximates Problem 1 with the following bounds:

Theorem 3.1. Given ε, α, and 0 ≤ λ ≤ 1, GENERALGREEDSAT finds a partition $\hat\pi$ that satisfies $\bar\lambda \min_i f_i(A_i^{\hat\pi}) + \frac{\lambda}{m} \sum_{i=1}^m f_i(A_i^{\hat\pi}) \ge \lambda \alpha (\mathrm{OPT} - \epsilon)$, where $\mathrm{OPT} = \max_{\pi \in \Pi} \bar\lambda \min_i f_i(A_i^\pi) + \frac{\lambda}{m} \sum_{i=1}^m f_i(A_i^\pi)$. Moreover, let $F_{\lambda,i}(\pi) = \bar\lambda f_i(A_i^\pi) + \frac{\lambda}{m} \sum_{j=1}^m f_j(A_j^\pi)$. Given any $0 < \delta < \alpha$, there is a set $I \subseteq \{1, \dots, m\}$ with $|I| \ge \lceil (\alpha - \delta) m \rceil$ such that $F_{\lambda,i}(\hat\pi) \ge \max\{\frac{\delta}{1 - \alpha + \delta}, \lambda \alpha\} (\mathrm{OPT} - \epsilon)$ for all $i \in I$.

Theorem 3.1 generalizes Theorem 2.1: when λ = 0, it recovers the bi-criterion guarantee in Theorem 2.1 for the worst-case scenario. Moreover, Theorem 3.1 implies that the factor of α for the average-case objective can almost be recovered by GENERALGREEDSAT if λ = 1. It also gives an improved guarantee as λ increases, suggesting that Problem 1 becomes easier as the mixed objective weights the average-case objective more heavily. We also point out that the optimality guarantee of GENERALGREEDSAT smoothly interpolates between the two extreme cases in terms of λ.

Next, we focus on Problem 2 for general λ. We generalize LOVÁSZROUND, leading to GENERALLOVÁSZROUND. Almost the same as LOVÁSZROUND, GENERALLOVÁSZROUND differs only in Line 2, where Problem 2 is relaxed to the following convex program:

$\min_{x_1, \dots, x_m \in [0,1]^n} \bar\lambda \max_i \hat f_i(x_i) + \frac{\lambda}{m} \sum_{j=1}^m \hat f_j(x_j), \quad \text{s.t. } \sum_{i=1}^m x_i(j) \ge 1, \text{ for } j = 1, \dots, n \qquad (2)$

After solving for the fractional solution $\{x_i^*\}_{i=1}^m$ of the convex program, GENERALLOVÁSZROUND rounds it to a partition using the same rounding technique as LOVÁSZROUND. The following theorem holds:

Theorem 3.2. GENERALLOVÁSZROUND is guaranteed to find a partition $\hat\pi \in \Pi$ such that $\max_i \big[ \bar\lambda f_i(A_i^{\hat\pi}) + \frac{\lambda}{m} \sum_{j=1}^m f_j(A_j^{\hat\pi}) \big] \le m \min_{\pi \in \Pi} \max_i \big[ \bar\lambda f_i(A_i^\pi) + \frac{\lambda}{m} \sum_{j=1}^m f_j(A_j^\pi) \big]$.

Theorem 3.2 generalizes Theorem 2.3 when λ = 0. Moreover, we achieve a factor of $m$ by GENERALLOVÁSZROUND for any λ.
Though the approximation guarantee is independent of λ, the algorithm naturally exploits the trade-off between the worst-case and average-case objectives in terms of λ.
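For completeness, a sketch (ours) of the θ-rounding step (Lines 3-6 of Alg. 2) shared by LOVÁSZROUND and GENERALLOVÁSZROUND; the fractional solution `xs` is assumed to come from any convex solver applied to program (1) or (2):

```python
import numpy as np

def theta_round(xs):
    """Round a fractional solution to a partition (Lines 3-6 of Alg. 2).

    xs: (m, n) array, feasible for (1)/(2), so each column sums to >= 1.
    Item j joins a block maximizing x_i(j); feasibility makes that value
    at least 1/m, which is what drives the factor-m guarantee.
    """
    m, n = xs.shape
    blocks = [set() for _ in range(m)]
    for j in range(n):
        blocks[int(np.argmax(xs[:, j]))].add(j)
    return blocks
```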

References

[1] A. Asadpour and A. Saberi. An approximation algorithm for max-min fair allocation of indivisible goods. In SICOMP.
[2] C. Chekuri and A. Ene. Approximation algorithms for submodular multiway partition. In FOCS.
[3] C. Chekuri and A. Ene. Submodular cost allocation problem and applications. In Automata, Languages and Programming. Springer.
[4] A. Ene, J. Vondrák, and Y. Wu. Local distribution and the symmetry gap: Approximability of multiway partitioning problems. In SODA.
[5] M. Fisher, G. Nemhauser, and L. Wolsey. An analysis of approximations for maximizing submodular set functions II. In Polyhedral Combinatorics.
[6] S. Fujishige. Submodular Functions and Optimization, volume 58. Elsevier.
[7] M. Goemans, N. Harvey, S. Iwata, and V. Mirrokni. Approximating submodular functions everywhere. In SODA.
[8] D. Golovin. Max-min fair allocation of indivisible goods. Technical Report CMU-CS.
[9] R. Iyer, S. Jegelka, and J. Bilmes. Monotone closure of relaxed constraints in submodular optimization: Connections between minimization and maximization: Extended version.
[10] S. Jegelka and J. Bilmes. Submodularity beyond submodular energies: coupling edges in graph cuts. In CVPR.
[11] S. Khot and A. Ponnuswami. Approximation algorithms for the max-min allocation problem. In APPROX.
[12] A. Krause, B. McMahan, C. Guestrin, and A. Gupta. Robust submodular observation selection. In JMLR.
[13] M. Li, D. Andersen, and A. Smola. Graph partitioning via parallel submodular approximation to accelerate distributed machine learning. arXiv preprint.
[14] M. Narasimhan, N. Jojic, and J. A. Bilmes. Q-clustering. In NIPS.
[15] Z. Svitkina and L. Fleischer. Submodular approximation: Sampling-based algorithms and lower bounds. In FOCS.
[16] J. Vondrák. Optimal approximation for the submodular welfare problem in the value oracle model. In STOC.
[17] K. Wei, R. Iyer, and J. Bilmes. Submodularity in data subset selection and active learning. In ICML.
[18] L. Zhao, H. Nagamochi, and T. Ibaraki. On generalized greedy splitting algorithms for multiway partition problems. Discrete Applied Mathematics, 143(1).
