Exact Algorithms for 2-Clustering with Size Constraints in the Euclidean Plane


1 Exact Algorithms for 2-Clustering with Size Constraints in the Euclidean Plane. Alberto Bertoni, Massimiliano Goldwurm, Jianyi Lin. Dipartimento di Informatica, Università degli Studi di Milano. SOFSEM 2015, 41st Int. Conf. on Current Trends in Theory and Practice of Computer Science, Pec pod Sněžkou, Czech Republic, January 24-29, 2015.

2 Introduction. Clustering Problem. Instance: X ⊆ R^d with |X| = n, and m ∈ N with 1 < m < n. Solution: a partition {A_1, A_2, ..., A_m} of X that is optimal with respect to a given criterion (weight, variance, ...).

3 Introduction. Application areas: image analysis, bioinformatics, unsupervised learning, pattern recognition, data mining, statistical data analysis. Traditional approaches: heuristics (e.g. K-Means), approximate solutions, unknown or exponential worst-case computation time. Goals of our research: exact solutions, complexity results (e.g. NP-hardness), efficient algorithms in the easy cases (polynomial time), size-constrained version of the problem.

4 Problems. Formal definitions (dimension d, ℓ_p norm, p ≥ 1). Given X = {x_1, ..., x_n} ⊆ R^d: a cluster is a set A ⊆ X, A ≠ ∅; its centroid is C_A = argmin_{μ ∈ R^d} Σ_{a ∈ A} ‖a − μ‖_p^p; the weight of A is W_p(A) = Σ_{a ∈ A} ‖a − C_A‖_p^p; the weight of a partition {A_1, ..., A_m} of X is W_p(A_1, ..., A_m) = W_p(A_1) + W_p(A_2) + ... + W_p(A_m).
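As an illustration of these definitions for p = 2, where the centroid is simply the component-wise mean, here is a minimal Python/NumPy sketch (not part of the slides; the function names are ours):

```python
import numpy as np

def centroid_l2(A):
    """Centroid of cluster A under the l2 norm (p = 2): the component-wise
    mean minimizes sum_{a in A} ||a - mu||_2^2."""
    A = np.asarray(A, dtype=float)
    return A.mean(axis=0)

def weight_l2(A):
    """W_2(A) = sum over a in A of ||a - C_A||_2^2."""
    A = np.asarray(A, dtype=float)
    c = centroid_l2(A)
    return float(((A - c) ** 2).sum())

def partition_weight_l2(clusters):
    """Weight of a partition {A_1, ..., A_m}: W_2(A_1) + ... + W_2(A_m)."""
    return sum(weight_l2(A) for A in clusters)

# Example: a 2-partition of five points in the plane
A = [(0.0, 0.0), (1.0, 0.0)]
B = [(5.0, 5.0), (6.0, 5.0), (5.0, 6.0)]
print(partition_weight_l2([A, B]))  # 0.5 + 4/3 = 1.833...
```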

5 Problems. Clustering Problem (p ≥ 1). Instance: X = {x_1, ..., x_n} ⊆ R^d with |X| = n; m ∈ N with 1 < m < n. Solution: an m-partition {A_1, ..., A_m} of X with minimum W_p(A_1, ..., A_m). Parameters: p, d, m, n. Known results:
- NP-hard for m = 2 and arbitrary d (p = 2) [Aloise et al. 09]
- NP-hard for d = 2 and arbitrary m (p = 2) [Dasgupta 07, Mahajan et al. 09]
- p = 2: Minimum Sum of Squares Clustering [Aloise, Hansen 09]
- k-means heuristic [Lloyd 57, MacQueen 67, Vattani 09]

6 Problems. Clustering Problems with Constraints. Constraint types: must-link, cannot-link, diameter, size [Wagstaff-Cardie 00, Bradley et al. 00, Zhu et al. 10].
Size-Constrained-Clustering(p) (SCC). Instance: X = {x_1, ..., x_n} ⊆ R^d with |X| = n; integers k_1, k_2, ..., k_m with Σ_i k_i = n. Solution: an m-partition {A_1, ..., A_m} of X with |A_i| = k_i for i = 1, ..., m and minimum W_p(A_1, ..., A_m).
Variants: m-SCC (fixed m); SCC-d (fixed d); m-SCC-d (both m and d fixed).

7 Previous Results. Complexity Results.
1) For every p > 1, 2-SCC is NP-hard for arbitrary d [BGLS 12]; reduction from MINIMUM-BISECTION via HALF-PARTITION, holding for every p > 1.
2) For every p ≥ 1, SCC-1 is NP-hard for arbitrary m [Saccà 10]; reduction from 3-PARTITION. Notice: Clustering(d = 1, p = 2) ∈ FP.
3) SCC-2 with p = 2 and k_i ∈ {2, 3} is NP-hard [Lin 13].

8 Previous Results.
4) There exists p ∈ Q \ N for which centroid localization is difficult.
Problem p-Centroid Localization (p-CL). Instance: X = {x_1, ..., x_n} ⊆ N, h ∈ N. Question: is C_X > h?
Problem SQRT-SUM [Garey-Johnson 76, Allender et al. 06]. Instance: a_1, ..., a_r, b_1, ..., b_s ∈ N. Question: is √a_1 + ... + √a_r > √b_1 + ... + √b_s? Whether SQRT-SUM ∈ NP is open.
SQRT-SUM reduces in polynomial time to (3/2)-CL [Saccà 10, BGLS 12].

9 Previous Results. Tractable cases (necessary condition: fixed p ∈ N, d and m).
1) 2-SCC-1 (with p ∈ N given in unary) is in FP [BGLS 12].
2) For all p ∈ N+ and d ∈ N+, 2-SCC-d ∈ FP [Lin 13].
3) In R² with the Manhattan norm (d = 2, p = 1) [BGLP 14, Pini 14]: for the 2-SCC-2 problem with p = 1 on input (X, k), |X| = n, the Plain Algorithm runs in T(n, k) = O(n² log n) time, and the (simpler) Full Algorithm (for all k = 1, ...) in T(n) = O(n³ log n) time.

10 Algorithms for 2-SCC-2 with p = 2.
2-SCC-2 Problem (p = 2). Instance: X = {x_1, ..., x_n} ⊆ R² with |X| = n; k ∈ {1, ..., ⌊n/2⌋}. Solution: a 2-partition {A, B} of X with |A| = k such that W_2(A, B) = Σ_{a ∈ A} ‖a − C_A‖_2² + Σ_{b ∈ B} ‖b − C_B‖_2² is minimum.
Main results:
- Plain Algorithm in T(n, k) = O(n ∛k log² n) time
- Full Algorithm (for all k = 1, ..., ⌊n/2⌋) in T(n) = O(n² log n) time
(both working in O(n) space)
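To make the objective concrete, the problem can be checked on very small instances against an exhaustive baseline; the following sketch (our illustration, not the plain algorithm above) enumerates all k-subsets:

```python
from itertools import combinations
import numpy as np

def scc2_bruteforce(X, k):
    """Exact 2-SCC-2 (p = 2) by exhaustive enumeration of all k-subsets.
    Exponential time; only a reference baseline for small n, not the
    O(n * k^(1/3) * log^2 n) plain algorithm from the slides."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    best_w, best_A = float("inf"), None
    for idx in combinations(range(n), k):
        A = X[list(idx)]
        B = np.delete(X, list(idx), axis=0)
        w = ((A - A.mean(axis=0)) ** 2).sum() + ((B - B.mean(axis=0)) ** 2).sum()
        if w < best_w:
            best_w, best_A = w, set(idx)
    return best_A, best_w
```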

11 Ingredients:
1) separation result (straight line) [BGLS 12]
2) number of k-sets in the plane is O(n ∛k) [Erdös et al. 73, Dey 98]
3) dynamic data structure for the convex hull with insert and delete operations in O(log² n) time [Overmars-van Leeuwen 81]

12 Separation Result [BGLS 12].
Hypotheses: p > 1; {A, B} is an optimal solution of 2-SCC-d with |A| = k.
Thesis: there exists c ∈ R such that for every u ∈ R^d: u ∈ A implies ‖u − C_A‖_p^p − ‖u − C_B‖_p^p < c, and u ∈ B implies ‖u − C_A‖_p^p − ‖u − C_B‖_p^p > c.
Hence ‖x − C_A‖_p^p − ‖x − C_B‖_p^p = c (x ∈ R^d) is a hypersurface separating A and B.
If d = 2 = p, the hypersurface is the straight line 2x(C_{Bx} − C_{Ax}) + 2y(C_{By} − C_{Ay}) = c + ‖C_B‖² − ‖C_A‖², hence A is a k-set.
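A quick numerical check of this thesis for p = 2 (our illustration; it only verifies separation of a given 2-partition, it does not find one):

```python
import numpy as np

def separation_gap(A, B):
    """For a candidate optimal 2-partition {A, B} with p = 2, compute
    f(u) = ||u - C_A||^2 - ||u - C_B||^2 for every point.  The separation
    result says that max_{a in A} f(a) < c < min_{b in B} f(b) for some
    threshold c, i.e. the line f(x, y) = c separates A from B."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    f = lambda P: ((P - cA) ** 2).sum(axis=1) - ((P - cB) ** 2).sum(axis=1)
    # The partition is linearly separated iff the first value < the second.
    return f(A).max(), f(B).min()
```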

13 Idea of the plain algorithm:
1) build an initial k-set A; compute W_2(A, Ā), the convex hulls Conv(A) and Conv(Ā), and a bitangent (a, b);
2) S := A, W := W_2(A, Ā).
[Figure: Conv(A) and Conv(Ā) with the bitangent (a, b), a on Conv(A) and b on Conv(Ā).]

14 [Figure: swapping a and b along the bitangent (a, b) turns Conv(A), Conv(Ā) into the next hulls Conv(A'), Conv(Ā') with a new bitangent (a', b').]

15 NextBitangent iteration (0).
[Figure: Conv(A') and Conv(Ā') with v = a, v' = Succ(v) on Conv(Ā') and u = b, u' = Succ(u) on Conv(A').]
Procedure NextBitangent(a, b)
  A' := Insert(Delete(A, a), b)
  Ā' := Insert(Delete(Ā, b), a)
  u := b; u' := Succ(u)
  v := a; v' := Succ(v)
  repeat
    if (v, v') Left(u, v) then u := u'; u' := Succ(u)
    elseif (u, u') Right(u, v) then v := v'; v' := Succ(v)
    elseif (u, u') Right(v, v') then v := v'; v' := Succ(v)
    else u := u'; u' := Succ(u)
  until (u, u') Right(u, v) and (v, v') Left(u, v)
  return (u, v)
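The Left/Right tests used by NextBitangent are standard orientation predicates; one plausible reading (an assumption on our part, since the slides do not define them) is that "(v, v') Left(u, v)" means that v' lies to the left of the directed line from u to v, which is a cross-product sign test:

```python
def cross(o, a, b):
    """z-component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def left(u, v, p):
    """True iff point p lies strictly to the left of the directed line u -> v."""
    return cross(u, v, p) > 0

def right(u, v, p):
    """True iff point p lies strictly to the right of the directed line u -> v."""
    return cross(u, v, p) < 0
```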

16-22 NextBitangent iterations (1)-(7).
[Figures only: the pointers u, u' on Conv(A') and v, v' on Conv(Ā') advance step by step, according to the cases on the next two slides, until the exit condition holds and the new bitangent (u, v) = (a', b') is returned; the NextBitangent procedure shown above is repeated unchanged on each of these slides.]

23 Loop-exit condition: E ≡ (u, u') Right(u, v) ∧ (v, v') Left(u, v).
Three possible enter cases (¬E):
1) (v, v') Left(u, v) ⟹ Move(u)
2) (u, u') Right(u, v) ⟹ Move(v)
[Figures: the configurations of the line (u, v) illustrating cases 1 and 2.]

24 3) (u, u') Left(u, v) ∧ (v, v') Right(u, v):
3a) (u, u') Right(v, v') ⟹ Move(v)
3b) (u, u') Left(v, v') ⟹ Move(u)
[Figures: the configurations of the line (u, v) illustrating cases 3a and 3b.]

25 Full Algorithm (idea):
1) for k = 1, 2, ..., ⌊n/2⌋ do { s_k := ⊥; w_k := +∞ }
2) (q_1, ..., q_n) := Sort_y(X) (w.r.t. the y-coordinate order <_y)
3) for i = 1, 2, ..., n do
     A := {q ∈ X : q ≤_y q_i}; k := |A|
     W(A, Ā) := weight of (A, Ā)
     if W(A, Ā) < w_k then Update(s_k, w_k)   { s_k := (A, Ā); w_k := W(A, Ā) }
     ℓ := horizontal line through q_i
     repeat
       turn ℓ counter-clockwise around q_i till the next point q in X
       update A (insert or delete q)
       update k = |A| and W(A, Ā)
       if W(A, Ā) < w_k then Update(s_k, w_k)
     until A = initial cluster
4) return s_k for all k
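The rotating-line idea can be emulated naively: by the separation result, each optimal cluster is a prefix of the points sorted along some direction, so trying directions just off every pair-defined critical direction and scoring all prefixes with prefix sums gives a slower but simple reference implementation (our sketch, assuming generic inputs and a small enough perturbation eps; it is not the O(n² log n) algorithm above):

```python
import numpy as np

def best_split_per_size(X, eps=1e-7):
    """Naive sweep over candidate separating directions for p = 2.

    Candidate directions are taken just before/after each critical direction
    (those perpendicular to a difference of two input points); for generic
    inputs and small enough eps this realizes every linearly separable split.
    Roughly O(n^3 log n) time; an illustration, not the slides' algorithm.
    """
    X = np.asarray(X, dtype=float)
    n = len(X)
    best_w = {k: float("inf") for k in range(1, n // 2 + 1)}
    best_A = {k: None for k in range(1, n // 2 + 1)}

    angles = set()
    for i in range(n):
        for j in range(n):
            if i != j:
                d = X[j] - X[i]
                phi = np.arctan2(d[1], d[0]) + np.pi / 2  # critical direction
                angles.add(phi - eps)
                angles.add(phi + eps)

    sq = (X ** 2).sum(axis=1)                 # ||x||^2 for each point
    for phi in angles:
        u = np.array([np.cos(phi), np.sin(phi)])
        order = np.argsort(X @ u)             # points sorted along direction u
        cs = np.cumsum(X[order], axis=0)      # prefix sums of coordinates
        csq = np.cumsum(sq[order])            # prefix sums of squared norms
        tot, totsq = cs[-1], csq[-1]
        for k in range(1, n // 2 + 1):
            # W_2(A) = sum ||a||^2 - ||sum a||^2 / |A|, likewise for the complement
            wA = csq[k - 1] - (cs[k - 1] @ cs[k - 1]) / k
            wB = (totsq - csq[k - 1]) - ((tot - cs[k - 1]) @ (tot - cs[k - 1])) / (n - k)
            if wA + wB < best_w[k]:
                best_w[k] = wA + wB
                best_A[k] = frozenset(order[:k].tolist())
    return best_A, best_w
```

On small random instances its output can be cross-checked against the brute-force baseline sketched earlier.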

26 Conclusions. Further current work:
- Manhattan norm: 2-SCC-2 under ℓ_1 (p = 1) solvable in O(n² log n) time [BGLP 14]
- Extension to higher dimensions: for all d ∈ N+, 2-SCC-d ∈ FP (for p ∈ N+) [Lin 13]
- Extension to a larger number of clusters (m > 2)
- Relaxing the number of clusters: M-RCC-d = clustering problem in dimension d, arbitrary number m of clusters, cluster sizes in M; {2, 3}-RCC-2 is NP-hard (p = 2) [Lin 13]
THANKS FOR YOUR ATTENTION
