Approximation Algorithms: Asymptotic Polynomial-Time Approximation (APTAS) and Randomized Approximation Algorithms

Jens Egeblad, November 29th, 2006

Agenda

First lesson (Asymptotic Approximation): Bin-Packing (BPP): formulation, a 2-approximation, an APTAS (Asymptotic PTAS).

Second lesson (Randomized Algorithms): Maximum Satisfiability (MAX-SAT): formulation, two randomized algorithms, self-reducibility, derandomization.

Bin-Packing Formulation

Bin-Packing: Given n items with sizes $a_1, \dots, a_n \in (0, 1]$, find a packing into unit-sized bins that minimizes the number of bins used. NP-hard, of course!

Applications: cutting of textile, wood, metal, etc. Generalizable to higher dimensions (1D-BPP, 2D-BPP, 3D-BPP, ...).

First-Fit Algorithm

First-Fit:
1. Pick an unpacked item $a_i$.
2. Run through the partially packed bins $B_1, \dots, B_k$ and put $a_i$ into the first bin in which it fits.
3. If $a_i$ does not fit in any bin, open a new bin $B_{k+1}$.
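
A minimal sketch of First-Fit in Python; the slides give only the pseudocode above, so the function and variable names are my own:

```python
def first_fit(items):
    """Pack items (sizes in (0, 1]) into unit bins with First-Fit."""
    bins = []       # contents of each open bin
    residual = []   # remaining capacity of each open bin
    for a in items:
        for j, cap in enumerate(residual):
            if a <= cap:                 # first bin with enough room
                bins[j].append(a)
                residual[j] -= a
                break
        else:                            # no bin fits: open a new one
            bins.append([a])
            residual.append(1.0 - a)
    return bins

print(len(first_fit([0.6, 0.5, 0.4, 0.3, 0.2])))  # 2 bins
```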

First-Fit Analysis

Theorem: First-Fit is a 2-approximation algorithm.

Proof: If the algorithm uses m bins, then at least m - 1 of them must be more than half full (no two bins can both be at most half full: First-Fit would have put the contents of the second into the first). Therefore

$\sum_{i=1}^{n} a_i > \frac{m-1}{2}$,

i.e. the space we occupy is more than half the number of these bins. Since $\sum_{i=1}^{n} a_i$ is a lower bound on OPT we get

$\frac{m-1}{2} < \sum_{i=1}^{n} a_i \le OPT$,

so $m - 1 < 2\,OPT$ and $m \le 2\,OPT$.

Negative Result

Theorem: Assume P ≠ NP. Then there is no approximation algorithm for BPP with an approximation guarantee of $\frac{3}{2} - \epsilon$ for any $\epsilon > 0$.

Proof: Reduce from the set-partition problem (SPP): Given a set $S = \{a_1, \dots, a_n\}$ of numbers, determine whether S can be partitioned into two sets A and $\bar{A} = S \setminus A$ such that $\sum_{x \in A} x = \sum_{x \in \bar{A}} x$.

Now assume a $(\frac{3}{2} - \epsilon)$-approximation algorithm A exists for BPP. Given an instance $S = \{a_1, \dots, a_n\}$ of SPP, construct a BPP instance with n items and bin size $\frac{1}{2} \sum_{i=1}^{n} a_i$. If the SPP instance is a yes-instance then OPT = 2, and A must return a 2-bin packing, since $(\frac{3}{2} - \epsilon) \cdot 2 = 3 - 2\epsilon < 3$. So A would decide SPP in polynomial time.

First-Fit Decreasing Bin-Packing Algorithm

First-Fit Decreasing (FFD):
1. Sort items by size in non-increasing order.
2. Run First-Fit.

[Johnson 73] showed that for FFD, $FFD(I) \le \frac{11}{9} OPT(I) + 4 = 1.222\ldots \cdot OPT(I) + 4$. As OPT(I) increases, the approximation guarantee approaches $\frac{11}{9}$, because the additive 4 becomes insignificant compared to $\frac{11}{9} OPT(I)$.
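
Given the first_fit sketch above, FFD is a one-liner (again my own naming):

```python
def first_fit_decreasing(items):
    # Sort by size, largest first, then run plain First-Fit.
    return first_fit(sorted(items, reverse=True))
```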

Asymptotic Approximation Guarantee

Given a minimization problem Π with instances I, let $R_A(I) = A(I)/OPT(I)$.

Definition (approximation guarantee): $R_A = \inf\{r \ge 1 : R_A(I) \le r \text{ for all } I\}$. So $R_{FFD} = 5$.

Definition (asymptotic approximation guarantee): $R_A^{\infty} = \inf\{r \ge 1 : \exists N > 0 \text{ s.t. } R_A(I) \le r \text{ for all } I \text{ with } OPT(I) > N\}$. So $R_{FFD}^{\infty} = \frac{11}{9}$.

(Figure: $A(I)/OPT(I)$ plotted against $OPT(I)$; the ratio stays below $R_A$ and approaches $R_A^{\infty}$.)

Approximation Schemes

For a problem Π with instances I:

FPTAS (Fully Polynomial-Time Approximation Scheme): For any $\epsilon > 0$ there exists an algorithm $A_\epsilon$ s.t. $A_\epsilon(I) \le (1 + \epsilon)OPT(I)$. The running time of $A_\epsilon$ is polynomial in $|I|$ and $\frac{1}{\epsilon}$.

PTAS (Polynomial-Time Approximation Scheme): For any $\epsilon > 0$ there exists an algorithm $A_\epsilon$ s.t. $A_\epsilon(I) \le (1 + \epsilon)OPT(I)$. The running time of $A_\epsilon$ is polynomial in $|I|$.

APTAS (Asymptotic Polynomial-Time Approximation Scheme): For any $\epsilon > 0$ there exist an algorithm $A_\epsilon$ and $N_0 > 0$ s.t. $A_\epsilon(I) \le (1 + \epsilon)OPT(I)$ for $OPT(I) > N_0$. The running time of $A_\epsilon$ is polynomial in $|I|$.

Complexity Classes

(Diagram: nested classes P ⊆ FPTAS ⊆ PTAS ⊆ APTAS ⊆ APX ⊆ NP.)

APX: problems that have a finite approximation factor.

Asymptotic PTAS For BPP (1)

APTAS (recalled): For any $\epsilon' > 0$ there exist an algorithm $A_{\epsilon'}$ and $N_0 > 0$, s.t. $A_{\epsilon'}(I) \le (1 + \epsilon')OPT(I)$ for $OPT(I) > N_0$. The running time of $A_{\epsilon'}$ is polynomial in $|I|$.

Theorem: For any $\epsilon$ with $0 < \epsilon \le \frac{1}{2}$ there is a polynomial-time algorithm $A_\epsilon$ which finds a packing using at most $(1 + 2\epsilon)OPT(I) + 1$ bins. (Proof: next slides.)

The algorithms $A_\epsilon$ form an APTAS: given $\epsilon'$, choose $N_0 > \frac{3}{\epsilon'}$ and $\epsilon = \frac{\epsilon'}{3}$. For $OPT(I) > N_0$:

$A_\epsilon(I) \le (1 + 2\epsilon)OPT(I) + 1 = \left(1 + 2\epsilon + \frac{1}{OPT(I)}\right)OPT(I) \le \left(1 + \frac{2\epsilon'}{3} + \frac{1}{N_0}\right)OPT(I) \le \left(1 + \frac{2\epsilon'}{3} + \frac{\epsilon'}{3}\right)OPT(I) = (1 + \epsilon')OPT(I)$.

Asymptotic PTAS For BPP (2)

Lemma 1: Given $\epsilon > 0$ and an integer $K > 0$, consider the restricted BPP with items $a_1, \dots, a_n$ where each $a_i \ge \epsilon$ and the number of distinct item sizes is K. There is a polynomial-time algorithm for this restricted BPP.

Proof: $M = \lfloor \frac{1}{\epsilon} \rfloor$ is the maximum number of items in one bin. Number of ways to pack a bin: $R := \binom{M+K}{M}$, a constant. Number of feasible packings: $P := \binom{n+R}{R}$, polynomial in n ($\le \frac{(n+R)^R}{R!}$). Enumerate all packings and pick the best.

Note: For M = 5 and K = 5, $\binom{5+5}{5} = \frac{10!}{5!\,5!} = 252$, so for n = 20, $\binom{20+252}{252} \approx 9.8 \cdot 10^{29}$. (The algorithms have very high running times.)
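
A quick sanity check of the counts on this slide, using Python's math.comb:

```python
from math import comb

M, K = 5, 5
R = comb(M + K, M)   # ways to pack a single bin: 252
n = 20
P = comb(n + R, R)   # upper bound on feasible packings, roughly 9.8e29
print(R, P)
```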

APTAS for BPP (3)

Lemma 2: Given $\epsilon > 0$, consider the restricted BPP with items $a_1, \dots, a_n$ where each $a_i \ge \epsilon$. There is a polynomial-time algorithm with approximation guarantee $(1 + \epsilon)$.

Proof: Sort $a_1, \dots, a_n$ by increasing size and partition them into $K = \lceil \frac{1}{\epsilon^2} \rceil$ groups, each having at most $Q = \lfloor n\epsilon^2 \rfloor$ items. Construct instance J by rounding each item size up to the largest size in its group; J has at most K distinct sizes, so by Lemma 1 we can find an optimal packing for J (which is also feasible for I). Construct instance J′ by rounding each item size down to the smallest size in its group; then $OPT(J') \le OPT(I)$. An optimal packing for J′ yields a packing for J except for the Q largest items of J, so:

$OPT(J) \le OPT(J') + Q \le OPT(I) + Q$

Since $a_i \ge \epsilon$ we have $OPT(I) \ge n\epsilon$. Therefore $Q = \lfloor n\epsilon^2 \rfloor \le \epsilon\,OPT(I)$, so $OPT(J) \le (1 + \epsilon)OPT(I)$.
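
A sketch of the grouping-and-rounding step from the proof (names are mine; the optimal packing of the rounded instance would come from the Lemma 1 enumeration, omitted here):

```python
from math import floor

def round_up_instance(items, eps):
    """Round item sizes up to their group maximum (instance J in the proof).

    Groups of Q = floor(n * eps^2) items; at most ~1/eps^2 distinct sizes.
    """
    items = sorted(items)
    n = len(items)
    q = max(1, floor(n * eps * eps))
    rounded = []
    for start in range(0, n, q):
        group = items[start:start + q]
        rounded += [group[-1]] * len(group)   # round up to the group max
    return rounded

print(round_up_instance([0.6, 0.65, 0.7, 0.8, 0.9, 0.95], 0.6))
# -> [0.65, 0.65, 0.8, 0.8, 0.95, 0.95]
```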

APTAS for BPP (4)

Theorem: For any $\epsilon$ with $0 \le \epsilon \le \frac{1}{2}$ there is a polynomial-time algorithm $A_\epsilon$ which finds a packing using at most $(1 + 2\epsilon)OPT(I) + 1$ bins.

Proof: Obtain I′ from I by discarding the items smaller than $\epsilon$ (so $OPT(I') \le OPT(I) = OPT$). Obtain a packing of I′ using Lemma 2, with at most $(1 + \epsilon)OPT(I')$ bins. Then pack the small items in First-Fit manner. If no additional bins are opened, we are done. Otherwise, let M be the total number of bins. Each new bin was opened because a small item (of size $< \epsilon$) did not fit anywhere else, so $M - 1$ bins are at least $1 - \epsilon$ full. Using this lower bound, $(M - 1)(1 - \epsilon) < OPT$, and we get:

$M \le \frac{OPT}{1 - \epsilon} + 1 \le (1 + 2\epsilon)OPT + 1$ for $0 \le \epsilon \le \frac{1}{2}$,

since $(1 + 2\epsilon)(1 - \epsilon) = 1 + \epsilon - 2\epsilon^2 = 1 + \epsilon(1 - 2\epsilon) \ge 1$, i.e. $\frac{1}{1 - \epsilon} \le 1 + 2\epsilon$, for $0 \le \epsilon \le \frac{1}{2}$.

BPP Recap

The First-Fit algorithm is a 2-approximation. Complexity classes: APTAS, PTAS, FPTAS. APTAS for BPP: consider a restricted problem; enumerate all solutions of the restricted problem; show that the non-restricted problem can be solved within the approximation factor by changing the sizes of the items and placing the small items greedily.

2nd Lesson (Randomized Algorithms)

Maximum Satisfiability (problem formulation)
Randomized 1/2-factor algorithm
Self-reducibility
Derandomization by self-reducibility and conditional expectation
IP formulation
LP-relaxation based factor $(1 - \frac{1}{e})$ randomized algorithm
Combining the algorithms to get a 3/4-factor randomized algorithm

Maximum Satisfiability (Formulation)

Conjunctive normal form: a formula f on boolean variables $x_1, \dots, x_n$ of the form $f = \bigwedge_{c \in C} c$, where each clause c is a disjunction of literals (a boolean variable or its negation). Example: $f = (x_1 \lor x_2) \land (\lnot x_3 \lor x_4) \land (x_1 \lor x_2 \lor x_4)$

MAX-SAT: Given a set of clauses C on boolean variables $x_1, \dots, x_n \in \{0, 1\}$ and weights $w_c \ge 0$ for each c, maximize $\sum_{c \in C} w_c z_c$, where $z_c \in \{0, 1\}$ is 1 iff c is satisfied.

Definition: Let size(c) be the number of literals in clause c. MAXk-SAT: the restriction to instances where the number of literals in each clause is at most k.

Note: MAX-SAT and MAXk-SAT are NP-hard for $k \ge 2$.

A Randomized 1/2-Factor Algorithm

Algorithm Flip-A-Coin: Set $x_i$ to 1 with probability $\frac{1}{2}$, for $i = 1, \dots, n$. (Polynomial time, of course!)
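
A minimal sketch, using DIMACS-style clauses (a positive integer i for $x_i$, a negative one for its negation) and the weighted instance from the self-reducibility example below; all names are my own:

```python
import random

def satisfied_weight(clauses, weights, assignment):
    """Total weight of the clauses satisfied by a truth assignment."""
    return sum(w for c, w in zip(clauses, weights)
               if any(assignment[abs(l)] == (l > 0) for l in c))

def flip_a_coin(n):
    """Set each variable to True with probability 1/2."""
    return {i: random.random() < 0.5 for i in range(1, n + 1)}

clauses = [[1], [-1, -2], [1, 2], [2, 3]]
weights = [15, 30, 10, 30]
tau = flip_a_coin(3)
print(tau, satisfied_weight(clauses, weights, tau))
```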

Analysis of the 1/2-Factor Algorithm

Define: W, random variable: the weight of the satisfied clauses; $W_c$, random variable: the weight contributed by clause c. Then $W = \sum_{c \in C} W_c$ and $E[W_c] = w_c \Pr[c \text{ is satisfied}]$.

Lemma: If size(c) = k then $E[W_c] = \alpha_k w_c$, where $\alpha_k = 1 - \frac{1}{2^k}$.

Proof: c is not satisfied iff all its literals are false. The probability of this is $(\frac{1}{2})^k$.

Since $\sum_{c \in C} w_c \ge OPT$ and $1 - \frac{1}{2^k} \ge \frac{1}{2}$ for $k \ge 1$, we have:

$E[W] = \sum_{c \in C} E[W_c] \ge \frac{1}{2} \sum_{c \in C} w_c \ge \frac{1}{2} OPT$

Note: Because $\Pr[c \text{ is satisfied}]$ increases as size(c) increases, the algorithm behaves best on instances with large clauses.

Self-Reducibility

Given an oracle for the decision version of some NP optimization problem we can: find the value of an optimal solution by binary search; find an optimal solution by self-reduction (not for all NP problems). Self-reducibility works by repeatedly reducing the problem and using the oracle on the reduced problem to determine properties of an optimal solution.

Self-Reducibility of MAX-SAT

Given an instance I of MAX-SAT with boolean variables $x_1, \dots, x_n$ and an oracle for MAX-SAT:

Calculate OPT(I) with the oracle and binary search. Create instances $I_0$, where $x_1$ is fixed to 0, and $I_1$, where $x_1$ is fixed to 1. Use the oracle and binary search on $I_0$ to determine whether $x_1$ is 0 in an optimal solution, i.e. $OPT(I_0) = OPT(I) \iff x_1 = 0$ in an optimal solution. If $x_1$ is 0 in an optimal solution, continue with $x_2$ and $I_0$; otherwise continue with $x_2$ and $I_1$.
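
A sketch of the self-reduction loop. The slides use a decision oracle plus binary search to get the optimum; here a brute-force maximizer stands in for that oracle, and all names are my own:

```python
from itertools import product

def opt_value(clauses, weights, fixed):
    """Stand-in oracle: best total weight over all extensions of `fixed`."""
    free = sorted({abs(l) for c in clauses for l in c} - set(fixed))
    best = 0
    for bits in product([False, True], repeat=len(free)):
        tau = dict(fixed, **dict(zip(free, bits)))
        val = sum(w for c, w in zip(clauses, weights)
                  if any(tau[abs(l)] == (l > 0) for l in c))
        best = max(best, val)
    return best

def self_reduce(clauses, weights, n):
    """Fix x_1, ..., x_n one by one, keeping the oracle value unchanged."""
    fixed = {}
    opt = opt_value(clauses, weights, fixed)
    for i in range(1, n + 1):
        fixed[i] = False                  # try x_i = 0 first
        if opt_value(clauses, weights, fixed) != opt:
            fixed[i] = True               # x_i must be 1 in an optimum
    return fixed

clauses = [[1], [-1, -2], [1, 2], [2, 3]]
weights = [15, 30, 10, 30]
print(self_reduce(clauses, weights, 3))  # {1: True, 2: False, 3: True}
```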

Self-Reducibility Example

Let I be: $C = \{(x_1), (\lnot x_1 \lor \lnot x_2), (x_1 \lor x_2), (x_2 \lor x_3)\}$

  c:    x_1   ¬x_1 ∨ ¬x_2   x_1 ∨ x_2   x_2 ∨ x_3
  w_c:  15    30            10          30

The oracle gives us OPT(I) = 85.

I_0 (x_1 = 0):

  c:    unsat.   sat.   x_2   x_2 ∨ x_3
  w_c:  15       30     10    30

The oracle gives us OPT(I_0) = 70, so set x_1 = 1.

I_{1,0} (x_1 = 1, x_2 = 0):

  c:    sat.   sat.   sat.   x_3
  w_c:  15     30     10     30

The oracle gives us OPT(I_{1,0}) = 85, so set x_2 = 0.

I_{1,0,0} (x_1 = 1, x_2 = 0, x_3 = 0):

  c:    sat.   sat.   sat.   unsat.
  w_c:  15     30     10     30

This assignment has value 55, so set x_3 = 1.

Self-Reducibility Tree

A tree T is a self-reducibility tree if its internal nodes correspond to reduced problems and the leaves of sub-trees are solutions to the problem rooted at the sub-tree.

(Figure: a binary tree for MAX-SAT; the root is the original problem, the branches at level i fix $x_i = 0$ or $x_i = 1$, and the leaves are the solutions.)

Each internal node at level i corresponds to a partial setting of the variables. Each leaf represents a complete truth assignment.

Derandomization (Self-Reducibility)

Let t be the self-reducibility tree of MAX-SAT, expanded s.t. each node is labeled with $E[W \mid x_1 = a_1, \dots, x_i = a_i]$, where $a_1, \dots, a_i$ is the truth assignment corresponding to this node.

Example: $C = \{(x_1), (\lnot x_1 \lor x_2), (x_1 \lor x_2), (x_1 \lor x_2 \lor x_3)\}$ with weights $w_1, w_2, w_3, w_4$:

$E[W] = \frac{1}{2} w_1 + \frac{3}{2^2} w_2 + \frac{3}{2^2} w_3 + \frac{7}{2^3} w_4$

In the node corresponding to $I_{x_1=0} = \{(\mathrm{false}), (\mathrm{true}), (x_2), (x_2 \lor x_3)\}$ we have

$E[W \mid x_1 = 0] = w_2 + \frac{1}{2} w_3 + \frac{3}{2^2} w_4$

Lemma: The conditional expectation of any node in t can be computed in polynomial time.

Proof: Calculate the sum of the weights of the clauses satisfied by the partial truth assignment at this node, and add the expected weight of the reduced formula.

Derandomization (Cond. Expectation)

Theorem: We can compute, in polynomial time, a path from the root to a leaf such that the conditional expectation of each node on this path is $\ge E[W]$.

Proof: In each node we have

$E[W \mid x_1 = a_1, \dots, x_i = a_i] = \frac{1}{2} E[W \mid x_1 = a_1, \dots, x_i = a_i, x_{i+1} = 0] + \frac{1}{2} E[W \mid x_1 = a_1, \dots, x_i = a_i, x_{i+1} = 1]$

because both assignments are equally likely. Therefore the child with the larger value has conditional expectation at least as large as its parent. We can determine the conditional expectations in polynomial time by the previous lemma, and the number of steps is n.

Deterministic algorithm: Start at the root of t and repeatedly select the child with the larger conditional expectation. This yields a deterministic factor-1/2 algorithm which runs in polynomial time, since evaluating each node takes polynomial time and the depth of the tree is polynomial in n.
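
A sketch of the conditional-expectation descent (same DIMACS-style clause encoding as in the earlier sketch; all names are mine):

```python
def cond_expectation(clauses, weights, fixed):
    """E[W | partial assignment], remaining variables uniform in {0, 1}."""
    total = 0.0
    for c, w in zip(clauses, weights):
        if any(abs(l) in fixed and fixed[abs(l)] == (l > 0) for l in c):
            total += w                    # clause already satisfied
        else:
            free = [l for l in c if abs(l) not in fixed]
            if free:                      # satisfied unless all free literals fail
                total += w * (1 - 0.5 ** len(free))
    return total

def derandomized_half(clauses, weights, n):
    """Greedy descent of the self-reducibility tree by conditional expectation."""
    fixed = {}
    for i in range(1, n + 1):
        e0 = cond_expectation(clauses, weights, {**fixed, i: False})
        e1 = cond_expectation(clauses, weights, {**fixed, i: True})
        fixed[i] = e1 >= e0               # keep the child with larger expectation
    return fixed

clauses = [[1], [-1, -2], [1, 2], [2, 3]]
weights = [15, 30, 10, 30]
print(derandomized_half(clauses, weights, 3))  # {1: True, 2: False, 3: True}
```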

MAX-SAT Integer Program

For each clause $c \in C$, let $S_c^+$ be the set of non-negated variables and $S_c^-$ the set of negated variables occurring in c.

IP:

maximize $\sum_{c \in C} w_c z_c$
s.t. $\sum_{i \in S_c^+} y_i + \sum_{i \in S_c^-} (1 - y_i) \ge z_c$ for $c \in C$,
$z_c \in \{0, 1\}$ for $c \in C$,
$y_i \in \{0, 1\}$ for $i \in \{1, \dots, n\}$.

LP-relaxation:

maximize $\sum_{c \in C} w_c z_c$
s.t. $\sum_{i \in S_c^+} y_i + \sum_{i \in S_c^-} (1 - y_i) \ge z_c$ for $c \in C$,
$0 \le z_c \le 1$ for $c \in C$,
$0 \le y_i \le 1$ for $i \in \{1, \dots, n\}$.

MAX-SAT LP-Based Algorithm

Algorithm LP-based randomized rounding: Solve the LP-relaxation to get a solution $(y, z, OPT_{LP})$. Set $x_i = 1$ with probability $y_i$. (Polynomial time, of course!)
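
A sketch using scipy.optimize.linprog (which minimizes, so the weights are negated); the clause encoding and all names are mine:

```python
import random
import numpy as np
from scipy.optimize import linprog

def lp_round(clauses, weights, n):
    """Solve the MAX-SAT LP-relaxation, then set x_i = 1 w.p. y_i."""
    m = len(clauses)
    # Variables: y_1..y_n, then z_1..z_m; linprog minimizes, so negate w.
    c = np.concatenate([np.zeros(n), -np.asarray(weights, dtype=float)])
    A = np.zeros((m, n + m))
    b = np.zeros(m)
    for j, clause in enumerate(clauses):
        # LP constraint rewritten into A_ub x <= b_ub form:
        # z_c - sum_{i in S+} y_i + sum_{i in S-} y_i <= |S-|
        A[j, n + j] = 1.0
        for l in clause:
            if l > 0:
                A[j, l - 1] -= 1.0
            else:
                A[j, -l - 1] += 1.0
                b[j] += 1.0
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, 1)] * (n + m), method="highs")
    y = res.x[:n]
    return {i + 1: random.random() < y[i] for i in range(n)}, -res.fun

clauses = [[1], [-1, -2], [1, 2], [2, 3]]
weights = [15, 30, 10, 30]
tau, opt_lp = lp_round(clauses, weights, 3)
print(tau, opt_lp)   # here OPT_LP = OPT = 85
```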

Analysis of LP-Based Algorithm (1)

Lemma: If size(c) = k, then $E[W_c] \ge \beta_k w_c z_c$, where $\beta_k = 1 - (1 - \frac{1}{k})^k$.

Proof: Assume $c = (x_1 \lor \dots \lor x_k)$ w.l.o.g. Then

$\Pr[c \text{ satis.}] = 1 - \prod_{i=1}^{k} (1 - y_i) \ge 1 - \left( \frac{\sum_{i=1}^{k} (1 - y_i)}{k} \right)^{k} = 1 - \left( 1 - \frac{\sum_{i=1}^{k} y_i}{k} \right)^{k} \ge 1 - \left( 1 - \frac{z_c}{k} \right)^{k}$,

since $\frac{a_1 + \dots + a_k}{k} \ge \sqrt[k]{a_1 \cdots a_k}$ and $y_1 + \dots + y_k \ge z_c$ from the LP constraint. Finally, $g(z) = 1 - (1 - \frac{z}{k})^k$ is concave with $g(0) = 0$, so $g(z) \ge (1 - (1 - \frac{1}{k})^k) z = \beta_k z$ for $z \in [0, 1]$. So $\Pr[c \text{ satis.}] \ge \beta_k z_c$.

(Figure: g(z) lies above the line $\beta_k z$ on [0, 1].)

Analysis of LP-Based Algorithm (2)

$\beta_k$ is a decreasing function of k. If all clauses are of size at most k:

$E[W] = \sum_{c \in C} E[W_c] \ge \beta_k \sum_{c \in C} w_c z_c = \beta_k OPT_{LP} \ge \beta_k OPT$

Now, for $k \in \mathbb{Z}^+$, $\beta_k = 1 - (1 - \frac{1}{k})^k \ge 1 - \frac{1}{e}$, so the algorithm has approximation guarantee $(1 - \frac{1}{e})$.

Derandomization: The algorithm can be derandomized like the factor-1/2 algorithm: in step i, determine the conditional expectation with $x_i$ fixed to 0 and to 1, and choose the setting with the larger conditional expectation.

Combining the Algorithms

Algorithm: Let b equal 0 or 1 with probability $\frac{1}{2}$ each. If b = 0, run the first randomized algorithm; if b = 1, run the second randomized algorithm.

Lemma: $E[W_c] \ge \frac{3}{4} w_c z_c$.

Proof: Let k = size(c). We know:

$E[W_c \mid b = 0] = \alpha_k w_c \ge \alpha_k w_c z_c$
$E[W_c \mid b = 1] \ge \beta_k w_c z_c$

So $E[W_c] = \frac{E[W_c \mid b = 0] + E[W_c \mid b = 1]}{2} \ge \frac{\alpha_k + \beta_k}{2} w_c z_c$. Since $\alpha_k + \beta_k \ge \frac{3}{2}$ for $k \in \mathbb{Z}^+$, we have $E[W_c] \ge \frac{3}{4} w_c z_c$.

This leads to a 3/4-factor algorithm because:

$E[W] = \sum_{c \in C} E[W_c] \ge \frac{3}{4} \sum_{c \in C} w_c z_c = \frac{3}{4} OPT_{LP} \ge \frac{3}{4} OPT$
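
The combination itself is one coin flip; a sketch reusing flip_a_coin and lp_round from the earlier sketches:

```python
import random

def combined(clauses, weights, n):
    # b = 0: unbiased coin flips; b = 1: LP-based randomized rounding.
    if random.random() < 0.5:                 # b = 0
        return flip_a_coin(n)
    return lp_round(clauses, weights, n)[0]   # b = 1
```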

Derandomizing Everything

Algorithm: Run the first deterministic algorithm to get truth assignment $\tau_1$; run the second deterministic algorithm to get truth assignment $\tau_2$; output the better of the two assignments.

Analysis: The average of the weights of the clauses satisfied under $\tau_1$ and $\tau_2$ is $\ge \frac{3}{4} OPT$. If we choose the better of the two assignments, we must do at least as well. In other words: we have derandomized algorithm 1 and algorithm 2, which do at least as well as the randomized algorithms. The combined randomized algorithm ran one of the two at random; running both randomized algorithms and keeping the better result must do at least as well, and running both derandomized algorithms must also do at least as well.

Recap of Maximum Satisfiability

The second lesson was about: a simple randomized algorithm, derandomized by self-reducibility; a randomized algorithm based on LP-relaxation, likewise derandomized; and the combination of the two randomized algorithms, derandomized by running both derandomized algorithms and choosing the better solution.

Hints for exercises: There is a list of hints on the webpage along with the list of exercises. Use each hint to move on when you get stuck! The hints are there because some exercises require a good idea, and it is more important that you understand the material than that you come up with the good idea yourself. So look at the hints before you give up or spend too much time thinking about an exercise, but don't look unless you have to.