Asymptotic Polynomial-Time Approximation (APTAS) and Randomized Approximation Algorithms
1 Approximation Algorithms: Asymptotic Polynomial-Time Approximation (APTAS) and Randomized Approximation Algorithms. Jens Egeblad, November 29th, 2006.
2 Agenda
First lesson (Asymptotic Approximation): Bin-Packing (BPP) formulation, a 2-approximation, APTAS (Asymptotic PTAS).
Second lesson (Randomized Algorithms): Maximum Satisfiability (MAX-SAT) formulation, two randomized algorithms, self-reducibility, derandomization.
3 Bin-Packing Formulation
Bin-Packing: Given n items with sizes a_1, ..., a_n ∈ (0, 1], find a packing into unit-sized bins that minimizes the number of bins used.
NP-hard, of course!
Applications: cutting of textile, wood, metal, etc.
Generalizable to higher dimensions (1D-BPP, 2D-BPP, 3D-BPP, ...).
4 First-Fit Algorithm
First-Fit:
1. Pick an unpacked item a_i.
2. Run through the partially packed bins B_1, ..., B_k and put a_i into the first bin in which it fits.
3. If a_i does not fit in any bin, open a new bin B_{k+1}.
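The slides give no code, so here is a minimal Python sketch of the three steps above (the function name and list-of-bins representation are my own, not from the slides):

```python
def first_fit(sizes):
    """Pack items with sizes in (0, 1] into unit bins; returns a list of bins."""
    bins = []   # bins[k] holds the item sizes packed into bin B_{k+1}
    loads = []  # loads[k] is the total size currently in bins[k]
    for a in sizes:
        for k in range(len(bins)):
            if loads[k] + a <= 1.0:  # put a_i into the first bin where it fits
                bins[k].append(a)
                loads[k] += a
                break
        else:                        # a_i fits nowhere: open a new bin
            bins.append([a])
            loads.append(a)
    return bins

print(len(first_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2])))  # prints 3
```

Note that the analysis on the next slide only uses the fact that at most one bin is ever at most half full, which this greedy loop guarantees.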
5 First-Fit Analysis
Theorem: First-Fit is a 2-approximation algorithm.
Proof: If the algorithm uses m bins, then at least m − 1 of them must be more than half full (if two bins were at most half full, the contents of the later one would have fit into the earlier one). Therefore
Σ_{i=1}^n a_i > (m − 1)/2,
i.e. the total size of the items exceeds half the number of more-than-half-full bins. Since Σ_{i=1}^n a_i is a lower bound on OPT we get
(m − 1)/2 < Σ_{i=1}^n a_i ≤ OPT,
so m − 1 < 2·OPT and m ≤ 2·OPT.
6 Negative Result
Theorem: Assume P ≠ NP. Then there is no approximation algorithm for BPP with an approximation guarantee of 3/2 − ε for any ε > 0.
Proof: Reduce from the Set-Partition problem (SPP): given a set S = {a_1, ..., a_n} of numbers, determine whether S can be partitioned into two sets A and Ā = S \ A such that Σ_{x∈A} x = Σ_{x∈Ā} x.
Now assume a (3/2 − ε)-approximation algorithm A exists for BPP. Given an instance S = {a_1, ..., a_n} of SPP, construct a BPP instance with n items and bin size (1/2)·Σ_{i=1}^n a_i. If the SPP instance has a yes answer then OPT = 2, and A must use at most (3/2 − ε)·2 = 3 − 2ε < 3 bins, i.e. exactly 2. So A would decide SPP in polynomial time.
7 First-Fit Decreasing Bin-Packing Algorithm
First-Fit Decreasing (FFD):
1. Sort the items by size in non-increasing order.
2. Run First-Fit.
[Johnson 73] showed that for FFD, FFD(I) ≤ (11/9)·OPT(I) + 4.
As OPT(I) increases, the approximation guarantee approaches 11/9, because the additive 4 becomes insignificant compared to (11/9)·OPT(I).
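FFD is just a sort followed by First-Fit; a self-contained sketch (names are mine) that returns the number of bins used:

```python
def first_fit_decreasing(sizes):
    """FFD: sort sizes non-increasingly, then pack greedily into unit bins."""
    loads = []  # current total size in each open bin
    for a in sorted(sizes, reverse=True):  # step 1: non-increasing order
        for k in range(len(loads)):
            if loads[k] + a <= 1.0:        # step 2: First-Fit placement
                loads[k] += a
                break
        else:
            loads.append(a)                # open a new bin
    return len(loads)

print(first_fit_decreasing([0.8, 0.2, 0.6, 0.4]))  # prints 2
```

Sorting first helps because the large items, which constrain the packing most, are placed while many bins are still nearly empty.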
8 Asymptotic Approximation Guarantee
Given a minimization problem Π with instances I, let R_A(I) = A(I)/OPT(I).
Definition (approximation guarantee): R_A = inf{r ≥ 1 : R_A(I) ≤ r for all I}. So R_FFD = 5.
Definition (asymptotic approximation guarantee): R_A^∞ = inf{r ≥ 1 : ∃N > 0 such that R_A(I) ≤ r for all I with OPT(I) > N}. So R_FFD^∞ = 11/9.
(Figure: A(I)/OPT(I) plotted against OPT(I); the ratio stays below R_A and approaches R_A^∞.)
9 Approximation Schemes
For a problem Π with instances I:
FPTAS (Fully Polynomial-Time Approximation Scheme): for any ε > 0 there exists an algorithm A_ε such that A_ε(I) ≤ (1 + ε)·OPT(I). The running time of A_ε is polynomial in |I| and 1/ε.
PTAS (Polynomial-Time Approximation Scheme): for any ε > 0 there exists an algorithm A_ε such that A_ε(I) ≤ (1 + ε)·OPT(I). The running time of A_ε is polynomial in |I|.
APTAS (Asymptotic Polynomial-Time Approximation Scheme): for any ε > 0 there exist an algorithm A_ε and N_0 > 0 such that A_ε(I) ≤ (1 + ε)·OPT(I) for all I with OPT(I) > N_0. The running time of A_ε is polynomial in |I|.
10 Complexity Classes
P ⊆ FPTAS ⊆ PTAS ⊆ APTAS ⊆ APX ⊆ NP.
APX: problems that have a finite (constant) approximation factor.
11 Asymptotic PTAS for BPP (1)
Recall: APTAS means that for any ε′ > 0 there exist an algorithm A and N_0 > 0 such that A(I) ≤ (1 + ε′)·OPT(I) for OPT(I) > N_0, with running time polynomial in |I|.
Theorem: For any ε with 0 < ε ≤ 1/2 there is a polynomial-time algorithm A_ε which finds a packing using at most (1 + 2ε)·OPT(I) + 1 bins.
Proof: next slides.
The algorithms A_ε form an APTAS: given ε′ > 0, choose N_0 > 3/ε′ and ε = ε′/3. Then for OPT(I) > N_0:
A_ε(I) ≤ (1 + 2ε)·OPT(I) + 1 ≤ (1 + 2ε + 1/OPT(I))·OPT(I) ≤ (1 + 2ε + 1/N_0)·OPT(I) ≤ (1 + 2ε′/3 + ε′/3)·OPT(I) = (1 + ε′)·OPT(I).
12 Asymptotic PTAS for BPP (2)
Lemma 1: Given ε > 0 and an integer K > 0, consider the restricted BPP with items a_1, ..., a_n where every a_i ≥ ε and the number of distinct item sizes is at most K. There is a polynomial-time algorithm that solves this restricted BPP exactly.
Proof: M = ⌊1/ε⌋ is the maximum number of items in one bin. The number of ways to pack a single bin is then at most R := (M + K choose M), a constant. The number of feasible packings is at most P := (n + R choose R), which is polynomial in n (at most (n + R)^R / R!). Enumerate all packings and pick the best.
Note: for M = 5 and K = 5, (10 choose 5) = 10!/(5!·5!) = 252 ways to pack a single bin, so even for modest n the number of packings is enormous. (The algorithms have very high running times.)
13 APTAS for BPP (3)
Lemma 2: Given ε > 0, consider the restricted BPP with items a_1, ..., a_n where every a_i ≥ ε. There is a polynomial-time algorithm with approximation guarantee (1 + ε).
Proof: Sort a_1, ..., a_n by increasing size and partition them into K = ⌈1/ε²⌉ groups, each having at most Q = ⌊nε²⌋ items. Construct instance J by rounding each item size up to the largest size in its group; J has at most K distinct sizes, so by Lemma 1 we can find an optimal packing for J (which is also feasible for I). Construct instance J′ by rounding each item size down to the smallest size in its group; then OPT(J′) ≤ OPT(I). An optimal packing of J′ packs all of J except the Q largest items of J, so
OPT(J) ≤ OPT(J′) + Q ≤ OPT(I) + Q.
Since every a_i ≥ ε we have OPT(I) ≥ nε, so Q = ⌊nε²⌋ ≤ ε·OPT(I), giving OPT(J) ≤ (1 + ε)·OPT(I).
14 APTAS for BPP (4)
Theorem: For any ε with 0 < ε ≤ 1/2 there is a polynomial-time algorithm A_ε which finds a packing using at most (1 + 2ε)·OPT(I) + 1 bins.
Proof: Obtain I′ from I by discarding the items smaller than ε (so OPT(I′) ≤ OPT(I) = OPT). Obtain a packing of I′ using Lemma 2 with at most (1 + ε)·OPT(I′) bins. Pack the small items in First-Fit manner. If no additional bins are opened, we are done. Otherwise, let M be the total number of bins. At least M − 1 bins are more than (1 − ε) full, so we use the lower bound (M − 1)(1 − ε) < OPT and get
M < OPT/(1 − ε) + 1 ≤ (1 + 2ε)·OPT + 1 for 0 < ε ≤ 1/2,
since 1/(1 − ε) = (1 + 2ε)/((1 + 2ε)(1 − ε)) = (1 + 2ε)/(1 + ε − 2ε²) = (1 + 2ε)/(1 + ε(1 − 2ε)) ≤ 1 + 2ε for 0 < ε ≤ 1/2.
15 BPP Recap
The First-Fit algorithm is a 2-approximation.
Complexity classes: APTAS ⊇ PTAS ⊇ FPTAS.
APTAS for BPP: consider a restricted problem; enumerate all solutions of the restricted problem; show that unrestricted instances can be solved within a (1 + ε) approximation factor by rounding item sizes, and place the small items greedily.
16 2nd Lesson (Randomized Algorithms)
Maximum Satisfiability (problem formulation).
A randomized 1/2-factor algorithm.
Self-reducibility.
Derandomization by self-reducibility and conditional expectation.
IP formulation and an LP-relaxation based randomized algorithm with factor (1 − 1/e).
Combining the algorithms to get a 3/4-factor randomized algorithm.
17 Maximum Satisfiability (Formulation)
Conjunctive normal form: a formula f on boolean variables x_1, ..., x_n of the form f = ∧_{c∈C} c, where each clause c is a disjunction of literals (a boolean variable or its negation).
Example: f = (x_1 ∨ x_2) ∧ (¬x_3 ∨ x_4) ∧ (x_1 ∨ ¬x_2 ∨ x_4).
MAX-SAT: Given a set of clauses C on boolean variables x_1, ..., x_n ∈ {0, 1} and weights w_c ≥ 0 for each c ∈ C, maximize Σ_{c∈C} w_c·z_c, where z_c ∈ {0, 1} is 1 iff c is satisfied.
Definition: Let size(c) be the number of literals in clause c. MAXk-SAT is the restriction to instances where the number of literals per clause is at most k.
Note: MAX-SAT and MAXk-SAT are NP-hard for k ≥ 2.
18 A Randomized 1/2-Factor Algorithm
Algorithm Flip-A-Coin: set x_i to 1 with probability 1/2, independently for i = 1, ..., n. (Polynomial time, of course!)
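Flip-A-Coin fits in a few lines. A sketch, where the clause encoding is an assumption of mine (a clause is a list of integer literals: +i means x_i, −i means ¬x_i):

```python
import random

def flip_a_coin(num_vars, clauses, weights, seed=None):
    """Assign each variable True with probability 1/2; return (assignment, weight)."""
    rng = random.Random(seed)
    x = {i: rng.random() < 0.5 for i in range(1, num_vars + 1)}
    # literal l is satisfied iff x[|l|] matches the sign of l
    weight = sum(w for c, w in zip(clauses, weights)
                 if any(x[abs(l)] == (l > 0) for l in c))
    return x, weight
```

Usage: on the instance {(x_1), (¬x_1)} with weights 5 and 7, exactly one clause is satisfied whatever the coin flips give, so the returned weight is always 5 or 7.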
19 Analysis of the 1/2-Factor Algorithm
Define: W, a random variable, the weight of the satisfied clauses; W_c, a random variable, the weight contributed by clause c. Then W = Σ_{c∈C} W_c and E[W_c] = w_c·Pr[c is satisfied].
Lemma: If size(c) = k then E[W_c] = α_k·w_c, where α_k = 1 − (1/2)^k.
Proof: c is not satisfied iff all its literals are false; the probability of this is (1/2)^k.
Since Σ_{c∈C} w_c ≥ OPT and α_k ≥ 1/2 for k ≥ 1, we have
E[W] = Σ_{c∈C} E[W_c] ≥ (1/2)·Σ_{c∈C} w_c ≥ (1/2)·OPT.
Note: because Pr[c is satisfied] increases with size(c), the algorithm behaves best on instances with large clauses.
20 Self-Reducibility
Given an oracle for the decision version of an NP optimization problem we can:
Find the value of an optimal solution by binary search.
Find an optimal solution by self-reduction (not for all NP problems).
Self-reducibility works by repeatedly reducing the problem and using the oracle on the reduced problem to determine properties of an optimal solution.
21 Self-Reducibility of MAX-SAT
Given an instance I of MAX-SAT with boolean variables x_1, ..., x_n and an oracle for MAX-SAT:
Calculate OPT(I) via the oracle and binary search.
Create instance I_0, where x_1 is fixed to 0, and I_1, where x_1 is fixed to 1.
Use the oracle and binary search on I_0 to determine whether x_1 can be 0 in an optimal solution: OPT(I_0) = OPT(I) iff x_1 = 0 in some optimal solution.
If x_1 = 0 in an optimal solution, continue with x_2 and I_0; otherwise continue with x_2 and I_1.
22 Self-Reducibility Example
Let I be C = {(x_1), (¬x_1 ∨ x_2), (x_1 ∨ ¬x_2), (x_2 ∨ x_3)} with weights w_c given in a table.
The oracle gives OPT(I) = 85.
For I_0 (x_1 = 0) the oracle gives OPT(I_0) = 70 < 85, so set x_1 = 1.
For I_{1,0} (x_1 = 1, x_2 = 0) the oracle gives OPT(I_{1,0}) = 85, so set x_2 = 0.
For I_{1,0,0} (x_1 = 1, x_2 = 0, x_3 = 0) the assignment has value 55, so set x_3 = 1.
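The walk above can be sketched in Python. Since we have no real oracle, a brute-force evaluator stands in for it; the clause encoding (+i for x_i, −i for ¬x_i), function names, and the example weights in the usage note are all illustrative, not from the slides:

```python
from itertools import product

def oracle(num_vars, clauses, weights, fixed):
    """Optimal weight over all assignments consistent with `fixed` (var -> bool)."""
    free = [i for i in range(1, num_vars + 1) if i not in fixed]
    best = 0
    for bits in product([False, True], repeat=len(free)):
        x = dict(fixed)
        x.update(zip(free, bits))
        val = sum(w for c, w in zip(clauses, weights)
                  if any(x[abs(l)] == (l > 0) for l in c))
        best = max(best, val)
    return best

def self_reduce(num_vars, clauses, weights):
    """Recover an optimal assignment by fixing one variable per oracle round."""
    opt = oracle(num_vars, clauses, weights, {})
    fixed = {}
    for i in range(1, num_vars + 1):
        # keep x_i = 0 only if an optimal solution still exists under that choice
        if oracle(num_vars, clauses, weights, {**fixed, i: False}) == opt:
            fixed[i] = False
        else:
            fixed[i] = True
    return fixed
```

For example, on the clause set of this slide with hypothetical weights 3, 2, 2, 1, the recovered assignment attains the same value the oracle reports for the unrestricted instance.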
23 Self-Reducibility Tree
A tree T is a self-reducibility tree if its internal nodes correspond to reduced problems and the leaves of each subtree are the solutions to the problem rooted at that subtree.
MAX-SAT example: the root is the original problem; its two children fix x_1 = 0 and x_1 = 1, their children additionally fix x_2, and so on.
Each internal node at level i corresponds to a partial setting of the variables x_1, ..., x_i. Each leaf represents a complete truth assignment.
24 Derandomization (Self-Reducibility)
Let t be the self-reducibility tree of MAX-SAT, expanded so that each node is labeled with E[W | x_1 = a_1, ..., x_i = a_i], where a_1, ..., a_i is the partial truth assignment corresponding to the node.
Example: C = {(x_1), (¬x_1 ∨ x_2), (x_1 ∨ x_2), (x_1 ∨ x_2 ∨ x_3)} with weights w_1, w_2, w_3, w_4. At the root,
E[W] = (1/2)·w_1 + (3/4)·w_2 + (3/4)·w_3 + (7/8)·w_4.
In the node corresponding to I_{x_1=0} = {(false), (true), (x_2), (x_2 ∨ x_3)} we have
E[W | x_1 = 0] = w_2 + (1/2)·w_3 + (3/4)·w_4.
Lemma: The conditional expectation of any node in t can be computed in polynomial time.
Proof: Calculate the sum of the weights of the clauses already satisfied by the partial truth assignment at this node, then add the expected weight of the reduced formula.
25 Derandomization (Cond. Expectation)
Theorem: We can compute, in polynomial time, a path from the root to a leaf such that the conditional expectation of each node on this path is ≥ E[W].
Proof: In each node we have
E[W | x_1 = a_1, ..., x_i = a_i] = (1/2)·E[W | x_1 = a_1, ..., x_i = a_i, x_{i+1} = 0] + (1/2)·E[W | x_1 = a_1, ..., x_i = a_i, x_{i+1} = 1],
because both settings of x_{i+1} are equally likely. Therefore the child with the larger value has conditional expectation at least as large as its parent. By the previous lemma the conditional expectations can be computed in polynomial time, and the number of steps is n.
Deterministic algorithm: start at the root of t and repeatedly move to the child with the larger conditional expectation. This yields a deterministic factor-1/2 algorithm that runs in polynomial time, since evaluating each node takes polynomial time and the depth of the tree is n.
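The root-to-leaf walk is easy to implement. A self-contained sketch under my own clause encoding (+i for x_i, −i for ¬x_i; names are illustrative), following the lemma on the previous slide: a clause contributes its full weight if already satisfied, and otherwise weight times (1 − (1/2)^(number of still-free literals)):

```python
def cond_expectation(clauses, weights, fixed):
    """E[W | fixed] when every unfixed variable is a fair coin flip."""
    exp = 0.0
    for c, w in zip(clauses, weights):
        if any(abs(l) in fixed and fixed[abs(l)] == (l > 0) for l in c):
            exp += w  # clause already satisfied by the partial assignment
        else:
            free = sum(1 for l in c if abs(l) not in fixed)
            exp += w * (1 - 0.5 ** free)  # 0 when all literals are fixed false
    return exp

def derandomize(num_vars, clauses, weights):
    """Walk root to leaf, always taking the child with larger conditional expectation."""
    fixed = {}
    for i in range(1, num_vars + 1):
        e0 = cond_expectation(clauses, weights, {**fixed, i: False})
        e1 = cond_expectation(clauses, weights, {**fixed, i: True})
        fixed[i] = e1 >= e0
    return fixed
```

On the example of the previous slide with unit weights, the root label is 1/2 + 3/4 + 3/4 + 7/8 = 2.875, and the leaf the walk reaches satisfies weight at least that much.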
26 MAX-SAT Integer Program
For each clause c ∈ C, let S_c^+ be the set of non-negated variables and S_c^− the set of negated variables occurring in c.
IP:
maximize Σ_{c∈C} w_c·z_c
subject to Σ_{i∈S_c^+} y_i + Σ_{i∈S_c^−} (1 − y_i) ≥ z_c for c ∈ C,
z_c ∈ {0, 1} for c ∈ C,
y_i ∈ {0, 1} for i ∈ {1, ..., n}.
LP-relaxation:
maximize Σ_{c∈C} w_c·z_c
subject to Σ_{i∈S_c^+} y_i + Σ_{i∈S_c^−} (1 − y_i) ≥ z_c for c ∈ C,
0 ≤ z_c ≤ 1 for c ∈ C,
0 ≤ y_i ≤ 1 for i ∈ {1, ..., n}.
27 MAX-SAT LP-Based Algorithm
Algorithm (LP-relaxation based randomized rounding): solve the LP-relaxation to get a solution (y*, z*) of value OPT_LP; set x_i = 1 with probability y*_i. (Polynomial time, of course!)
28 Analysis of LP-Based Algorithm (1)
Lemma: If size(c) = k, then E[W_c] ≥ β_k·w_c·z*_c, where β_k = 1 − (1 − 1/k)^k.
Proof: Assume w.l.o.g. that c = (x_1 ∨ ... ∨ x_k). Then
Pr[c satisfied] = 1 − Π_{i=1}^k (1 − y*_i) ≥ 1 − ((Σ_{i=1}^k (1 − y*_i))/k)^k = 1 − (1 − (Σ_{i=1}^k y*_i)/k)^k ≥ 1 − (1 − z*_c/k)^k,
using the arithmetic-geometric mean inequality ((a_1 + ... + a_k)/k ≥ (a_1·...·a_k)^{1/k}) and y*_1 + ... + y*_k ≥ z*_c from the LP constraint. Finally, g(z) = 1 − (1 − z/k)^k is concave, so g(z) ≥ (1 − (1 − 1/k)^k)·z = β_k·z for z ∈ [0, 1]. So Pr[c satisfied] ≥ β_k·z*_c.
(Figure: g(z) lies above the line β_k·z on [0, 1].)
29 Analysis of LP-Based Algorithm (2)
β_k is a decreasing function of k. If all clauses have size at most k:
E[W] = Σ_{c∈C} E[W_c] ≥ β_k·Σ_{c∈C} w_c·z*_c = β_k·OPT_LP ≥ β_k·OPT.
Now, for all k ∈ Z^+, β_k = 1 − (1 − 1/k)^k ≥ 1 − 1/e, so the algorithm has approximation guarantee (1 − 1/e).
Derandomization: the algorithm can be derandomized like the factor-1/2 algorithm; in step i, determine the conditional expectation with x_i fixed to 0 and to 1, and choose the setting with the larger conditional expectation.
30 Combining the Algorithms
Algorithm: let b equal 0 or 1 with probability 1/2 each. If b = 0, run the first randomized algorithm; if b = 1, run the second (LP-based) one.
Lemma: E[W_c] ≥ (3/4)·w_c·z*_c.
Proof: Let k = size(c). We know E[W_c | b = 0] = α_k·w_c ≥ α_k·w_c·z*_c and E[W_c | b = 1] ≥ β_k·w_c·z*_c. So
E[W_c] = (E[W_c | b = 0] + E[W_c | b = 1])/2 ≥ ((α_k + β_k)/2)·w_c·z*_c.
Since α_k + β_k ≥ 3/2 for all k ∈ Z^+, we have E[W_c] ≥ (3/4)·w_c·z*_c.
This gives a 3/4-factor algorithm, because
E[W] = Σ_{c∈C} E[W_c] ≥ (3/4)·Σ_{c∈C} w_c·z*_c = (3/4)·OPT_LP ≥ (3/4)·OPT.
31 Derandomizing Everything
Algorithm: run the first deterministic algorithm to get truth assignment τ_1; run the second deterministic algorithm to get truth assignment τ_2; output the better of the two assignments.
Analysis: the average of the weights of the clauses satisfied under τ_1 and τ_2 is ≥ (3/4)·OPT, so choosing the better of the two assignments does at least as well. In other words: we have derandomized algorithm 1 and algorithm 2, which do at least as well as the randomized algorithms. The combined randomized algorithm ran one of the two at random; running both derandomized algorithms and keeping the better result must be at least as good.
32 Recap of Maximum Satisfiability
The second lesson was about: a simple randomized algorithm, derandomized by self-reducibility; a randomized algorithm based on LP-relaxation, derandomized the same way; and a combination of the two randomized algorithms, derandomized by running both derandomized algorithms and choosing the better solution.
Hints for exercises: there is a list of hints on the webpage along with the list of exercises. Use each hint to move on when you get stuck! The hints are there because some exercises require a good idea, and it is more important that you understand the material than that you come up with the good idea yourself. So look at the hints before you give up or spend too much time on an exercise, but don't look unless you have to.
More informationPropositional Logic: Models and Proofs
Propositional Logic: Models and Proofs C. R. Ramakrishnan CSE 505 1 Syntax 2 Model Theory 3 Proof Theory and Resolution Compiled at 11:51 on 2016/11/02 Computing with Logic Propositional Logic CSE 505
More informationGeometric Steiner Trees
Geometric Steiner Trees From the book: Optimal Interconnection Trees in the Plane By Marcus Brazil and Martin Zachariasen Part 3: Computational Complexity and the Steiner Tree Problem Marcus Brazil 2015
More informationKeywords. Approximation Complexity Theory: historical remarks. What s PCP?
Keywords The following terms should be well-known to you. 1. P, NP, NP-hard, NP-complete. 2. (polynomial-time) reduction. 3. Cook-Levin theorem. 4. NPO problems. Instances. Solutions. For a long time it
More informationNP Completeness and Approximation Algorithms
Winter School on Optimization Techniques December 15-20, 2016 Organized by ACMU, ISI and IEEE CEDA NP Completeness and Approximation Algorithms Susmita Sur-Kolay Advanced Computing and Microelectronic
More informationLecture 15 - NP Completeness 1
CME 305: Discrete Mathematics and Algorithms Instructor: Professor Aaron Sidford (sidford@stanford.edu) February 29, 2018 Lecture 15 - NP Completeness 1 In the last lecture we discussed how to provide
More informationRandomized Sorting Algorithms Quick sort can be converted to a randomized algorithm by picking the pivot element randomly. In this case we can show th
CSE 3500 Algorithms and Complexity Fall 2016 Lecture 10: September 29, 2016 Quick sort: Average Run Time In the last lecture we started analyzing the expected run time of quick sort. Let X = k 1, k 2,...,
More informationBayesian Networks and Markov Random Fields
Bayesian Networks and Markov Random Fields 1 Bayesian Networks We will use capital letters for random variables and lower case letters for values of those variables. A Bayesian network is a triple V, G,
More informationCSE 421 Dynamic Programming
CSE Dynamic Programming Yin Tat Lee Weighted Interval Scheduling Interval Scheduling Job j starts at s(j) and finishes at f j and has weight w j Two jobs compatible if they don t overlap. Goal: find maximum
More informationa 1 a 2 a 3 a 4 v i c i c(a 1, a 3 ) = 3
AM 221: Advanced Optimization Spring 2016 Prof. Yaron Singer Lecture 17 March 30th 1 Overview In the previous lecture, we saw examples of combinatorial problems: the Maximal Matching problem and the Minimum
More informationAlgorithms and Theory of Computation. Lecture 22: NP-Completeness (2)
Algorithms and Theory of Computation Lecture 22: NP-Completeness (2) Xiaohui Bei MAS 714 November 8, 2018 Nanyang Technological University MAS 714 November 8, 2018 1 / 20 Set Cover Set Cover Input: a set
More informationApproximation algorithms based on LP relaxation
CSE 594: Combinatorial and Graph Algorithms Lecturer: Hung Q. Ngo SUNY at Buffalo, Spring 2005 Last update: March 10, 2005 Approximation algorithms based on LP relaxation There are two fundamental approximation
More informationCS 151 Complexity Theory Spring Solution Set 5
CS 151 Complexity Theory Spring 2017 Solution Set 5 Posted: May 17 Chris Umans 1. We are given a Boolean circuit C on n variables x 1, x 2,..., x n with m, and gates. Our 3-CNF formula will have m auxiliary
More informationComputability and Complexity Theory
Discrete Math for Bioinformatics WS 09/10:, by A Bockmayr/K Reinert, January 27, 2010, 18:39 9001 Computability and Complexity Theory Computability and complexity Computability theory What problems can
More informationLecture 18: More NP-Complete Problems
6.045 Lecture 18: More NP-Complete Problems 1 The Clique Problem a d f c b e g Given a graph G and positive k, does G contain a complete subgraph on k nodes? CLIQUE = { (G,k) G is an undirected graph with
More informationQuantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 11: From random walk to quantum walk
Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 11: From random walk to quantum walk We now turn to a second major topic in quantum algorithms, the concept
More informationHardness of Approximation of Graph Partitioning into Balanced Complete Bipartite Subgraphs
Hardness of Approximation of Graph Partitioning into Balanced Complete Bipartite Subgraphs Hideaki OTSUKI Abstract For a graph G, a biclique edge partition S BP (G) is a collection of complete bipartite
More informationEasy Problems vs. Hard Problems. CSE 421 Introduction to Algorithms Winter Is P a good definition of efficient? The class P
Easy Problems vs. Hard Problems CSE 421 Introduction to Algorithms Winter 2000 NP-Completeness (Chapter 11) Easy - problems whose worst case running time is bounded by some polynomial in the size of the
More informationCOSC 341: Lecture 25 Coping with NP-hardness (2)
1 Introduction Figure 1: Famous cartoon by Garey and Johnson, 1979 We have seen the definition of a constant factor approximation algorithm. The following is something even better. 2 Approximation Schemes
More informationNP-problems continued
NP-problems continued Page 1 Since SAT and INDEPENDENT SET can be reduced to each other we might think that there would be some similarities between the two problems. In fact, there is one such similarity.
More informationLec. 2: Approximation Algorithms for NP-hard Problems (Part II)
Limits of Approximation Algorithms 28 Jan, 2010 (TIFR) Lec. 2: Approximation Algorithms for NP-hard Problems (Part II) Lecturer: Prahladh Harsha Scribe: S. Ajesh Babu We will continue the survey of approximation
More informationNP-Complete Problems. More reductions
NP-Complete Problems More reductions Definitions P: problems that can be solved in polynomial time (typically in n, size of input) on a deterministic Turing machine Any normal computer simulates a DTM
More informationCHAPTER 3 FUNDAMENTALS OF COMPUTATIONAL COMPLEXITY. E. Amaldi Foundations of Operations Research Politecnico di Milano 1
CHAPTER 3 FUNDAMENTALS OF COMPUTATIONAL COMPLEXITY E. Amaldi Foundations of Operations Research Politecnico di Milano 1 Goal: Evaluate the computational requirements (this course s focus: time) to solve
More informationDynamic Programming: Interval Scheduling and Knapsack
Dynamic Programming: Interval Scheduling and Knapsack . Weighted Interval Scheduling Weighted Interval Scheduling Weighted interval scheduling problem. Job j starts at s j, finishes at f j, and has weight
More informationComputational Complexity. IE 496 Lecture 6. Dr. Ted Ralphs
Computational Complexity IE 496 Lecture 6 Dr. Ted Ralphs IE496 Lecture 6 1 Reading for This Lecture N&W Sections I.5.1 and I.5.2 Wolsey Chapter 6 Kozen Lectures 21-25 IE496 Lecture 6 2 Introduction to
More informationGraph Theory and Optimization Computational Complexity (in brief)
Graph Theory and Optimization Computational Complexity (in brief) Nicolas Nisse Inria, France Univ. Nice Sophia Antipolis, CNRS, I3S, UMR 7271, Sophia Antipolis, France September 2015 N. Nisse Graph Theory
More informationComputational Complexity and Intractability: An Introduction to the Theory of NP. Chapter 9
1 Computational Complexity and Intractability: An Introduction to the Theory of NP Chapter 9 2 Objectives Classify problems as tractable or intractable Define decision problems Define the class P Define
More informationNP-Completeness. f(n) \ n n sec sec sec. n sec 24.3 sec 5.2 mins. 2 n sec 17.9 mins 35.
NP-Completeness Reference: Computers and Intractability: A Guide to the Theory of NP-Completeness by Garey and Johnson, W.H. Freeman and Company, 1979. NP-Completeness 1 General Problems, Input Size and
More informationSummer School on Introduction to Algorithms and Optimization Techniques July 4-12, 2017 Organized by ACMU, ISI and IEEE CEDA.
Summer School on Introduction to Algorithms and Optimization Techniques July 4-12, 2017 Organized by ACMU, ISI and IEEE CEDA NP Completeness Susmita Sur-Kolay Advanced Computing and Microelectronics Unit
More informationPropositional Logic. Methods & Tools for Software Engineering (MTSE) Fall Prof. Arie Gurfinkel
Propositional Logic Methods & Tools for Software Engineering (MTSE) Fall 2017 Prof. Arie Gurfinkel References Chpater 1 of Logic for Computer Scientists http://www.springerlink.com/content/978-0-8176-4762-9/
More information20.1 2SAT. CS125 Lecture 20 Fall 2016
CS125 Lecture 20 Fall 2016 20.1 2SAT We show yet another possible way to solve the 2SAT problem. Recall that the input to 2SAT is a logical expression that is the conunction (AND) of a set of clauses,
More information