Intractable Problems Part Two


Announcements Problem Set Five has been graded and will be returned at the end of lecture. Extra office hours today after lecture from 4PM-6PM in Clark S250. Reminder: the final project goes out on Monday; we recommend not using a late day on Problem Set Six unless necessary.

Please evaluate this course on Axess. Your feedback really makes a difference.

Outline for Today 0/1 Knapsack An NP-hard problem that isn't as hard as it might seem. Fixed-Parameter Tractability What's the real source of hardness in an NP-hard problem? Finding Long Paths A use case for fixed-parameter tractability.

The 0/1 Knapsack Problem

The 0/1 Knapsack Problem [Figure: five items with values $500, $200, $900, $1,500, and $400 and weights 45g, 20g, 53g, 25g, and 35g, next to a knapsack with a weight limit.]

The 0/1 Knapsack Problem You are given a list of n items with weights w₁, …, wₙ and values v₁, …, vₙ. You have a bag (knapsack) that can carry W total weight. Weights are assumed to be integers. Question: What is the maximum value of items that you can fit into the knapsack? This problem is known to be NP-hard.

A Naïve Solution One option: Try all possible subsets of the items and find the feasible set with the largest total value. How many subsets are there? Answer: 2^n. Subsets can be generated in O(n) time each. Total runtime is O(2^n · n). Slightly better than TSP, but still not particularly good!
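The brute-force search can be written directly. This sketch enumerates every subset; the item values and weights are paired up from the earlier figure, and the 100g capacity is an assumed example value (the slides do not specify one):

```python
from itertools import combinations

def knapsack_brute_force(weights, values, W):
    """Try every subset of items; keep the best feasible total value.
    2^n subsets, O(n) work per subset: O(2^n * n) total."""
    n = len(weights)
    best = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(weights[i] for i in subset) <= W:
                best = max(best, sum(values[i] for i in subset))
    return best

# Items paired from the figure as (weight, value); capacity of 100g
# is a hypothetical choice for illustration.
weights = [45, 20, 53, 25, 35]
values = [500, 200, 900, 1500, 400]
```

Even with five items this checks 32 subsets; at forty items it would check over a trillion.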

A Greedy Solution Sort items by their unit value vₖ / wₖ. For each item, in descending order of unit value: if that item fits in the knapsack, add it to the knapsack. Does this algorithm always return an optimal solution? Unfortunately, no; in fact, this algorithm can be arbitrarily bad!
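A minimal sketch of the greedy rule, together with a two-item instance (hypothetical numbers) on which it fails: a tiny high-density item blocks a large item worth far more, and scaling up the large item's value makes the gap arbitrarily wide.

```python
def knapsack_greedy(weights, values, W):
    """Take items in decreasing value-per-weight order whenever they fit."""
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total_value = total_weight = 0
    for i in order:
        if total_weight + weights[i] <= W:
            total_weight += weights[i]
            total_value += values[i]
    return total_value

# Capacity 10: a 1g item of density 2.0 and a 10g item of density 1.0.
# Greedy takes the 1g item first, the 10g item no longer fits, and the
# result is 2 where the optimum is 10.
```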

A Recurrence Relation Let OPT(k, X) denote the maximum value that can be made from the first k items without exceeding weight X. Note: OPT(n, W) is the overall answer. Claim: OPT(k, X) satisfies this recurrence:

OPT(k, X) = 0                                            if k = 0
OPT(k, X) = OPT(k - 1, X)                                if wₖ > X
OPT(k, X) = max{ OPT(k - 1, X), vₖ + OPT(k - 1, X - wₖ) }  otherwise

The recurrence translates into a bottom-up table-filling algorithm:

Let DP be a table of size (n + 1) × (W + 1).
For X = 0 to W: set DP[0][X] = 0.
For k = 1 to n:
    For X = 0 to W:
        If wₖ > X, set DP[k][X] = DP[k - 1][X].
        Else, set DP[k][X] = max{ DP[k - 1][X], vₖ + DP[k - 1][X - wₖ] }.
Return DP[n][W].
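The table-filling algorithm above translates almost line for line into Python; this sketch uses 0-based lists, so item k corresponds to weights[k-1] and values[k-1]:

```python
def knapsack_dp(weights, values, W):
    """Bottom-up 0/1 knapsack: DP[k][X] is the best value achievable
    using the first k items with weight limit X. O(nW) time and space."""
    n = len(weights)
    DP = [[0] * (W + 1) for _ in range(n + 1)]  # row 0 is all zeros
    for k in range(1, n + 1):
        for X in range(W + 1):
            if weights[k - 1] > X:   # item k cannot fit within limit X
                DP[k][X] = DP[k - 1][X]
            else:                    # either skip item k, or take it
                DP[k][X] = max(DP[k - 1][X],
                               values[k - 1] + DP[k - 1][X - weights[k - 1]])
    return DP[n][W]
```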

Um... Wait... The runtime of this algorithm is O(nW), and its space complexity is O(nW). This is polynomial in n and W. This problem is NP-hard. Did we just prove P = NP?

A Note About Input Sizes A polynomial-time algorithm is one that runs in time polynomial in the total number of bits required to write out the input to the problem. How many bits are required to write out the value W? Answer: O(log W). Therefore, O(nW) is exponential in the number of bits required to write out the input. Example: Appending one more bit to the representation of W roughly doubles W's numeric value, which doubles the runtime. This algorithm is called a pseudopolynomial-time algorithm, since its runtime is polynomial in the numeric value of the input, not in the number of bits of the input.

That Said... The runtime of O(nW) is better than our old runtime of O(2^n · n) assuming that W = o(2^n). (That's little-o, not big-O.) In fact, for any fixed W, this algorithm runs in linear time! Although there are exponentially many subsets to test, we can get away with just linear work if W is fixed!

Parameterized Complexity Parameterized complexity is a branch of complexity theory that studies the hardness of problems with respect to different parameters of the input. Often, NP-hard problems are not entirely infeasible as long as some parameter of the problem is fixed. In our case, the runtime O(nW) has two parameters: the number of items (n) and the weight limit (W).

Fixed-Parameter Tractability Suppose that the input to a problem P can be characterized by two parameters n and k. P is called fixed-parameter tractable iff there is some algorithm that solves P in time O(f(k) · p(n)), where f(k) is an arbitrary function and p(n) is a polynomial in n. Intuitively, for any fixed k, the algorithm runs in time polynomial in n, and that polynomial is the same regardless of the choice of k.

Example: Finding Long Paths

The Long Path Problem Given a graph G = (V, E) and a number k, we want to determine whether a simple path of length k exists in G. This is known to be NP-hard by a reduction from finding Hamiltonian paths: a graph has a Hamiltonian path iff it has a simple path of length n - 1. It has applications in biology, such as finding protein signaling cascades.

A Naïve Approach To find all simple paths of length k, enumerate all (k + 1)-permutations of nodes in V and check whether each one is a path. How many such permutations are there? Answer: n! / (n - k - 1)! Time to process each is O(k) when using an adjacency matrix. Total runtime is O(k · n! / (n - k - 1)!). Decent for small k, unbelievably slow for larger k.

A Better Approach We can use a randomized technique called color-coding to speed this up. Suppose every node in the graph is colored one of k + 1 different colors. A colorful path is a simple path of length k where each node has a different color. Idea: Show how to find colorful paths efficiently, then build a randomized algorithm for finding long paths that uses the colorful path finder as a subroutine.

Finding Colorful Paths: Seem Familiar?

Finding Colorful Paths Using a dynamic programming approach similar to the one for TSP, we can find all colorful paths originating at a node s in time O(2^k · n^2). We can find colorful paths between any pair of nodes in time O(2^k · n^3) by iterating this process over all possible start nodes. This is fixed-parameter tractable!
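One way to realize the colorful-path search is a layered dynamic program over (endpoint, set-of-colors-used) states; this is a sketch of the idea rather than the exact DP from lecture. Because every extension must introduce a fresh color, a colorful path is automatically simple:

```python
def has_colorful_path(adj, colors, k):
    """Does the graph contain a path on k + 1 nodes whose colors are all
    distinct?  adj is an adjacency list; colors[v] is an int in 0..k.
    There are O(2^(k+1) * n) states, giving the 2^k-style runtime."""
    # ends[mask] = set of nodes at which some path using exactly the
    # color set `mask` can end; start with all single-node paths
    ends = {}
    for v in range(len(adj)):
        ends.setdefault(1 << colors[v], set()).add(v)
    for _ in range(k):  # grow every path by one edge, k times
        grown = {}
        for mask, vs in ends.items():
            for v in vs:
                for u in adj[v]:
                    if not (mask >> colors[u]) & 1:  # u's color is new
                        grown.setdefault(mask | (1 << colors[u]), set()).add(u)
        ends = grown
    # each surviving path has k + 1 nodes with k + 1 distinct colors
    return bool(ends)
```

Each of the k rounds adds exactly one new color bit to every mask, so any state surviving all k rounds is a colorful path of length k.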

Random Colorings Suppose you want to find a simple path of length k in a graph. Randomly color all nodes in the graph with one of (k + 1) different colors. If P is a simple path of length k in G, what is the probability that it is a colorful path? Answer: (k + 1)! / (k + 1)^(k+1)
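This probability is small but only exponentially small, which is what makes repetition affordable. A quick check of the formula (the k = 9 evaluation below is just an illustrative choice):

```python
from math import factorial, exp

def colorful_probability(k):
    """Probability that a fixed (k + 1)-node path becomes colorful when
    each node independently gets one of k + 1 uniformly random colors:
    (k + 1)! favorable colorings out of (k + 1)^(k + 1) total."""
    return factorial(k + 1) / (k + 1) ** (k + 1)
```

For k = 1 this gives 2!/2^2 = 0.5, and for any k it stays at least e^-(k+1), so roughly e^(k+1) repetitions suffice to hit a colorful coloring.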

Stirling's Approximation Stirling's approximation states that n! ≈ √(2πn) · (n/e)^n. Therefore, (k + 1)! / (k + 1)^(k+1) ≥ 1 / e^(k+1). If we randomly color the nodes in G e^(k+1) times, the probability that a given simple path of length k never becomes colorful is at most 1/e. Doing e^(k+1) · ln n random colorings means we find a simple path of length k (if one exists) with high probability. Total runtime: O(e^k · 2^k · n^3 log n) = O((2e)^k n^3 log n). Better than the naïve solution in many cases!

Why All This Matters Last lecture: Brute-force search is not necessarily optimal for NP-hard problems. Today: Can often factor out the complexity into a tractable part and intractable part that depend on different parameters. Plus, we got to see DP combined with randomized algorithms!

Next Time Approximation Algorithms FPTAS's and Other Acronyms