CS 241: Analysis of Algorithms
Professor Eric Aaron
Lecture: T Th 9:00am
Lecture Meeting Location: OLB 205

Business / Grading updates:
- HW5 back today
- HW7 due Dec. 10
- Reading: Ch. 22.1-22.3, Ch. 25.1-2, Ch. 34.??
Shortest Path Problems

Kinds of graph problems based on finding shortest paths (by convention, presume weighted, directed graphs):
- Single-source shortest paths
  - Various algorithms for cases of it (Bellman-Ford, Dijkstra)
- Single-destination shortest paths
  - If we have a single-source shortest paths algorithm, how could we solve this?
- Single-pair shortest path
  - How does this relate to the single-source variant?
- All-pairs shortest paths
  - We'll talk more about this soon

(Note: To represent a (shortest) path in solving such a problem, each vertex is presumed to have a predecessor field, which stores its predecessor on the path being considered.)

Properties of Shortest Paths

- Optimal substructure of shortest paths
  - Is each sub-path of a shortest path itself a shortest path? What's the argument for / counter-argument to that?
- Can a shortest path in a weighted graph have a cycle?
  - (Be sure to consider graphs with negative edges, which could have negative-weight cycles, as well as graphs with positive-weight cycles!)
- Consider a special case: an unweighted graph (or, equivalently, a graph in which every edge has unit weight)
  - What algorithm could solve single-source shortest paths problems for such graphs?
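For the special case above, breadth-first search answers the question: in an unweighted (or unit-weight) graph, the first time BFS reaches a vertex is along a shortest path. A minimal sketch, assuming an adjacency-list representation as a dict of neighbor lists (a choice made here for illustration):

```python
from collections import deque

def bfs_shortest_paths(adj, source):
    """Single-source shortest paths in an unweighted graph via BFS.

    adj: dict mapping each vertex to a list of its neighbors (an assumed
    adjacency-list representation). Returns (dist, pred), where pred plays
    the role of the predecessor field described above.
    """
    dist = {source: 0}
    pred = {source: None}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:          # first visit = shortest distance
                dist[v] = dist[u] + 1
                pred[v] = u
                queue.append(v)
    return dist, pred
```

This runs in O(V + E) time, since each vertex is enqueued at most once and each edge examined at most once.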
All-Pairs Shortest Paths

The All-Pairs Shortest Paths Problem: Given weighted graph G = (V, E) (with no negative-weight cycles), find the shortest path from u to v for every u, v ∈ V
- Solutions can be based on dynamic programming and an adjacency matrix representation of G
- Recall: Adjacency matrix W contains the weight of each edge in E
  - By convention, the diagonal of W is all 0s
- How might we break this down into sub-problems for a recursive solution?

All-Pairs Shortest Paths: A Recursive Solution

Solve the all-pairs shortest path problem in terms of the intermediate vertices that can appear on any shortest path
- An intermediate vertex of a simple path p = <v_1, v_2, ..., v_z> is any vertex on p other than v_1 or v_z
- For graph G, call the vertices V = {1, ..., n}, and consider subsets V_k = {1, ..., k} of V
- Then, for any two vertices i, j in V, consider all paths from i to j with intermediate vertices drawn only from V_k
  - In particular, consider a shortest path p from i to j with intermediate vertices in V_k
- What's the relationship between p and the set of shortest paths from i to j with intermediate vertices in V_{k-1}?
- Also, is p a simple path? How do we know, one way or another?
All-Pairs Shortest Paths: The Same Recursive Solution

We're still considering shortest path p from i to j with intermediate vertices in V_k
- What's the relationship between p and the set of shortest paths from i to j with intermediate vertices in V_{k-1}?
- It depends on whether or not vertex k is an intermediate vertex on path p
  - If not, then p is also a shortest path (i to j) with intermediate vertices in V_{k-1}
  - If so, then p can be broken down into sub-paths that are shortest paths with intermediate vertices in V_{k-1}: one sub-path is from i to k, the other is from k to j
- How do we know we can decompose p that way, i.e., that both sub-paths are shortest paths, using only vertices numbered up to k-1?
- Given this, how could we recursively define the shortest path lengths between all pairs of vertices?
- By the way, which one's Warshall?

Floyd-Warshall Algorithm: Bottom-up All-Pairs Shortest Paths

Floyd-Warshall algorithm for all-pairs shortest paths: the bottom-up method based on this decomposition
- Computes matrices D^(k) = (d_ij^(k)), where each d_ij^(k) is the shortest path value from i to j using only intermediate vertices numbered up to k
- Note: This computes shortest path values, not the paths. See pages 695-697 about computing the paths themselves.
- What does this algorithm return? (What makes that a useful return value?)
- What is the running time of this algorithm?
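The bottom-up method above can be sketched directly in code. This is a minimal version that, as the slide notes, computes shortest-path values only (no predecessor matrix); it assumes 0-based vertex indices and uses float('inf') for missing edges, which are representation choices made here, not fixed by the lecture:

```python
def floyd_warshall(W):
    """Bottom-up all-pairs shortest-path values from weight matrix W.

    W is an n x n list of lists: W[i][j] is the weight of edge (i -> j),
    0 on the diagonal, and float('inf') where no edge exists.
    Returns D^(n): D[i][j] is the shortest-path value from i to j.
    """
    n = len(W)
    D = [row[:] for row in W]  # D^(0) is W itself
    for k in range(n):         # allow intermediate vertices up to k
        for i in range(n):
            for j in range(n):
                # Either k is not intermediate on the best i -> j path,
                # or that path splits into shortest i -> k and k -> j
                # sub-paths with intermediates in V_{k-1}.
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D
```

The triply nested loop makes the running time Θ(n^3), which answers the slide's last question; returning the full matrix D^(n) is useful because it answers every single-pair query at once.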
A Floyd-Warshall Example

Computes matrices D^(k) = (d_ij^(k)), where each d_ij^(k) is the shortest path value from i to j using only intermediate vertices numbered up to k
- What D matrices does it compute for this example graph?

And Now For Something Complexly Different

It's... "How hard could it be?" -- Jeremy Clarkson, Top Gear (UK)
We've Got Problems: Graph Problems

Consider the hamiltonian cycle problem:
- A hamiltonian cycle of a connected, directed graph G = (V, E) is a simple cycle that contains each vertex in V (though perhaps not every edge in E)
  - In some sources, "Hamiltonian" is capitalized, but not in CLRS
- The hamiltonian cycle problem: Given connected digraph G, does G contain a hamiltonian cycle?
- Example: Is there a hamiltonian cycle in this graph?

Consider the Euler tour problem:
- An Euler tour of a connected, directed graph is a cycle that uses each edge exactly once (though it can visit vertices multiple times)
- The Euler tour problem: Given connected digraph G, does G contain an Euler tour?

Note that these are decision problems, each deciding a yes/no question

How long (what complexity) would it take to:
- solve each of these problems?
- verify that a possible solution was correct, if we were given one?
And is there a meaningful difference in tractability between the problems?
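The verification question has a concrete answer for the hamiltonian cycle problem: given a candidate cycle, checking it takes polynomial time even though finding one appears hard. A sketch, assuming the digraph is given as a vertex list and a set of directed edge pairs (representation choices made here for illustration):

```python
def verify_hamiltonian_cycle(vertices, edges, certificate):
    """Check in polynomial time whether a certificate (a sequence of
    vertices, listed in cycle order without repeating the start) is a
    hamiltonian cycle of the digraph (vertices, edges).

    edges is assumed to be a set of directed pairs (u, v).
    """
    # A hamiltonian cycle is simple and contains each vertex exactly once.
    if sorted(certificate) != sorted(vertices):
        return False
    # Every consecutive pair, wrapping around to close the cycle,
    # must be an edge of the graph.
    n = len(certificate)
    return all((certificate[i], certificate[(i + 1) % n]) in edges
               for i in range(n))
```

Both checks are linear scans over the certificate, so verification is easily polynomial; no comparably fast method is known for deciding whether such a cycle exists at all.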
Tractability and Complexity

Typically, tractability is framed in terms of decision problems, rather than the related optimization problems

Tractability:
- It is generally accepted that a problem is tractable (solvable in practice, not just in theory) if there is a polynomial-time algorithm for it, i.e., in class O(n^k) for some constant k
- If a problem requires an exponential-time solution (or worse!), it is considered intractable, without an efficient solution

Goal: Classify problems by how efficiently they can be solved
- (Relatively) fine-detail distinctions: Big-Oh or Theta classes
- Tractability distinctions based in larger classes:
  - P: problems solvable in polynomial time (hence the name P)
  - NP: problems with solutions verifiable in polynomial time
  - PSPACE: problems solvable in polynomial space (no restriction on time)
  - Etc.

The Class NP: A Quick Introduction

NP: the class of problems solvable in nondeterministic polynomial time (hence the name NP)
- Somewhat loosely, that means that a problem is in NP if it can be verified in polynomial time
- Could think of it as: If we were given a certificate of a solution (essentially, a potential solution), we could check it for correctness (is it actually a solution?) in time polynomial in the input size
- Could think of it as: If given a correct certificate, we could solve the problem in polynomial time
- Note that if a solution to a problem can be found in polynomial time, that problem can be verified in polynomial time
- Recall: This is about decision problems, not (related) optimization problems
- What does this say about the relationship between P and NP?

(For more about nondeterminism in computation, see CMPU-240)
No joke: This is an exceptionally important slide.

P, NP, and NP-Completeness

We know P ⊆ NP, because if a solution can be found in polynomial time, one can be checked in polynomial time
- Is NP ⊆ P? Good question. (One of the best around, actually.)
  - Extra credit exercise: Prove or disprove NP ⊆ P. (You would get an A+ for this course. Oh, and also $1,000,000, at least. Really.)
- It is generally believed that P ≠ NP: problems in P are tractable, and NP problems (not in P) are thought to be intractable
- Thus, it's important to determine if a problem is in NP, or at least as hard as a problem that is in NP (and not known to be in P)
- Complexity class NPC: the class of NP-complete problems
  - A problem is NP-complete if (a) it is in NP; and (b) it is at least as hard as every problem in NP
  - So, if one NP-complete problem is tractable, all problems in NP are tractable!
  - NPC problems are the hardest problems in NP; thus, they're presumed intractable

We've Got Problems: Decision Problems

Problems such as shortest paths, etc. are optimization problems
- When trying to show a problem is NP-complete, we work with related decision problems, whose answers are only yes or no (1 or 0, respectively)
- E.g., instead of "Given graph G and vertices u, v, what is the shortest path from u to v?" we consider "Given graph G and vertices u, v, is there a path from u to v of length at most k?"
- Note: the decision problem is not harder than the optimization problem (if we can solve the optimization, we can solve the decision)
- Thus, if we show a decision problem is very difficult (i.e., in NPC), we've shown the optimization problem is, also
- This idea of proving relative difficulty, of showing one problem is at least as difficult as another, is natural on decision problems, and it is central to how we prove problems are NP-complete!
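The decision problem in the example above also illustrates the certificate idea: a candidate path from u to v serves as a certificate, and checking it against the bound k is polynomial. A sketch, assuming a weighted adjacency representation as a dict of neighbor-to-weight dicts (an assumption of this example, not the lecture's notation):

```python
def verify_path_decision(adj, u, v, k, path):
    """Verify a certificate for the decision problem:
    'is there a path from u to v of length at most k?'

    adj: assumed dict mapping each vertex to a dict of neighbor -> weight.
    path: the certificate, a sequence of vertices from u to v.
    """
    if not path or path[0] != u or path[-1] != v:
        return False
    total = 0
    for a, b in zip(path, path[1:]):
        if b not in adj.get(a, {}):
            return False       # certificate uses a nonexistent edge
        total += adj[a][b]
    return total <= k          # 'yes' iff the path is short enough
```

One pass over the certificate suffices, so a "yes" answer can always be checked quickly, which is exactly what membership in NP requires.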
Reductions and Tractability

To show that one problem is at least as hard as another, use reductions
- A reduction transforms any instance β of a problem B (e.g., a graph G, two vertices u and v, and a number k, for the shortest paths problem) into an instance α of another problem A
- Let's say we can solve (decision) problem A in polynomial time (i.e., it's tractable), and want to show problem B is tractable
- We use a reduction such that:
  1. The transformation takes polynomial time; and
  2. The answers are the same, i.e., the answer to α is yes iff the answer to β is yes
- Then apply the decision procedure that solves A: if A is tractable (i.e., poly-time solvable), B must be tractable

(All computations we discuss are implicitly presumed to be w.r.t. some conventional model of computation, e.g., a Turing Machine. An algorithm, then, is a computation on such a machine, and time complexity can be measured in terms of computation steps.)

More Reductions: Using Reductions To Show Intractability

Let's say that instead, we wanted to show problem A was intractable
- If we know problem B has no polynomial-time algorithm, then consider a reduction from B to A:
  - Takes an instance β of B
  - Transforms it in polynomial time into an instance α of A, such that a decision on α would give us a decision on β
- Then, if A could be solved in polynomial time, B could also
- Thus, by contradiction, A must not be solvable in polynomial time
- More generally: A is at least as hard as B
- A similar procedure will be used to show NP-completeness: If B is NP-complete, A must be at least as difficult, so A is a candidate for being NP-complete
- This is a common use of polynomial-time reductions
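As a concrete instance of such a transformation, here is a sketch of the classic reduction from the hamiltonian cycle problem to a traveling-salesman-style decision problem ("is there a tour visiting every vertex with total cost at most some budget?"). The representation (vertex list, set of directed edge pairs) is assumed for illustration:

```python
def ham_cycle_to_tsp(vertices, edges):
    """Polynomial-time transformation of a hamiltonian-cycle instance
    (an instance 'beta' of problem B) into a TSP decision instance
    (an instance 'alpha' of problem A).

    Build a complete weighted digraph: weight 1 where the original graph
    has an edge, weight 2 where it does not. Then the original graph has
    a hamiltonian cycle iff the new graph has a tour of total cost at
    most len(vertices), so the yes/no answers agree.
    """
    n = len(vertices)
    cost = {(u, v): (1 if (u, v) in edges else 2)
            for u in vertices for v in vertices if u != v}
    budget = n
    return cost, budget
```

The transformation only fills in an n x n cost table, so it clearly runs in polynomial time; the "answers are the same" property is what makes it a valid reduction, and it is the direction (from the hard problem B into A) that lets hardness transfer to A.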