CS173 Lecture B, November 3, 2015

CS 173, Lecture B, November 3, 2015. Tandy Warnow

Announcements
Examlet 7 is a take-home exam and is due November 10 at 11:05 AM, in class. Homework #11 is due Sunday, November 8, at 10 PM. (Earlier than usual, so that you can get feedback on what you got right and wrong, and go to office hours on Monday for additional help.) I will be out of town next Tuesday (the exam will be proctored by course staff), and I will not hold office hours next week. Office hours this week: today (Tuesday) 1-3 PM, and Thursday, November 5, 1-2 PM.

Take-home Examlet
This examlet is under the honor code (see its front page). Please don't discuss the examlet with anyone. Course staff will only answer questions about understanding the examlet problems, not about solving them. The examlet is due in class on Tuesday, November 10, by 11:05 AM. Late submissions will not be graded; we'll go over the examlet at 11:05! If you won't be able to get to class on time, return the examlet to Elaine Wilson before it is due.

Examlet questions
1. Prove that a graph G with maximum vertex degree D can be properly vertex-colored with D + 1 colors. Do this proof by induction on the number of vertices in G. (Hint: draw pictures of graphs.)
2. Show that if you can solve the decision problem for CLIQUE, then you can solve the decision problem for INDEPENDENT SET using a polynomial number of operations and a polynomial number of calls to the algorithm for CLIQUE.
3. Show that if you can solve the decision problem for CLIQUE, then you can construct the largest clique in an input graph using a polynomial number of operations and a polynomial number of calls to the algorithm for CLIQUE.
4. Design a polynomial-time dynamic programming algorithm for all-pairs shortest path, but with a different subproblem formulation.

Assigned reading this week
The assigned reading this week includes material about:
- The All-Pairs Shortest Path problem, and a solution by Floyd and Warshall
- A 2-approximation algorithm for Minimum Vertex Cover
Today I'll present a way of solving All-Pairs Shortest Path that is essentially the same as what you've read, but simpler in two ways: we assume the graph is undirected and all edges have positive weights, and we don't concern ourselves with finding the shortest paths themselves, just their costs.

All-Pairs Shortest Path
The input is an undirected graph G = (V, E) with positive weights on the edges. Given a path P between two vertices v_i and v_j, the cost (or length) of P is the sum of the weights of the edges in the path. We write this as Cost(P). A shortest path between two vertices is one with minimum cost. We want to find the length of the shortest path between every pair of vertices, and store this in an n × n matrix (where n = |V|).

All-Pairs Shortest Path
Input: graph G = (V, E), V = {v_1, v_2, ..., v_n}, and edge weights given by w : E → R^+. (Hence w(e) is the weight of edge e.)
Output: D, an n × n matrix, so that D[i, j] is the length of the shortest path from v_i to v_j. Note we set D[i, i] = 0 for all i = 1, 2, ..., n.
For i ≠ j, and given a path P from v_i to v_j, we look at the internal nodes of the path (i.e., everything except the endpoints), and let MAX(P) denote the maximum index of any internal node. If the path is a single edge, then MAX(P) = 0. Thus:
- For P = v_3, v_1, v_2, v_5, v_7: MAX(P) = 5.
- For P = v_5, v_1, v_2, v_7: MAX(P) = 2.
- For P = v_1, v_2: MAX(P) = 0 (no internal nodes).
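As a quick check on the definition, here is a minimal sketch in Python (the helper name `max_internal_index` is my own) that computes MAX(P) for a path given as a list of vertex indices:

```python
def max_internal_index(path):
    """MAX(P): the largest index among the internal vertices of a path,
    or 0 if the path is a single edge. The path is given as a list of
    vertex indices, endpoints included."""
    internal = path[1:-1]          # everything except the two endpoints
    return max(internal) if internal else 0

# The three examples from the slide:
print(max_internal_index([3, 1, 2, 5, 7]))  # 5
print(max_internal_index([5, 1, 2, 7]))     # 2
print(max_internal_index([1, 2]))           # 0
```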

Floyd-Warshall Algorithm
Let M[i, j, k] be the length of the shortest path P from v_i to v_j such that MAX(P) ≤ k. If i = j, we set M[i, j, k] = 0. If i ≠ j and there is no path between v_i and v_j satisfying MAX(P) ≤ k, then we set M[i, j, k] = ∞. We let k = 0, 1, 2, ..., n, and 1 ≤ i, j ≤ n.
Questions: What is M[1, 2, 0]? What is M[1, 2, 2]? What is M[1, 2, 3]? What is M[1, 2, 5]?

Floyd-Warshall Algorithm
Remember M[i, j, k] is the length of the shortest path P from v_i to v_j such that MAX(P) ≤ k. Suppose we were able to compute (correctly) all M[i, j, k], for 1 ≤ i, j ≤ n and 0 ≤ k ≤ n.
Question: How could we compute the length of the shortest path from v_i to v_j?
Answer: it is the same as M[i, j, n]. So once we have computed all entries in the 3-dimensional matrix M, we return the 2-dimensional matrix obtained by setting k = n.

Floyd-Warshall Algorithm, k = 0
When k = 0 we are asking about the lengths of paths that have no internal nodes.
Case i = j: Set M[i, i, 0] = 0.
Case i ≠ j: M[i, j, 0] is the length of the shortest path P from v_i to v_j with MAX(P) = 0, or ∞ if no such path exists. If the path P exists, it is a single edge e, and its length is w(e). Hence, M[i, j, 0] = w(v_i, v_j) if (v_i, v_j) ∈ E, and otherwise M[i, j, 0] = ∞.

Floyd-Warshall Algorithm
After we compute all entries of the matrix M with k = 0, can we compute the entries with k = 1? If i = j, we set M[i, j, 1] = 0. For i ≠ j, consider a shortest path P from v_i to v_j with MAX(P) ≤ 1. What does it look like?

Floyd-Warshall Algorithm
Consider a shortest path P from v_i to v_j with MAX(P) ≤ 1. Cases:
- P is a single edge e, and so Cost(P) = w(e) = M[i, j, 0].
- P has an internal vertex, which must be v_1. Hence P has two edges, (v_i, v_1) and (v_1, v_j). Then Cost(P) = w(v_i, v_1) + w(v_1, v_j). Note that w(v_i, v_1) = M[i, 1, 0] and w(v_1, v_j) = M[1, j, 0].
Hence M[i, j, 1] = min{ M[i, j, 0], M[i, 1, 0] + M[1, j, 0] }.

Floyd-Warshall Algorithm
After we compute all entries of the matrix M with k = 0, 1, ..., K − 1, can we compute the entries with k = K? Consider a shortest path P from v_i to v_j with MAX(P) ≤ K. Cases:
- P satisfies MAX(P) ≤ K − 1. Then Cost(P) = M[i, j, K − 1].
- P satisfies MAX(P) = K. Hence v_K ∈ P. Analyzing this is a bit more complicated, but we will show the path P satisfies Cost(P) = M[i, K, K − 1] + M[K, j, K − 1].

Floyd-Warshall Algorithm
Consider a shortest path P from v_i to v_j with MAX(P) = K. Hence v_K ∈ P. Write P as the concatenation of two paths, P_1 from v_i to v_K and P_2 from v_K to v_j. Do you agree that Cost(P) = Cost(P_1) + Cost(P_2)?
Questions: What is MAX(P_1)? What is MAX(P_2)?

Floyd-Warshall Algorithm
P is a shortest path from v_i to v_j with MAX(P) = K. We write P as the concatenation of P_1 from v_i to v_K and P_2 from v_K to v_j. Note that MAX(P_1) ≤ K − 1 and MAX(P_2) ≤ K − 1. Hence,
Cost(P_1) = M[i, K, K − 1]
Cost(P_2) = M[K, j, K − 1]
Cost(P) = M[i, K, K − 1] + M[K, j, K − 1].

Floyd-Warshall Algorithm
M[i, j, K] = min{ M[i, j, K − 1], M[i, K, K − 1] + M[K, j, K − 1] }

Floyd-Warshall Algorithm
Algorithm:
% Set base cases:
For all i: M[i, i, 0] := 0
For i ≠ j: if (v_i, v_j) ∈ E then M[i, j, 0] := w(v_i, v_j), else M[i, j, 0] := ∞
For k = 1 up to n DO:
  For all i = 1, 2, ..., n: M[i, i, k] := 0
  For all pairs i, j with i ≠ j: M[i, j, k] := min{ M[i, j, k − 1], M[i, k, k − 1] + M[k, j, k − 1] }
Easy to see this is O(n^3) (in fact, Θ(n^3)).
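The pseudocode above translates directly into Python. This sketch uses the common space optimization of updating one 2-D matrix in place instead of storing all n + 1 layers M[·, ·, k] (a detail not in the slides); the vertex numbering and the edge-weight dictionary input format are my own choices:

```python
import math

def floyd_warshall(n, weights):
    """All-pairs shortest path costs for an undirected graph with positive
    edge weights. Vertices are numbered 1..n; `weights` maps an edge (i, j)
    to w(i, j). Returns D with D[i][j] = cost of a shortest path from v_i
    to v_j, or math.inf if no path exists."""
    # Base case k = 0: only single-edge paths (no internal nodes).
    D = [[math.inf] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][i] = 0
    for (i, j), w in weights.items():
        D[i][j] = D[j][i] = w
    # For k = 1..n, allow v_k as an internal node:
    # D[i][j] = min(D[i][j], D[i][k] + D[k][j]).
    for k in range(1, n + 1):
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

# 4-cycle with a heavy edge: the shortest 1-to-4 path is 1-2-3-4, cost 4.
D = floyd_warshall(4, {(1, 2): 1, (2, 3): 2, (3, 4): 1, (1, 4): 10})
print(D[1][4])  # 4
```

The in-place update is safe because at iteration k, the entries D[i][k] and D[k][j] are not changed by allowing v_k as an internal node (a shortest path through v_k uses it only once).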

Floyd-Warshall Algorithm
As presented, this algorithm only returns the length (i.e., cost) of the shortest path between every pair of nodes. It does not return the path itself, or directly help you find the path. The assigned reading shows how you can store information to help you reconstruct the shortest paths. This algorithm, as described, was only for undirected graphs with positive edge weights. It's a trivial modification to deal with directed graphs or with weights that can be negative. (It just gets strange to think about shortest paths when there are negative cycles...)

Dynamic Programming Algorithms
When you are asked to create a DP algorithm, remember to:
- Define what your variables mean
- Show the base cases and how you'll compute them
- Show how you'll compute the other variables, using previously computed variables (this is the recursive definition)
- State the order in which you compute the variables (remember: bottom-up)
- Show where your answer will be (or how you'll compute your answer from your variables)
- Explain why your algorithm is correct
- Analyze the running time

Other problems on the take-home examlet
Let's work on something similar to what's in the examlet. Suppose you have an algorithm A that solves the decision problem for MATCHING:
Input: Graph G = (V, E) and positive integer k
Question: Is there E_0 ⊆ E such that |E_0| = k and E_0 is a matching?
Hence, given a graph G = (V, E) and a positive integer k, A(G, k) = YES if and only if G has a matching of size k.

Using algorithms for MATCHING
Suppose you have an algorithm A that solves the decision problem for MATCHING. We want to use a polynomial number of calls to A (and a polynomial number of other operations) to:
- find the size of the maximum matching in G
- find a largest matching in G

Relationship between decision, optimization, and construction problems
To solve the optimization problem, we define Algorithm B as follows. The input is a graph G = (V, E). If E = ∅, we return 0. Else, we do the following:
For k = |E| down to 1, DO:
  If A(G, k) = YES, then Return(k)
It is easy to see that B is correct, that B calls A at most m times, and that B does at most O(m) additional steps. Hence B satisfies the desired properties.
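Algorithm B can be sketched in a few lines of Python. The decision algorithm A is an assumption of the problem, so the brute-force `matching_oracle` below is only a stand-in so the sketch runs; any correct decision procedure for MATCHING could be substituted:

```python
from itertools import combinations

def matching_oracle(edges, k):
    """Stand-in for the assumed decision algorithm A: brute-force test of
    whether `edges` contains a matching of size k (exponential; for
    illustration only)."""
    for subset in combinations(edges, k):
        endpoints = [v for e in subset for v in e]
        if len(endpoints) == len(set(endpoints)):  # no shared endpoints
            return True
    return False

def max_matching_size(edges, A=matching_oracle):
    """Algorithm B: return the size of a maximum matching using at most
    m = len(edges) calls to the decision algorithm A."""
    if not edges:
        return 0
    for k in range(len(edges), 0, -1):
        if A(edges, k):
            return k

# P_4: a-b-c-d has a maximum matching of size 2, e.g. {(a,b), (c,d)}.
print(max_matching_size([("a", "b"), ("b", "c"), ("c", "d")]))  # 2
```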

Relationship between decision, optimization, and construction problems
We define Algorithm C to find a maximum matching, as follows. The input is a graph G = (V, E). If E = ∅, we return ∅. Otherwise, let E = {e_1, e_2, ..., e_m}, and let k = B(G).
Let G′ be a copy of G
For i = 1 up to m DO:
  Let G′′ be the graph obtained by deleting edge e_i (but not the endpoints of e_i) from G′.
  If A(G′′, k) = YES, then set G′ := G′′.
Return the edge set E(G′) of G′.
It is easy to see that C calls B once, calls A at most m times, and does at most O(m) other operations. Hence the running time satisfies the required bounds. What about accuracy?

Finding the largest matching in a graph
Let G′ be a copy of G
For i = 1 up to m DO:
  Let G′′ be the graph obtained by deleting edge e_i (but not the endpoints of e_i) from G′.
  If A(G′′, k) = YES, then set G′ := G′′.
Return the edge set of G′.
Notes:
- The edge set returned at the end is a matching (we'll look at this carefully in the next slide).
- We never reduce the size of the maximum matching when we delete edges. Hence, B(G′) = B(G).
- Therefore this algorithm returns a maximum matching.

Finding the largest matching in a graph
Recall k is the size of a maximum matching in the input graph G, with edge set {e_1, e_2, ..., e_m}, m ≥ 1.
Let G′ be a copy of G
For i = 1 up to m DO:
  Let G′′ be the graph obtained by deleting edge e_i (but not the endpoints of e_i) from G′.
  If A(G′′, k) = YES, then set G′ := G′′.
Return the edge set of G′.
Theorem: The edge set E′ of G′ is a matching in G.
Proof (by contradiction): If not, then E′ has at least two edges e_i and e_j (both from E) that share an endpoint. Let E_0 be a maximum matching in G′; hence E_0 is a maximum matching for G. Note that E_0 cannot include both e_i and e_j. Suppose (w.l.o.g.) e_i ∉ E_0. During the algorithm, we checked whether a graph G′′ that did not contain e_i had a matching of size k. Since we did not delete e_i, this means the answer was NO. But the edge set of that G′′ contains the matching E_0, which means G′′ has a matching of size k, yielding a contradiction.
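Algorithm C, together with the call to Algorithm B it needs, can be sketched as follows. As before, the decision algorithm A is an assumption, so a brute-force oracle stands in for it, and the function names are my own:

```python
from itertools import combinations

def A(edges, k):
    """Stand-in for the assumed decision algorithm for MATCHING: does
    `edges` contain a matching of size k? (Brute force, illustration only.)"""
    for subset in combinations(edges, k):
        endpoints = [v for e in subset for v in e]
        if len(endpoints) == len(set(endpoints)):
            return True
    return False

def find_max_matching(edges):
    """Algorithm C: construct a maximum matching using one run of
    Algorithm B plus at most m further calls to A."""
    if not edges:
        return []
    # Algorithm B: k = size of a maximum matching.
    k = next(s for s in range(len(edges), 0, -1) if A(edges, s))
    # Delete each edge in turn, keeping the deletion only if a
    # size-k matching survives among the remaining edges.
    current = list(edges)
    for e in edges:
        trial = [f for f in current if f != e]
        if A(trial, k):
            current = trial
    return current

# P_4 again: only the middle edge can be deleted while a size-2
# matching survives, so the algorithm returns {(a,b), (c,d)}.
print(find_max_matching([("a", "b"), ("b", "c"), ("c", "d")]))
```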

Reductions
We used an algorithm A for a decision problem π to solve an optimization or construction problem π′ on the same input. We also required that we call A at most a polynomial number of times, and that we do at most a polynomial number of other operations. This means that if A runs in polynomial time, then we have a polynomial-time algorithm for π′ as well. Note that we use two things here: A is polynomial, and the input did not change in size. What we did isn't really a Karp reduction, because Karp reductions are only for decision problems... but the ideas are very related. If you can understand why this works, you will understand why Karp reductions have to satisfy what they satisfy. Just try to understand the ideas; this is not about memorization.

Graph properties: Complements of graphs
Let G = (V, E) be a graph. Its complement G^c has the same vertex set, but contains exactly the edges missing from G.
Suppose V_0 is a clique in G. What can you say about V_0 in G^c?
Suppose V_0 is an independent set in G. What can you say about V_0 in G^c?
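A quick experiment in Python makes the clique/independent-set duality concrete (the helper names are mine):

```python
from itertools import combinations

def complement_edges(vertices, edges):
    """Edge set of G^c: exactly the vertex pairs NOT joined by an edge in G."""
    present = {frozenset(e) for e in edges}
    return [p for p in combinations(sorted(vertices), 2)
            if frozenset(p) not in present]

def is_clique(S, edges):
    """Every pair of vertices in S is joined by an edge."""
    present = {frozenset(e) for e in edges}
    return all(frozenset(p) in present for p in combinations(S, 2))

def is_independent_set(S, edges):
    """No pair of vertices in S is joined by an edge."""
    present = {frozenset(e) for e in edges}
    return all(frozenset(p) not in present for p in combinations(S, 2))

# {1, 2, 3} is a clique in G, hence an independent set in G^c.
V = {1, 2, 3, 4}
E = [(1, 2), (1, 3), (2, 3)]
Ec = complement_edges(V, E)
print(is_clique([1, 2, 3], E))            # True
print(is_independent_set([1, 2, 3], Ec))  # True
```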

Graphs: Complementary sets
Suppose V_0 is an independent set in G. What can you say about G − V_0?
Suppose V_0 is a clique in G. What happens if you delete V_0 from G?
Suppose V_0 is a vertex cover in G. What happens if you delete V_0 from G?

Manipulating Graphs
Adding vertices to graphs: Suppose you add a vertex to G and make it adjacent to every vertex in V. What happens to:
- the size of the maximum clique,
- the size of the maximum independent set,
- the chromatic number?

Preparing for Thursday
Try to prove the following theorems:
- If G = (V, E) is a graph and M ⊆ E is a matching, then every vertex cover for G has at least |M| vertices.
- If G = (V, E) is a graph and M ⊆ E is a maximal matching, then if we delete from G all the vertices that are endpoints of any edge in M, we either get the empty graph or we get an independent set.
- Let M be a maximal matching for graph G, and let V_0 be the set of endpoints of edges in M. Let V_1 be a smallest vertex cover for G. How large can |V_0| / |V_1| be?
Note the definition of maximal matching: you can't add any edges to a maximal matching and have it still be a matching. For an example of a maximal but not maximum matching, consider the path P_4 with four vertices a, b, c, d and three edges (a, b), (b, c), (c, d). The middle edge (b, c) by itself is a maximal matching but not a maximum matching.
How hard do you think it is to find a maximal matching for a graph?
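On the last question: a maximal matching can be found by a single greedy pass, taking each edge whose endpoints are both still unmatched. A minimal sketch (the function name is mine):

```python
def greedy_maximal_matching(edges):
    """One greedy pass: add an edge whenever both of its endpoints are
    still unmatched. The result is maximal (no edge can be added), but it
    may be smaller than a maximum matching, depending on the edge order."""
    matched = set()
    M = []
    for (u, v) in edges:
        if u not in matched and v not in matched:
            M.append((u, v))
            matched.update((u, v))
    return M

# The P_4 example from the slide: scanning the middle edge first yields
# the maximal-but-not-maximum matching {(b, c)}.
print(greedy_maximal_matching([("b", "c"), ("a", "b"), ("c", "d")]))
print(greedy_maximal_matching([("a", "b"), ("b", "c"), ("c", "d")]))
```

The contrast between the two calls shows why maximal matchings are easy to find (one pass over the edges) even though the result depends on the order in which edges are examined.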