CS325: Analysis of Algorithms, Fall Final Exam


"I don't know" policy: you may write "I don't know" and nothing else to answer a question and receive partial credit (a fixed percentage of the total points) for that problem, whereas a completely wrong answer will receive zero. There are 8 problems in this exam.

Problem 1 | Problem 2 | Problem 3 | Problem 4 | Problem 5 | Problem 6 | Problem 7 | Problem 8

Problem. [ pts] Which of the following job scheduling algorithms always construct an optimal solution? Mark an algorithm T if it always constructs an optimal solution, and F, otherwise. Provide counterexamples for those that you mark F. (a) Choose the job x that ends last, discard jobs that conflict with x, and recurse. (b) Choose the job x that starts last, discard all jobs that conflict with x, and recurse. (c) Choose the job x with shortest duration, discard all jobs that conflict with x, and recurse. (d) If no jobs conflict, choose them all. Otherwise, discard the job with longest duration and recurse. (a) F (b) T, this is exactly the same algorithm we saw in class (Why?) (c) F (d) F Problem. [ pts] Suppose you are a simple shopkeeper living in a country with n different types of coins, with values = c[] < c[] <... < c[n]. (In the U.S., for example, n = and the values are (penny), (nickel), 0 (dime), (quarter), 0 (half dollar) and 00 (dollar) cents.) You would like to use the smallest number of coins whenever you give a customer change. In the United States, there is a simple greedy algorithm that always results in the smallest number of coins: subtract the largest coin and recursively give change for the remainder. For example, if the change is 8 cents, the greedy algorithm returns dime, nickel, and pennies. Show that there is a set of coin values for which the greedy algorithm does not always give the smallest number of coins. Consider coin values c[] =, c[] =, and c[] =. To give change for eight, the greedy algorithm uses four coins:,,,. But, it is possible to give change for eight with two coins,.

Problem. [ pts] (a) Consider the string S = ddcabcdd, composed of characters, a, b, c, and d. Calculate the frequencies of these four characters, draw the Huffman tree, and deduce Huffman codes for them. (b) Now consider the string S, obtained by concatenating copies of S. Huffman codes for a, b, c, and d for this longer string? What are the (c) Suppose we have an alphabet of k > characters, c,..., c k. Also, suppose we have a string T of length l that contains characters c,..., c k with following frequencies: f(c ) =, f(c ) =, f(c ) =, f(c ) =,..., f(c k ) = k. What is the length of the Huffman code for each c i, i k? (a) f(a) =, f(b) =, f(c) =, and f(d) =. (You can also compute the true frequency by dividing all these numbers by the length of S, which is 8). Here is the Huffman tree: d c b a We deduce the codes from the tree by assigning one to right branches and zero to left branches. Thus, we have: code(a) =, code(b) = 0, code(c) = 0, and code(d) = 0. (b) Since we are not changing frequencies by considering multiple copies of the same string, the same codes work here. (c) Similar to (a), for any < i k, code(c i ) =... 0, where the number of s is k i, and code(c ) =..., with k number of s

Problem. [ pts] Let G = (V, E) be a simple graph with n vertices. Suppose the weight of every edge of G is one. (a) What is the weight of a minimum spanning tree of G? (b) Suppose we change the weight of two edges of G to /. What is the weight of the minimum spanning tree of G? (c) Suppose we change the weight of three edges of G to /. What is the minimum and maximum possible weights for the the minimum spanning tree of G? (d) Suppose we change the weight of k < V edges of G to /. What is the minimum and maximum possible weights for the the minimum spanning tree of G? (a) Any spanning tree of G has exactly V edges, since all the weights are one its total weight is V. (b) To answer this question, consider running Kruskal on G, it will first pick the two lighter edges as they cannot form a cycle (the graph is simple), then V other edge each with weight one. Therefore, the total value is V + (/) = V. (c) Use the similar approach as (b). Kruskal first selects two light edges. There are two possibilities for the third edge: (i) it forms a cycle with the first two edges or (ii) it does not form a cycle with the first two edges not. In case (i), Kruskal selects two edges of weight / and V edges of weight, therefore, the total weight is V + (/) = V. In case (ii), Kruskal selects three edges of weight / and V edges of weight, therefore, the total weight is V + (/) = V /. (d) Similar to (c), the minimum weight happens when the k light edges do not form any cycle. In this case, MST contains k edges of weight / and n k edges of weight. Therefore, its weight is k/ + n k = n k/. The maximum weight happens if the k edges span as few vertices as possible. So, let l be the smallest number such that ( l ) k. We can pack all k light edges between l vertices. Consequently, the MST has l edges of weight / and n (l ) edges of weight. Therefore, its weight is n l/ /.

Problem. [ pts] Suppose you can see the edges that are chosen by an MST algorithm in the middle of its running. You want to identify which MST algorithm is running. For each of the following figures, respond Borvka, Kruskal, Jarnik (Prim), or None. (Dashed segments are graph edges and solid segments are MST edges found so far. The numbers are the edge weights.) 0 0 0 0 From left to right: Jarnik, Kruskal, Borvka, None. Problem. [ pts] Recall Borvka s minimum spanning tree algorithm from class, which iteratively add all the safe edges until it obtains a spanning tree. Here is a high-level pseudo code. : procedure Borvka(V, E) : F (V, ) : while F is not connected do : Add all safe edges to F : return F (a) Construct a graph with eight vertices for which line is executed exactly three times. (b) Construct a graph with eight vertices for which line is executed exactly once. (a) Borvka needs three iterations for the following graph. (b) Borvka needs one iteration for the following graph.

Problem. problem. [ pts] Consider the decision version of the Knapsack problem, and the subset sum Knapsack. The input is composed of two arrays s[... n], and v[... n] of positive integers, which specify the sizes and values of n items, respectively, a target value V, and a knapsack size S. The output is YES if it is possible to fit a subset of items of total value V in the knapsack, and it is NO, otherwise. Subset sum. The input is a set X of n integers and a target integer t. The output is YES if there exists a subset of X, whose elements sum to t, and NO, otherwise. Now, answer the following questions. (a) Describe a polynomial time reduction from subset sum to Knapsack. (b) Show that your reduction is correct; prove that the output of the Knapsack instance is YES if and only if the output of the subset sum instance is YES. (c) Explain why this reduction implies that Knapsack is NP-hard. (d) Prove that Knapsack is NP-Complete. (a) Let X = {x,..., x n }, t be the subset sum instance. We compose the Knapsak instance by setting s[i] = v[i] = x i, S = V = t. (b) We need to prove that there is a subset of X whose elements sum to t if and only of there is a subset of items in the Knapsack of total value V = t and total size S = t. Suppose, there is a subset X whose elements sum to t, then the set of corresponding items in the Knapsack problem has total size and value S = V = t. On the other hand, suppose, there is a subset of items in the Knapsack of total value V and total size S. The corresponding subset of items must sum to S = V = t. (c) Subset sum is an NP-hard problem: a polynomial time algorithm for subset sum would imply P = NP. Our reduction shows that a polynomial time algorithm for Knapsack would imply a polynomial time algorithm for subset sum, thus, it would imply P = NP. (d) Since Knapsack is NP-hard, as shown in (a). We need to show it is in NP to show it is NP-complete. 
But, it is straight forward to verify if the elements of a given subset X sum to t in polynomial time.
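The reduction in part (a) is short enough to test directly. In this sketch (illustrative; the brute-force oracle stands in for a real Knapsack solver and only works on tiny instances), a subset is accepted when its total size is at most S and its total value is at least V, which with s[i] = v[i] forces the sum to equal t exactly, as argued in part (b):

```python
from itertools import combinations

def subset_sum_to_knapsack(X, t):
    """The reduction from part (a): s[i] = v[i] = x_i, S = V = t."""
    return list(X), list(X), t, t   # sizes, values, size bound S, value target V

def knapsack_decision(s, v, S, V):
    """Brute-force Knapsack decision oracle: is there a subset with
    total size <= S and total value >= V?"""
    n = len(s)
    return any(
        sum(s[i] for i in sub) <= S and sum(v[i] for i in sub) >= V
        for k in range(n + 1)
        for sub in combinations(range(n), k)
    )

X = [3, 7, 9, 13]
print(knapsack_decision(*subset_sum_to_knapsack(X, 16)))  # True: 3 + 13 = 16
print(knapsack_decision(*subset_sum_to_knapsack(X, 5)))   # False: no subset sums to 5
```

The YES/NO answers of the composed Knapsack instances match those of the original subset sum instances, which is exactly the correctness condition of part (b).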

Problem 8. [pts] For each of the following statements, respond True, False, or Unknown.

(a) P = NP. U
(b) If P ≠ NP, then the satisfiability problem (SAT) cannot be solved in polynomial time. T
(c) If a problem is NP-complete, then it is NP-hard. T
(d) If there is a polynomial-time reduction from the satisfiability problem (SAT) to problem X, then X is NP-hard. T
(e) If there is a polynomial-time reduction from problem X to the satisfiability problem (SAT), then X is NP-hard. F
(f) No problem in NP can be solved in polynomial time. F
(g) If there is a polynomial-time algorithm to solve problem A, then A is in P. T
(h) If there is a polynomial-time algorithm to solve problem A, then A is in NP. T