Homework Assignment 1 Solutions


MTAT.03.286: Advanced Methods in Algorithms
Homework Assignment 1 Solutions
University of Tartu

1 Big-O notation

For each of the following, indicate whether $f(n) = O(g(n))$, $f(n) = \Omega(g(n))$, or $f(n) = \Theta(g(n))$. Justify your answer.

- $f(n) = 100n$ and $g(n) = n^{1.1}$;
- $f(n) = n^3 + 2n^2 + 10$ and $g(n) = (\log_2 n)^5$;
- $f(n) = 2^n$ and $g(n) = n^{\sqrt{n}}$;
- $f(n) = n^{\log_2 \log_2 n}$ and $g(n) = 2 (\log_2 n)^{\log_2 n}$.

1.1 $f(n) = 100n$ and $g(n) = n^{1.1}$

This is easiest to approach by definition. There exist $c = 100$ and $n_0 = 1$ such that for all $n \geq n_0$ we have $100n \leq 100 \cdot n^{1.1}$. Hence $f(n) = O(g(n))$.

On the other hand, $f(n) \neq \Omega(g(n))$. Assume, for contradiction, that for some $n_0$ and $c > 0$ we have $f(n) \geq c \cdot g(n)$ for all $n \geq n_0$. This cannot happen, because for any $c$ there exists $n_1$ (any $n_1 > (100/c)^{10}$ works) such that $\frac{c \cdot n^{0.1}}{100} > 1$ for all $n \geq n_1$; hence $f(n) = 100n < c \cdot n^{1.1} = c \cdot g(n)$ for all $n \geq n_1$, contradicting the assumption. Consequently $f(n) \neq \Theta(g(n))$ either.

1.2 $f(n) = n^3 + 2n^2 + 10$ and $g(n) = (\log_2 n)^5$

The answer $f(n) = \Omega(g(n))$ is easiest to see by computing the limit with L'Hospital's rule applied consecutively, since each step again yields the indeterminate form $\frac{\infty}{\infty}$:
$$\lim_{n \to \infty} \frac{f(n)}{g(n)} = \lim_{n \to \infty} \frac{n^3 + 2n^2 + 10}{(\log_2 n)^5} = \frac{\ln 2}{5} \lim_{n \to \infty} \frac{3n^3 + 4n^2}{(\log_2 n)^4} = \frac{(\ln 2)^2}{20} \lim_{n \to \infty} \frac{9n^3 + 8n^2}{(\log_2 n)^3}$$
$$= \frac{(\ln 2)^3}{60} \lim_{n \to \infty} \frac{27n^3 + 16n^2}{(\log_2 n)^2} = \frac{(\ln 2)^4}{120} \lim_{n \to \infty} \frac{81n^3 + 32n^2}{\log_2 n} = \frac{(\ln 2)^5}{120} \lim_{n \to \infty} \left( 243n^3 + 64n^2 \right) = \infty.$$
Since the limit is $\infty$, we have $f(n) = \Omega(g(n))$ but $f(n) \neq O(g(n))$, hence also $f(n) \neq \Theta(g(n))$.
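As a numerical sanity check (an illustration only, not part of the required solution), the two comparisons above can be probed by looking at $\log_2 \frac{f(n)}{g(n)}$ for growing $n$; the function names below are my own:

```python
import math

# Illustration: examine log2(f(n)/g(n)) for growing n.
# If the log-ratio keeps falling, g eventually dominates (f = O(g), not Omega(g));
# if it keeps growing, f dominates (f = Omega(g), not O(g)).

def log_ratio_1_1(n):
    # f(n) = 100n, g(n) = n^1.1
    return math.log2(100 * n) - 1.1 * math.log2(n)

def log_ratio_1_2(n):
    # f(n) = n^3 + 2n^2 + 10, g(n) = (log2 n)^5
    return math.log2(n**3 + 2 * n**2 + 10) - 5 * math.log2(math.log2(n))

for n in (2**10, 2**20, 2**40):
    print(n, log_ratio_1_1(n), log_ratio_1_2(n))
# log_ratio_1_1 decreases with n, log_ratio_1_2 increases with n
```

Of course, finitely many samples prove nothing; the trends merely agree with the limits computed above.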

1.3 $f(n) = 2^n$ and $g(n) = n^{\sqrt{n}}$

The answer is again $f(n) = \Omega(g(n))$:
$$\lim_{n \to \infty} \frac{f(n)}{g(n)} = \lim_{n \to \infty} \frac{2^n}{n^{\sqrt{n}}} = \lim_{n \to \infty} \left( \frac{2^{\sqrt{n}}}{n} \right)^{\sqrt{n}} = \infty,$$
where we need $\lim_{n \to \infty} \frac{2^{\sqrt{n}}}{n} = \infty$ for the last claim. This is easily seen with a change of variables $x = \sqrt{n}$, where $x \to \infty$ as $n \to \infty$. Finally, we can apply L'Hospital's rule (twice):
$$\lim_{x \to \infty} \frac{2^x}{x^2} = \lim_{x \to \infty} \frac{2^x \ln 2}{2x} = \lim_{x \to \infty} \frac{2^x (\ln 2)^2}{2} = \infty.$$

1.4 $f(n) = n^{\log_2 \log_2 n}$ and $g(n) = 2 (\log_2 n)^{\log_2 n}$

In this case $f(n) = \Theta(g(n))$, as both the big-$O$ and the $\Omega$ definitions are satisfied with $c = \frac{1}{2}$ for any $n_0$. This is true because the functions are actually equal up to the constant factor 2:
$$n^{\log_2 \log_2 n} = \left( 2^{\log_2 n} \right)^{\log_2 \log_2 n} = 2^{\log_2 n \cdot \log_2 \log_2 n} = \left( 2^{\log_2 \log_2 n} \right)^{\log_2 n} = (\log_2 n)^{\log_2 n} = \frac{g(n)}{2}.$$

2 Two spanning trees

Let $T_1(V, E_1)$ and $T_2(V, E_2)$ be two spanning trees of the (undirected, connected, finite) graph $G(V, E)$. Prove that for every edge $e \in E_1 \setminus E_2$ there exists an edge $e' \in E_2 \setminus E_1$ such that each of the edge sets $(E_1 \cup \{e'\}) \setminus \{e\}$ and $(E_2 \cup \{e\}) \setminus \{e'\}$ defines a spanning tree.

Firstly, we can make some observations. For example, $|E_1| = |E_2| = |V| - 1$, because both are trees on the same vertex set. Hence also $|(E_1 \cup \{e'\}) \setminus \{e\}| = |(E_2 \cup \{e\}) \setminus \{e'\}| = |V| - 1$. Hence, by definition, these edge sets are spanning trees if they are connected. In addition, clearly $e' \neq e$.

By definition, $E_1 \setminus \{e\}$ has $|V| - 2$ edges and is disconnected. Assume that $e = \{a, b\}$ for $a, b \in V$. Then we can consider the two mutually disconnected vertex sets $A \subseteq V$ and $B \subseteq V$ such that $a \in A$, $b \in B$, the vertices within $A$ (respectively within $B$) are connected, and $E_1 \setminus \{e\}$ contains no edge between $A$ and $B$.

Consider the circuit $C$ in $E_2 \cup \{e\}$. There exists an edge $p \in C$ with $p = \{a', b'\} \neq e$ for $a' \in A$ and $b' \in B$, because the circuit contains vertices from both $A$ and $B$ and therefore crosses between them at least twice, and only one of those crossing edges is $e$. By definition also $p \neq e$ and $p \notin E_1$, as $e$ is the only edge of $E_1$ between $A$ and $B$; hence $p \in E_2 \setminus E_1$.

Now, consider $(E_1 \setminus \{e\}) \cup \{p\}$, which is connected because $p$ connects $A$ and $B$. In addition, it has $|V| - 1$ edges, therefore it is a spanning tree. Similarly, consider $(E_2 \cup \{e\}) \setminus \{p\}$. We know that $e$ and $p$ belong to one circuit in $E_2 \cup \{e\}$, so $(E_2 \cup \{e\}) \setminus \{p\}$ is also connected and has $|V| - 1$ edges. Therefore it is also a spanning tree. Hence $e' = p$ is the required edge.

3 Analogous weight functions

Let $G(V, E)$ be an undirected, connected, finite graph. Let $w : E \to \mathbb{R}^+$ and $w' : E \to \mathbb{R}^+$ be two weight functions such that $\forall e_1, e_2 \in E : w(e_1) \leq w(e_2) \iff w'(e_1) \leq w'(e_2)$. Prove that $T$ is a minimum spanning tree of $G$ with respect to $w$ if and only if $T$ is a minimum spanning tree of $G$ with respect to $w'$.

The following presents two ideas for solving this problem.

3.1 Idea 1: Analysing the edges in the tree

Assume for contradiction that there is a minimum spanning tree $T_1(V, E_1)$ of $G$ with respect to $w$ that is not minimum with respect to $w'$. Let $T_2(V, E_2)$ with $E_1 \neq E_2$ be some minimum spanning tree of $G$ with respect to $w'$. $T_1$ and $T_2$ are both spanning trees of $G$, hence we can apply the result of Exercise 2: for each $e \in E_1 \setminus E_2$ there exists $e' \in E_2 \setminus E_1$ such that the edge sets $E_1' = (E_1 \cup \{e'\}) \setminus \{e\}$ and $E_2' = (E_2 \cup \{e\}) \setminus \{e'\}$ also define spanning trees. We use the extended notation of the weight functions on edge sets, $w(E) = \sum_{e \in E} w(e)$.

By the precondition, $w(e) < w(e') \iff w'(e) < w'(e')$ and $w(e) = w(e') \iff w'(e) = w'(e')$, so we have three possibilities:

1. $w(e) < w(e')$ and $w'(e) < w'(e')$. Then $w'(E_2') = w'(E_2) + w'(e) - w'(e') < w'(E_2)$, which is a contradiction because $T_2(V, E_2)$ was a minimum spanning tree with respect to $w'$.

2. $w(e') < w(e)$ and $w'(e') < w'(e)$. Then $w(E_1') = w(E_1) + w(e') - w(e) < w(E_1)$, which is a contradiction because $T_1(V, E_1)$ was a minimum spanning tree with respect to $w$.

3. $w(e) = w(e')$ and $w'(e) = w'(e')$. In this case we can take $T_2'(V, E_2')$ as a new minimum spanning tree of $G$ with respect to $w'$, because $w'(E_2') = w'(E_2)$. Now there are two possibilities:

   (a) $E_2' = E_1$: then $T_1 = T_2'$ is a minimum spanning tree with respect to $w'$, which contradicts the initial assumption.

   (b) $E_2' \neq E_1$: then we start this analysis again with the same $T_1$, but use $T_2'$ instead of $T_2$. Either the edge $e$ that we pick gives a contradiction through condition 1, 2 or 3a, or we perform another exchange to obtain a new tree $T_2''$ that we substitute for $T_2'$, and so on.

The following claim clarifies why this always terminates, so that at some point we end the process in one of the contradicting branches.

Claim: The process of substituting $T_2$ with $T_2'$ in branch 3b terminates within at most $|V| - 1$ steps.

Reasoning: Initially let $R = E_1 \cap E_2$. For each $e$ and $e'$ that we pick in the analysis, $e \notin R$ because $e \notin E_2$, and $e' \notin R$ because $e' \notin E_1$. However, $E_1 \cap E_2' = R \cup \{e\}$. The maximum

possible size of $R$ is clearly $\max |R| = |E_1| = |E_2| = |V| - 1$. Each iteration increases the size of $R$ by exactly one, therefore there will be a step where we add some edge so that $R = E_1$, and hence the current $T_2' = T_1$ as in branch 3a.

Therefore, if $T$ is a minimum spanning tree of $G$ with respect to $w$, then it is also a minimum spanning tree with respect to $w'$. The other direction exchanges the roles of $w$ and $w'$ in the previous analysis, but is otherwise the same. Therefore, $T$ is a minimum spanning tree of $G$ with respect to $w$ if and only if it is a minimum spanning tree with respect to $w'$.

3.2 Idea 2: Kruskal's algorithm

For every minimum spanning tree $T$ of $G$ there exists a run of Kruskal's algorithm that finds this $T$. (Note that we did not prove this in class, so it requires a separate proof; this is exactly the statement of Exercise 4.) These runs differ only in the order produced by the initial sorting. Let $T$ be a minimum spanning tree of $G$ with respect to $w$, and consider a run of Kruskal's algorithm that finds $T$. Without loss of generality, assume that the initial sorting yields $w(e_1) \leq w(e_2) \leq \ldots \leq w(e_{|E|})$. Starting Kruskal's algorithm with the edges ordered as $[e_1, e_2, e_3, \ldots, e_{|E|}]$ yields $T$. However, by the precondition this is also a valid ordering for $w'$, because $w(e_1) \leq w(e_2) \leq \ldots \leq w(e_{|E|})$ implies $w'(e_1) \leq w'(e_2) \leq \ldots \leq w'(e_{|E|})$. The rest of the algorithm is independent of the weight function, therefore it gives the same $T$ for $w$ and $w'$ because we can use the same sorted order. By the correctness of Kruskal's algorithm, $T$ is therefore a minimum spanning tree with respect to both weight functions. The other direction, where $T$ being a minimum spanning tree for $w'$ implies that it is also a minimum spanning tree for $w$, is analogous.

4 Power of the Kruskal algorithm

Let $G(V, E)$ be an undirected, connected, finite graph with weight function $w : E \to \mathbb{R}^+$. Let $T$ be a minimum spanning tree of $G$. Show that there exists a run of Kruskal's algorithm that finds $T$ (for a suitable ordering of the edges).
Assume, for contradiction, that there exists a minimum spanning tree $T_1(V, E_1)$ of $G$ such that no run of Kruskal's algorithm finds this tree. Consider also a run of Kruskal's algorithm that finds some spanning tree $T_2(V, E_2)$. For simplicity (and without loss of generality), assume that the weight function satisfies $w(e_i) \leq w(e_j)$ whenever $i < j$, and that the sorting order that produces $T_2$ is the order of the indices. Let $e_i$ be the first edge that differs between $E_1$ and $E_2$; hence, it is the first edge where the algorithm makes a different choice for $T_2$ than $T_1$. There are two cases:

1. Assume that $e_i$ is added to $E_2$ by the algorithm, but does not belong to $E_1$. The set $E_1 \cup \{e_i\}$ then contains a circuit. From the fact that $e_i$ is chosen into $E_2$ we know that it does not form a circuit with only edges of index smaller than $i$, as there is no circuit in $E_2$. Hence the circuit in $E_1 \cup \{e_i\}$ must contain at least one edge with a larger index than $i$. Therefore, there exists an edge $e_j$ on the circuit with $j > i$, $e_j \in E_1$ but $e_j \notin E_2$. In addition, let $e_j$ be the edge with the least weight among the suitable edges on the circuit. Since $j > i$ implies $w(e_j) \geq w(e_i)$, there are two cases:

   (a) $w(e_i) = w(e_j)$. In this case we can consider a different run of Kruskal's algorithm where we swap the places of $e_i$ and $e_j$ in the sorting. This run produces a tree $T_3(V, E_3)$ such that all edges $e_k$ with $k < i$, as well as $e_j$, are either in both $T_3$ and $T_1$ or in neither. Indeed, using $e_j$ instead of $e_i$ does not introduce a circuit, because there is no circuit in $E_1$; hence $e_j$ is chosen into $E_3$ by the algorithm. If $E_1 \neq E_3$, we continue the analysis by finding the first edge that differs between $T_3$ and $T_1$; since the agreeing prefix grows with each round, this process terminates. If $E_1 = E_3$, then we have found a run of Kruskal's algorithm that finds $T_1$, which contradicts the initial assumption.

   (b) $w(e_i) < w(e_j)$. Consider the graph $T_3(V, (E_1 \cup \{e_i\}) \setminus \{e_j\})$. This is a spanning tree, because it has $|V| - 1$ edges and it is connected, since $e_i$ and $e_j$ lie on a common circuit in $E_1 \cup \{e_i\}$. The weight of $T_3$, however, is less than that of $T_1$ because $w(e_i) < w(e_j)$, which is a contradiction because $T_1$ was a minimum spanning tree. Therefore, this case cannot happen.

2. Assume that $e_i$ is part of $E_1$, but is not chosen into $E_2$ by the Kruskal algorithm. That $e_i$ is not chosen into $E_2$ means that $e_i$ forms a circuit with edges previously added to $E_2$. However, the previously added edges of $E_2$ are also part of $E_1$ (as $e_i$ is the first difference), and the fact that $e_i \in E_1$ contradicts the existence of such a circuit. Hence it is not possible to find such an edge, and this case cannot happen.

Hence, there always exists a run of Kruskal's algorithm that finds the minimum spanning tree $T$.
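To make the role of the sorting order concrete, here is a minimal Kruskal sketch driven by an explicit edge list (the graph, names, and `(u, v, w)` edge representation are my own illustration): two valid sorted orders, differing only in how a weight tie is broken, find two different minimum spanning trees of the same total weight — exactly the freedom the proof exploits.

```python
def kruskal(vertices, ordered_edges):
    """Kruskal's algorithm on a pre-sorted edge list [(u, v, w), ...].
    The run is fully determined by the order of the list."""
    parent = {v: v for v in vertices}
    def find(v):                      # union-find without optimisations
        while parent[v] != v:
            v = parent[v]
        return v
    tree = []
    for u, v, w in ordered_edges:
        ru, rv = find(u), find(v)
        if ru != rv:                  # no circuit -> take the edge
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

V = {1, 2, 3}
# Triangle with all weights equal: every pair of edges is an MST,
# so both orders below are valid sortings by weight.
order_a = [(1, 2, 1), (1, 3, 1), (2, 3, 1)]
order_b = [(2, 3, 1), (1, 3, 1), (1, 2, 1)]
Ta = kruskal(V, order_a)   # [(1, 2, 1), (1, 3, 1)]
Tb = kruskal(V, order_b)   # [(2, 3, 1), (1, 3, 1)]
print(Ta, Tb)
```

Swapping equal-weight edges in the initial sorting changes which tree the run returns, while every run still returns a tree of minimum total weight.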
The main summary of the proof: whenever Kruskal's algorithm finds a different spanning tree, the difference lies only in edges of equal weight, and we can reorder those in the initial sorting to produce the required tree.

5 Uniqueness of the minimum spanning tree

Let $G(V, E)$ be an undirected, connected, finite graph with weight function $w : E \to \mathbb{R}^+$. It is known that the weights of the edges in $E$ are all different. Show that $G$ has a unique minimum spanning tree.

Assume, for contradiction, that there are two different minimum spanning trees $T_1(V, E_1)$ and $T_2(V, E_2)$. We again use the extended notation of the weight, $w(E) = \sum_{e \in E} w(e)$.

By definition, $w(E_1) = w(E_2)$ and both are the minimum possible for a spanning tree of $G$. We also know that the trees are different, hence $E_1 \neq E_2$. Therefore we can use the proposition of Exercise 2 to find edges $e \in E_1 \setminus E_2$ and $e' \in E_2 \setminus E_1$ as defined there, with $e \neq e'$. Hence the edge sets $E_1' = (E_1 \cup \{e'\}) \setminus \{e\}$ and $E_2' = (E_2 \cup \{e\}) \setminus \{e'\}$ also define spanning trees. Now we can compute the weights of these trees as follows:
$$w(E_1') = w(E_1) - w(e) + w(e'),$$
$$w(E_2') = w(E_2) - w(e') + w(e).$$
From the precondition of the theorem we know that $w(e) \neq w(e')$. Therefore we have two cases:

1. If $w(e) < w(e')$, then $w(E_2') = w(E_2) - w(e') + w(e) < w(E_2)$, which is a contradiction, because then $T_2'(V, E_2')$ would be a spanning tree with smaller weight than a minimum spanning tree of $G$.

2. If $w(e') < w(e)$, then $w(E_1') = w(E_1) - w(e) + w(e') < w(E_1)$, which is again a contradiction, because $T_1(V, E_1)$ is a minimum spanning tree of $G$.

Therefore, it is impossible for two different minimum spanning trees to exist when all the edges have different weights.
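The statement can also be checked by brute force on a small example (my own toy graph, not part of the solution): enumerate all $(|V|-1)$-edge subsets, keep those that span the graph, and confirm that the minimum weight is attained by exactly one tree.

```python
from itertools import combinations

def connects(vertices, edges):
    """True if the undirected weighted edge set connects all vertices."""
    adj = {v: set() for v in vertices}
    for u, v, _ in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj[u] - seen)
    return seen == vertices

V = {1, 2, 3, 4}
# 4-cycle plus one chord; all five weights are distinct.
E = [(1, 2, 1), (2, 3, 2), (3, 4, 3), (1, 4, 4), (1, 3, 5)]

# Every connected subset of |V|-1 edges is acyclic, hence a spanning tree.
spanning_trees = [c for c in combinations(E, len(V) - 1) if connects(V, c)]
weights = sorted(sum(w for _, _, w in t) for t in spanning_trees)
print(len(spanning_trees), weights[0], weights[1])   # -> 8 6 7
```

With distinct weights the smallest total weight (6, the tree $\{(1,2), (2,3), (3,4)\}$) is strictly below the second-smallest (7), i.e. the minimum spanning tree is unique, matching the theorem.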