CS383, Algorithms Spring 2009 HW1 Solutions

Prof. Sergio A. Alvarez
Computer Science Department, Boston College
21 Campanella Way, room 569, Chestnut Hill, MA 02467 USA
alvarez@cs.bc.edu, http://www.cs.bc.edu/~alvarez/
voice: (617) 552-4333, fax: (617) 552-6790

1. Arrange the functions defined below in order of their asymptotic growth rates, from smallest to largest. Include the sorted list, and explanations that justify your answer. Logarithms are in base 2. n! denotes the factorial of n (the product of all integers between 1 and n, inclusive).

(a) n^n   (b) n log n   (c) n^2   (d) 2^n   (e) n^{log n}   (f) n!   (g) (log n)^n

Asymptotically, the right order is:

    n log n < n^2 < n^{log n} < 2^n < (log n)^n < n! < n^n

Justifications follow.

n log n = O(n^2) because log n = O(n) (multiply both sides by n).

n^2 = O(n^{log n}) since both sides have the same base n, but the exponent log n on the right is greater than the exponent 2 on the left for all values of n greater than 2^2 = 4.

n^{log n} = O(2^n) by taking logs on both sides: the left side becomes (log n)^2, while the right becomes n, which is asymptotically larger than (log n)^2 by L'Hôpital's rule (see my notes on asymptotic running time).

2^n = O((log n)^n) by taking logs on both sides also: the left side becomes n, while the right becomes n log log n (where log log n means the composite function log(log n)).

(log n)^n = O(n!) by task 4: taking logs, the left side becomes n log log n, while log(n!) = Θ(n log n); and log log n = O(log n), as one sees by the substitution u = log n.

Finally, n! = O(n^n) since n! ≤ n^n, by direct comparison of the n factors on both sides: n! = 1 · 2 · 3 · · · n ≤ n · n · · · n = n^n.
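All of these comparisons follow the same pattern: take logarithms and compare what remains. As a compact restatement of the argument above (with log(n!) = Θ(n log n) taken from task 4), the logarithms of the seven functions are:

    \begin{aligned}
    \log(n\log n)            &= \log n + \log\log n \\
    \log(n^2)                &= 2\log n \\
    \log\bigl(n^{\log n}\bigr)   &= (\log n)^2 \\
    \log(2^n)                &= n \\
    \log\bigl((\log n)^n\bigr)   &= n\log\log n \\
    \log(n!)                 &= \Theta(n\log n) \\
    \log(n^n)                &= n\log n
    \end{aligned}

Each line grows asymptotically faster than the one before it, which yields exactly the sorted list above.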

2. The first two Fibonacci numbers are F_0 = 0 and F_1 = 1. Subsequent Fibonacci numbers satisfy the following recurrence relation:

    F_n = F_{n-1} + F_{n-2}

(a) Formally describe the computational task of computing the n-th Fibonacci number F_n, by precisely stating what the inputs and desired outputs are (and how the outputs should be related to the inputs). Note any restrictions on the input and output values.

Inputs: an integer n ≥ 0. Output: the n-th Fibonacci number F_n.

(b) Translate the above recursive definition of the Fibonacci numbers into an algorithm (in pseudocode) for computing F_n. The algorithm should follow the given recurrence relation as closely as possible.

The pseudocode for the algorithm follows.

    function recursiveFibonacci(n)
        if n = 0 return 0
        if n = 1 return 1
        return recursiveFibonacci(n-1) + recursiveFibonacci(n-2)

(c) How efficient is the algorithm described in the preceding subtask? Does it run in polynomial time? Exponential time? Justify your answer.

The running time of the above algorithm is exponential in the input size, n. This is easy to say but harder to actually understand. To see that computing F(n) (shorthand for the call on input n) requires exponential time, examine the recursion tree for that invocation. The number of nodes in the tree provides a lower bound on the running time, since each node represents a function call that takes at least one step. The recursion tree for F(n) consists of a root node from which two subtrees hang: one of the subtrees is the recursion tree for F(n-1) and the other is the recursion tree for F(n-2). It follows that the number of nodes in the whole tree is at least as large as F_n, the n-th Fibonacci number. Why is F_n exponential in n? Argue along the lines of task 3 (exercise).

(d) (Optional) Can you describe a more efficient algorithm for the computation of F_n? Explain in detail.

My suggestion is to proceed by analogy with the more efficient combinatorial coefficient calculator that we discussed in class, using a suitable table or array and a judicious calculation order that avoids the redundancy inherent in straightforward reliance on the recursive definition. I'll let you work out the details (they're in the book, too).
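For concreteness, here is a minimal sketch of the idea just described: compute the values bottom-up so that each F_i is computed exactly once. The choice of Java and the overflow caveat are my own assumptions, not part of the course solution.

    // Bottom-up Fibonacci: each F_i is computed once, using only the two
    // previous values, so the cost is O(n) additions, in contrast with the
    // exponential recursion tree analyzed in part (c).
    // Caveat: a Java long overflows for n > 92; this sketch ignores that.
    static long iterativeFibonacci(int n) {
        if (n == 0) return 0;
        long prev = 0, curr = 1;            // F_0 and F_1
        for (int i = 2; i <= n; i++) {
            long next = prev + curr;        // F_i = F_{i-1} + F_{i-2}
            prev = curr;
            curr = next;
        }
        return curr;
    }

The same effect can be obtained with a full table F[0..n] filled in increasing order of the index, which is closer to the tabular scheme mentioned above; only the two most recent entries are actually needed.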

3. Consider the sequence X_n defined by the base cases and recurrence relation below:

    X_1 = 1,  X_2 = 2,  X_n = (4/3) X_{n-1} + (1/3) X_{n-2}  if n > 2

In this task you will examine the growth rate of X_n as a function of n.

(a) First some apparent good news. Use mathematical induction (clearly stating the basis, induction hypothesis, and inductive step; see my notes on induction) to prove that

    X_n ≤ 2^n for all n > 0

Thus, the sequence X_n grows more slowly than the exponential function 2^n.

(Basis) X_1 = 1 < 2 = 2^1 and X_2 = 2 < 4 = 2^2.

(Induction Hypothesis) X_k ≤ 2^k for all values of k ≤ n.

(Inductive Step) Assuming that the induction hypothesis holds, we will show that X_{n+1} ≤ 2^{n+1}. Start from the definition of X_{n+1} in terms of the previous two terms:

    X_{n+1} = (4/3) X_n + (1/3) X_{n-1}

By the induction hypothesis, X_n ≤ 2^n and X_{n-1} ≤ 2^{n-1}. Adding these bounds, we get

    X_{n+1} ≤ (4/3) 2^n + (1/3) 2^{n-1}

Since 2^n = 2 · 2^{n-1}, the sum of the terms on the right equals 3 · 2^{n-1}, while 2^{n+1} equals 4 · 2^{n-1}. Hence,

    X_{n+1} ≤ 3 · 2^{n-1} < 2^{n+1}

This is what we wanted to prove. We're done!

(b) Now prove the sobering fact that the growth rate of X_n is exponential nonetheless:

    X_n ≥ 1.4^n for all n ≥ 4

Again use induction. Hint: 1.4^2 is less than 2.

(Basis) X_4 = 14/3 > 4 = 2^2 ≥ (1.4^2)^2 = 1.4^4, because 2 ≥ 1.4^2. Likewise X_5 = (4/3) X_4 + (1/3) X_3 = 65/9 > 1.4^5, so the claim holds for n = 4 and n = 5.

(Induction Hypothesis) X_k ≥ 1.4^k for all values of k between 4 and some n that is no smaller than 5.

(Inductive Step) Assuming that the induction hypothesis holds, we show that X_{n+1} ≥ 1.4^{n+1}. From the definition of X_{n+1}:

    X_{n+1} = (4/3) X_n + (1/3) X_{n-1}

while by the induction hypothesis we have the bounds X_n ≥ 1.4^n and X_{n-1} ≥ 1.4^{n-1}. Add the bounds to get:

    X_{n+1} ≥ (4/3) 1.4^n + (1/3) 1.4^{n-1} ≥ 2 · 1.4^{n-1} ≥ 1.4^2 · 1.4^{n-1} = 1.4^{n+1}

where we again used the fact that 2 ≥ 1.4^2. This completes the proof.

(c) By the above, we know that X_n is between 1.4^n and 2^n. Could we be so lucky that X_n is actually equal to b^n for some yet-to-be-found base b? Assuming that this is the case, use the recurrence relation defining X_n to find an equation satisfied by b. Can you solve this equation to find the value of b? Explain in detail.

As stated, let's assume that X_n = b^n for some unknown b. Using the recurrence relation that defines X_n, we then have:

    b^n = (4/3) b^{n-1} + (1/3) b^{n-2}

This is true for all n ≥ 3. Pick n = 3 for simplicity (things work just fine for larger values, too, but this is easiest). Dividing through by b, we get the following quadratic equation for b:

    b^2 = (4/3) b + 1/3

This equation is solvable! Solving, we find two roots:

    b = (2 ± √7) / 3

Which one should we pick? We can't really pick either. You can check that using either value b_1 or b_2, the powers b_i^n indeed satisfy the given recurrence equation. Since the recurrence is linear, any linear combination A b_1^n + B b_2^n of the two power sequences also satisfies the recurrence! The solution is to pick the right coefficients A and B. Can you do this and finish up this line of reasoning?
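To see this line of reasoning through numerically, one can fit A and B to the initial conditions X_1 = 1 and X_2 = 2 and compare the resulting closed form against the recurrence. The following Java sketch is only an illustration of that check (the class name and the use of double-precision arithmetic are my own choices); it is not part of the assigned solution.

    // Checks the closed form X_n = A*b1^n + B*b2^n against the recurrence
    // X_n = (4/3)X_{n-1} + (1/3)X_{n-2}, with X_1 = 1 and X_2 = 2.
    public class ClosedFormCheck {
        public static void main(String[] args) {
            double b1 = (2 + Math.sqrt(7)) / 3;    // root with the + sign
            double b2 = (2 - Math.sqrt(7)) / 3;    // root with the - sign
            // Fit A, B to X_1 = 1 and X_2 = 2: solve the 2x2 linear system
            //   A*b1   + B*b2   = 1
            //   A*b1^2 + B*b2^2 = 2
            // by Cramer's rule.
            double det = b1 * b2 * b2 - b2 * b1 * b1;
            double A = (b2 * b2 - 2 * b2) / det;
            double B = (2 * b1 - b1 * b1) / det;
            double prev = 1, curr = 2;              // X_1 and X_2
            for (int n = 3; n <= 20; n++) {
                double next = 4.0 / 3 * curr + 1.0 / 3 * prev;            // recurrence
                double closed = A * Math.pow(b1, n) + B * Math.pow(b2, n); // closed form
                System.out.printf("n=%2d  recurrence=%12.4f  closed form=%12.4f%n",
                                  n, next, closed);
                prev = curr;
                curr = next;
            }
        }
    }

The two printed columns agree up to floating-point rounding, which is exactly the "right coefficients A and B" conclusion asked for above.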

4. In this task you will capture the asymptotic growth rate of log(n!), where n! denotes the factorial of n.

(a) Explain why n! < n^n.

We did this in the first task, above.

(b) Is it true that n! > (n/2)^n? Justify your answer.

Frustratingly, this is not true. Values as small as n = 6 fail (6! = 720 < 729 = 3^6), and the degree to which the bound fails increases as n grows.

(c) Using the upper bound for n! above, and a suitable modification of the exponent in the would-be lower bound, find a simple big-Θ expression for log(n!). Explain.

The desired lower bound is n! > (n/2)^{n/2}. This follows by examining the factors in the product that defines n!: the last n/2 or so factors are larger than n/2, and the rest are at least 1. Combining the upper and lower bounds, and taking logs, we get the following inequalities for large enough values of n:

    (n/2) log(n/2) < log(n!) < n log n

The log(n/2) term is the same as log n − 1 (since log 2 = 1). This shows that log(n!) is sandwiched between two terms that are both Θ(n log n). Hence, log(n!) = Θ(n log n).

5. (Optional programming task)

(a) Implement the following (heuristic) primality test based on Fermat's little theorem as a function in either C++ or Java:

    function isFermatPrime(n)
        pick a random integer a between 1 and n-1
        if a^(n-1) is congruent to 1 (mod n), return true; otherwise return false

Submit your source code.

(b) Use your implementation above, together with a second function that correctly decides primality, to write a program that determines the error rate of the random Fermat test among d-digit numbers, for each d between 1 and 7. Submit your documented source code, as well as the output of your program.

(c) Modify the Fermat primality tester function so that it uses multiple test integers (a) instead of one. How does the error rate depend on the number of test integers?
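No solution is given for this optional task. For orientation only, here is one possible Java sketch of the heuristic in part (a); it uses java.math.BigInteger so that a^(n-1) mod n can be computed without overflow. The method name follows the pseudocode, but the class name and random-number handling are my own assumptions rather than a prescribed solution.

    import java.math.BigInteger;
    import java.util.Random;

    // Heuristic Fermat primality test from part (a): pick one random a in
    // [1, n-1] and report "probably prime" iff a^(n-1) is congruent to 1 (mod n).
    // Assumes n >= 2.
    public class FermatTest {
        private static final Random RNG = new Random();

        static boolean isFermatPrime(BigInteger n) {
            BigInteger a;
            do {
                a = new BigInteger(n.bitLength(), RNG);   // uniform over [0, 2^bits - 1]
            } while (a.compareTo(BigInteger.ONE) < 0 || a.compareTo(n) >= 0);
            return a.modPow(n.subtract(BigInteger.ONE), n).equals(BigInteger.ONE);
        }

        public static void main(String[] args) {
            System.out.println(isFermatPrime(BigInteger.valueOf(561)));   // Carmichael number; the test is often fooled
            System.out.println(isFermatPrime(BigInteger.valueOf(101)));   // prime; always reports true
        }
    }

For part (b), the "second function that correctly decides primality" could be simple trial division (sufficient for 7-digit numbers) or BigInteger.isProbablePrime with a high certainty parameter; for part (c), one natural modification is to repeat the test with several independently chosen values of a and return true only if all of them pass.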