Mid-term Exam Answers and Final Exam Study Guide
CIS 675, Summer 2010

Midterm Problem 1: Recall that for two functions g : N → N+ and h : N → N+, h = Θ(g) iff for some positive integer N and positive real numbers c_1 and c_2, for every n ≥ N, c_1 g(n) ≤ h(n) ≤ c_2 g(n). Also, h = O(g) iff for some positive integer N and positive real number c, for every n ≥ N, h(n) ≤ c g(n). Let f : N → N+ (i.e., let f map nonnegative integers to positive integers), where

    f(n) = (((cos nπ) + 1)n)^2 + 1.

Part 1: Show that f(n) = O(n^2). Hint: Give the upper scaling constant (the constant c in the Big-O definition) and threshold in your argument.

Answer: First, let's understand f(n). For n = 0, 1, 2, 3, ..., cos nπ = 1, −1, 1, −1, .... Therefore (cos nπ) + 1 = 2, 0, 2, 0, ... as n = 0, 1, 2, 3, ..., and f(n) = (2·0)^2 + 1, (0·1)^2 + 1, (2·2)^2 + 1, (0·3)^2 + 1, .... In other words,

    f(n) = 4n^2 + 1   if n is even,
    f(n) = 1          if n is odd.
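This case analysis can be spot-checked numerically. A minimal sketch (the function name f mirrors the problem statement; the rounding is there only to absorb floating-point error in cos, and is not part of the exam answer):

```python
import math

def f(n):
    # f(n) = (((cos n*pi) + 1) * n)^2 + 1; rounding absorbs floating-point error
    return round(((math.cos(n * math.pi) + 1) * n) ** 2) + 1

# the piecewise form: 4n^2 + 1 for even n, 1 for odd n
for n in range(10):
    assert f(n) == (4 * n * n + 1 if n % 2 == 0 else 1)

# the Big-O witness from Part 1: c = 5, N = 1
for n in range(1, 10):
    assert f(n) <= 5 * n * n
```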
Therefore the ratio

    f(n)/n^2 = 4 + 1/n^2   if n is even,
    f(n)/n^2 = 1/n^2       if n is odd,

and this alternation prevents lim_{n→∞} f(n)/n^2 from existing; i.e., the limit is said to be undefined. To show that f(n) = O(n^2) we must instead find a threshold and an upper scaling constant, as required by the definition of Big-O. For all even n ≥ 1, f(n) = 4n^2 + 1 ≤ 5n^2. For all odd n, f(n) = 1 ≤ 5n^2. Thus we can take c = 5 and N = 1 in the definition of Big-O.

Part 2: Show that f(n) ≠ Θ(n^2). Hint: Show that every positive real number fails as a lower scaling constant (the constant c_1 in the Θ-definition).

Answer: Consider any possible threshold N. Let c_1 be a positive real number. Suppose that for all n ≥ N, c_1 n^2 ≤ f(n).

Studyguide Problem 1: The next major step in the argument is: for all odd n ≥ N, c_1 ≤ 1/n^2. Explain how this step in the argument follows from one or more previous major steps.

Studyguide Problem 2: The next major step in the argument is: c_1 ≤ 0.
Explain how this step in the argument follows from one or more previous major steps.

Contradiction, since c_1 was assumed to be positive. Therefore no positive real number can serve as a lower scaling constant meeting the criteria for f(n) = Θ(n^2).

Midterm Problem 2: The natural logarithm (in fact, every logarithm with base greater than 1) is a concave function. That means that for any nonnegative real numbers a_1 and a_2, where a_1 + a_2 = 1, and any x_1, x_2 > 0,

    a_1 ln x_1 + a_2 ln x_2 ≤ ln(a_1 x_1 + a_2 x_2).

(You don't have to prove that.) In general, a real-valued function f defined on an interval of real numbers [a, b], where [a, b] = {x ∈ R : a ≤ x ≤ b}, is said to be concave iff for any x_1, x_2 ∈ [a, b] and any nonnegative a_1, a_2 with a_1 + a_2 = 1,

    a_1 f(x_1) + a_2 f(x_2) ≤ f(a_1 x_1 + a_2 x_2).

Part 1: Use this property of ln to prove that

    n ln n ≥ (n − 1) ln(n + 1).

Hint: Let x_1 = 1 and x_2 = n + 1. Let a_2 = (n − 1)/n. (On the midterm I originally said to let x_1 = n.)

Answer:

Studyguide Problem 3: Apply the substitutions indicated in the hint to the inequality that defines the concavity property of the ln function
and simplify the right-hand side so that it becomes ln n. Do not use the fact that ln 1 = 0 to simplify the inequality on the left-hand side.

Studyguide Problem 4: Multiply the inequality through by n to obtain the result called for in Part 1.

Studyguide Problem 5: Redo this problem using a real-valued function f that is concave on the interval [1, r] for every r > 1.

Studyguide Problem 6: Explain why the result you are asked to prove in Part 1 depends only on the concavity of ln and on no other properties of ln.

Part 2: Use mathematical induction and the result of Part 1 (whether or not you succeeded in proving Part 1) to prove that for all n ≥ 1,

    ln n! = ln 1 + ln 2 + ... + ln n ≥ (1/2) n ln n.

Answer: There are two steps to this ordinary induction proof: the base step, where in this case n = 1, and the induction step.

Base step (n = 1): It must be proved that the result holds in the case n = 1.

Studyguide Problem 7: Write the mathematical statement that must be proved when n = 1.

Both sides of the inequality that is the answer to Studyguide Problem 7 evaluate to 0. Therefore the inequality holds.
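Both the Part 1 inequality and the Part 2 claim can be spot-checked numerically before doing the induction. A minimal sketch (the helper name log_factorial is illustrative, not from the exam):

```python
import math

def log_factorial(n):
    # ln(n!) computed as ln 1 + ln 2 + ... + ln n
    return sum(math.log(k) for k in range(1, n + 1))

# Part 1: n ln n >= (n - 1) ln(n + 1) for all n >= 1
for n in range(1, 200):
    assert n * math.log(n) >= (n - 1) * math.log(n + 1) - 1e-9

# Part 2: ln(n!) >= (1/2) n ln n for all n >= 1
for n in range(1, 200):
    assert log_factorial(n) >= 0.5 * n * math.log(n) - 1e-9
```

The small tolerance (1e-9) guards against floating-point rounding in the cases where the two sides are equal, such as n = 1 and n = 2.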
Induction step: We must prove that for every n ≥ 1, if ln n! ≥ (1/2) n ln n, then ln(n+1)! ≥ (1/2)(n+1) ln(n+1). To prove this conditional statement, assume the if part, also known as the hypothesis, of the conditional statement. That is, assume

    ln n! ≥ (1/2) n ln n.

In the context of an induction proof, this assumption is known as the induction hypothesis.

Studyguide Problem 8: Explain why the assumption of the induction hypothesis that was just made does not assume what we are trying to prove.

It follows that

    ln(n+1)! = ln n! + ln(n+1)                    [Studyguide Problem 9: Why?]
             ≥ (1/2) n ln n + ln(n+1)             [Studyguide Problem 10: Why?]
             ≥ (1/2)(n−1) ln(n+1) + ln(n+1)       [Studyguide Problem 11: Why?]
             = (1/2)(n+1) ln(n+1)                 [Studyguide Problem 12: Why?]

Therefore, by the principle of mathematical induction, for every n ≥ 1, ln n! ≥ (1/2) n ln n.

Studyguide Problem 13: State the principle of mathematical induction.

Midterm Problem 3: Consider
    k = 0; a = 1;
    while (a <= n) {
        a = 2*a;
        k = k + a;
    }

Use Θ-notation to express, in terms of the value n of n, both the number of times the loop executes and the value of k upon termination.

Answer: To get a feel for what happens in the while-loop, do the following:

Studyguide Problem 14: Write down the list of pairs of values (a, k) each time the condition (a <= n) is checked during execution of the loop. It will help if you write the values of a and k in base 2. Think of n, the value of n, as large. It might also help if your list of the pairs of values is written vertically.

Studyguide Problem 15: Argue that each time the loop condition (a <= n) is checked, a is a power of 2 and (a, k) = (a, 2a − 2). The loop terminates when the value of a is the least power of 2 that exceeds the value of n.

Studyguide Problem 16: Argue that the loop terminates in Θ(ln n) steps. Argue that the value of k upon termination is Θ(n) (upon termination, n < a ≤ 2n and k = 2a − 2).

Midterm Problem 4:

Part 1: Using Θ-notation, express the number of bits in n! in terms of n.

Answer: The number of bits in a nonnegative integer m is Θ(ln m). Therefore the number of bits in n! is Θ(ln n!). Via Midterm Problem 2 we saw (eventually) that ln n! = Θ(n ln n). Therefore we could, optionally but usefully, simplify Θ(ln n!) to Θ(n ln n).

Part 2: Analyze the runtime of the following algorithm:

    k = n; a = 1;
    while (k > 1) {
        a = a*k;
        k = k - 1;
    }

Answer: The loop is executed n − 1 times, as the value of k ranges successively from the value n of n down to 2; the loop is not executed at all when the value of n is 1. The time cost of one execution of the loop is the time taken to check whether (k > 1) is true, which is Θ(1), plus the cost of executing a = a*k;, which is the time cost of performing the multiplication a*k plus the time cost of assigning the result of the multiplication to a, which is Θ(1), plus the time cost of performing the decrement of k, which is Θ(1). The time taken to perform the multiplication of two integers dominates the other time costs. If we use a naive divide-and-conquer algorithm to perform a multiplication, the time cost is Θ(m^2), where m is the number of bits in the larger of the two integers to be multiplied. Using Gauss's trick we could reduce the exponent to log_2 3. Using an adaptation of the Fourier transform to perform the multiplication, we can reduce the time complexity to Θ(m ln m).

Studyguide Problem 17: Show that whenever the condition (k > 1) to enter the loop is checked, the value of a is n!/k!. This relation among the
values of n, a and k is a loop invariant that implies that upon termination of the loop the value of a is n!. Using the Θ(m ln m) multiplication algorithm, the total time taken is then

    Θ(ln n · ln ln n)
    + Θ((ln n + ln(n−1)) · ln(ln n + ln(n−1)))
    + ...
    + Θ((ln n + ln(n−1) + ... + ln 2) · ln(ln n + ln(n−1) + ... + ln 2)).

Note: Using the time complexity Θ((ln n)^2) of naive divide-and-conquer multiplication, I accepted Θ(n (ln n)^2) as an estimate of the time complexity for full credit, or equivalently, Θ(n m^2), where m is the number of bits in n.

Midterm Problem 5: Let p be a prime number. Assume that there is a probability distribution on {1, ..., p − 1} such that the probability of drawing any single number from this set is the same as the probability of drawing any other number from this set. That is, assume there is a uniform distribution on {1, ..., p − 1}.

Part 1: Let a and b be elements of the set {1, ..., p − 1}, drawn independently according to this distribution. What is the probability that b ≡ a^(−1) (mod p)?

Answer: a and b may be the same element. In mathematical writing, two different symbols may refer to the same entity unless otherwise specified. For each a in {1, ..., p − 1} there is exactly one element b in {1, ..., p − 1} such that b ≡ a^(−1) (mod p). Therefore the probability of a and b being in the right relationship is one chance among the number of elements in {1, ..., p − 1}; i.e., 1/(p − 1). [See the reasoning in Dasgupta et al., p. 46 ("internet edition") concerning the proof of a probabilistic property of universal hash function choices.]

Part 2: Let a, b, c be selected independently at random from {1, ..., p − 1}. What is the probability that a·b ≡ c (mod p)?
Answer: The answer is again 1/(p − 1).

Studyguide Problem 18: Why? The reasoning is almost exactly the same as for Part 1.

Midterm Problem 6: The following result is known as the Master Theorem for recurrence relations. Let T : N → N be such that

    T(n) = a T(⌈n/b⌉) + O(n^d)

for some real number constants a > 0, b > 1, and d ≥ 0. Then

    T(n) = O(n^d)           if d > log_b a,
    T(n) = O(n^d ln n)      if d = log_b a,
    T(n) = O(n^(log_b a))   if d < log_b a.

Solve the following recurrence:

    T(n) = 49 T(n/25) + n^(3/2) ln n

Answer: First, the ceiling notation in the equation for T(n) can be ignored in the Master Theorem. It is included in the statement of the theorem for cases where it might be needed.

Studyguide Problem 19: Show that for any real number ɛ > 0 and d ≥ 1,

    n^d ln n = O(n^(d+ɛ)).

Arguments making use of limits will work here.

Studyguide Problem 20: Show that n^d ln n ≠ Θ(n^(d+ɛ)); i.e., the bound in Studyguide Problem 19 is not tight.
Again, arguments making use of limits will work.

Studyguide Problem 21: Follow the instructions of the Master Theorem to solve

    T(n) = 49 T(n/25) + n^(3/2+ɛ)

To follow these instructions, 3/2 + ɛ must be compared with log_25 49. Show, by reasoning, not by using a calculator, that 25^(3/2) > 49. Therefore, for each ɛ > 0, as well as for ɛ = 0,

    3/2 + ɛ > log_25 49.

Studyguide Problem 22: Verify that, by the Master Theorem, T(n) = O(n^(3/2+ɛ)) solves

    T(n) = 49 T(n/25) + n^(3/2+ɛ)

Note: An argument such as the one above earned full credit. The argument that T(n) = O(n^(3/2)) that was shown in class also earned full credit. But the idea with recurrences is to get the tightest possible bound on the growth of the solution. So:

Studyguide Problem 23: Show that T(n) = O(n^(3/2) ln n) solves

    T(n) = 49 T(n/25) + n^(3/2) ln n
using the more powerful version of the Master Theorem given at http://people.csail.mit.edu/thies/6.046-web/master.pdf

Studyguide Problem 24: The Fast Fourier Transform (FFT) is given in Dasgupta et al. as

    function FFT(A, ω)
    Input:  Coefficient representation of a polynomial A(x) of degree ≤ n − 1,
            where n is a power of 2;
            ω, an nth root of unity
    Output: Value representation [A(ω^0), ..., A(ω^(n−1))]

    if ω = 1: return A(1)
    express A(x) in the form A_e(x^2) + x A_o(x^2)
    call FFT(A_e, ω^2) to evaluate A_e at even powers of ω
    call FFT(A_o, ω^2) to evaluate A_o at even powers of ω
    for j = 0 to n − 1:
        compute A(ω^j) = A_e(ω^(2j)) + ω^j A_o(ω^(2j))
    return A(ω^0), ..., A(ω^(n−1))

Apply the FFT to the polynomial 3 + 4x + 6x^2 + 2x^3.
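A close Python transcription of this pseudocode can be used to check a hand computation on 3 + 4x + 6x^2 + 2x^3, taking n = 4 and ω = i. This is a sketch: the names are illustrative, and the base case ω = 1 is detected as a length-1 coefficient list.

```python
def fft(coeffs, omega):
    # coeffs: coefficient representation [a_0, ..., a_{n-1}], n a power of 2
    # omega: an n-th root of unity
    n = len(coeffs)
    if n == 1:
        return [coeffs[0]]  # omega = 1: return A(1)
    evens = fft(coeffs[0::2], omega ** 2)  # A_e evaluated at even powers of omega
    odds = fft(coeffs[1::2], omega ** 2)   # A_o evaluated at even powers of omega
    # A(omega^j) = A_e(omega^{2j}) + omega^j * A_o(omega^{2j});
    # the recursive results have period n/2, hence the index j mod n/2
    return [evens[j % (n // 2)] + omega ** j * odds[j % (n // 2)]
            for j in range(n)]

values = fft([3, 4, 6, 2], 1j)  # A(x) = 3 + 4x + 6x^2 + 2x^3, omega = i
# algebraically: A(1) = 15, A(i) = -3 + 2i, A(-1) = 3, A(-i) = -3 - 2i
```

Each recursive call halves n and squares the root of unity passed down, matching the two recursive FFT calls in the pseudocode; complex rounding error means the computed values agree with the algebraic ones only up to floating-point tolerance.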