
Chapter 1: Divide and Conquer

1.1 Overview

This chapter introduces Divide and Conquer, which is a technique for designing recursive algorithms that are (sometimes) asymptotically efficient. It also presents several algorithms that have been designed using it, along with their analysis. While the algorithms are interesting by themselves (and some textbooks include exercises on this topic which ask students to trace their execution on given inputs), they're intended as examples in these notes, rather than subjects for study by themselves. Thus, CPSC 413 students may be asked to design similar algorithms using Divide and Conquer when given appropriate time and guidance: it's frequently not obvious how one should break a problem down into subproblems in order to solve it quickly. Students might also be expected to analyze these algorithms. However, students won't be expected to memorize the algorithms that are given here as examples, and they won't be asked to trace the execution of these algorithms on particular inputs in assignments or tests for this course.

Much of the material presented in Chapter 8 in Part I will be useful for the analysis of Divide and Conquer algorithms. The versions of the Master Theorem presented there will be particularly useful here. A set of exercises is given at the end of this chapter and can be used to assess the algorithm design and analysis skills mentioned above.

1.2 Divide and Conquer

As described by Cormen, Leiserson, and Rivest [5] (among others), an algorithm uses Divide and Conquer if it solves a problem (given some instance of the problem as input) by decomposing the given instance into several instances of the same problem, so that these can be solved by a recursive application of the same algorithm; recursively solving the derived instances; and then combining their solutions in order to obtain a solution for the original problem.
This technique can sometimes be used to design algorithms that use polynomial time (that is, a polynomial number of operations in the input size) in the worst case. However, the algorithms that are designed using this technique aren't always this efficient. Consider, for example, the algorithm for the computation of the Fibonacci numbers that was discussed in Part I. This algorithm computes the nth Fibonacci number by calling itself recursively, twice, to compute the (n-1)st and (n-2)nd Fibonacci numbers, and then performing (at most) a constant number of additional operations. This algorithm requires more than polynomial time: we've seen already that the number of operations it uses is in Θ(((1+√5)/2)^n). Thus this algorithm would use time that's exponential in the size of its input even if we insisted that its input (n) was given in unary, as a string of n ones. The problem here is that, taken altogether, an exponential number of smaller instances of the problem are eventually created and recursively solved when it attempts to compute the nth Fibonacci number.

So, this is an example of an algorithm that uses Divide and Conquer. However, it isn't an example of an efficient algorithm, since it uses time that's more than polynomial in the size of its input in the worst case.

It turns out that an algorithm using Divide and Conquer will be asymptotically efficient, provided that the following additional conditions are satisfied:

1. The algorithm generates (at most) a constant number of smaller instances of the given problem, which are to be recursively solved, when processing any given instance.

2. The size of each new instance that the algorithm generates is smaller than the size of the originally given instance by at least a constant factor that's less than one.

3. The time required by the algorithm to generate the new instances, and to solve the original instance using the solutions of the derived ones, is at most a polynomial function of the input size.

The above algorithm for computing the nth Fibonacci number satisfies the first of these conditions, since it generates at most two smaller instances to be solved when asked to compute F_n (not counting the even smaller instances that it generates when it tries to solve these instances recursively). It also satisfies the third condition if integer addition and subtraction are assumed to have unit cost.
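For concreteness, the doubly recursive algorithm in question can be sketched in Python (a hypothetical illustration, assuming the usual convention F_0 = 0 and F_1 = 1):

```python
def fib(n):
    """Doubly recursive Fibonacci: each call with n >= 2 spawns two more
    calls, so the total number of calls grows exponentially in n."""
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
```

Because each call spawns two more, the total number of calls grows like the Fibonacci numbers themselves, which is exactly the exponential behaviour analyzed above.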
However, it fails to satisfy the second condition, because it calls itself recursively to compute F_{n-1} (and F_{n-2}) when it's asked to compute F_n, and the recursively derived input, n-1, is not smaller than n by a constant factor.

These conditions aren't strictly necessary: it's possible to design an algorithm using Divide and Conquer that violates one or more of them and that is asymptotically efficient anyway. However, many of the efficient algorithms that have been designed using Divide and Conquer do satisfy them, and we'll consider several such algorithms in the rest of this chapter. It will turn out that the Master Theorem from Chapter 8 in Part I will be extremely useful for the analysis of this kind of algorithm. At least, this will be useful when all the recursively derived instances of the problem have roughly the same size. You'll need to rely on other techniques, such as the substitution method and the iteration method that were also introduced in Chapter 8 in Part I, when the sizes of the recursively derived instances aren't all the same.

1.3 Binary Search

Consider again the binary search algorithm that was presented in Part I. We've seen already that the algorithm uses a number T(n) of operations, in the worst case, that is given by the recurrence

    T(n) = 2                        if n = 0,
    T(n) = T(⌈(n-1)/2⌉) + 5         if n ≥ 1.
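One way binary search might be implemented is sketched below in Python (an illustration, not necessarily the exact version from Part I; each recursive call receives at most ⌊n/2⌋ of the n elements, matching the recurrence above):

```python
def binary_search(a, key, lo=0, hi=None):
    """Return an index i with a[i] == key in the sorted list a, or None.
    Written recursively so that each call discards about half the range:
    for a range of n elements, each half passed on has at most
    floor(n/2) elements."""
    if hi is None:
        hi = len(a)
    if lo >= hi:                 # empty range: the n = 0 base case
        return None
    mid = (lo + hi) // 2
    if a[mid] == key:
        return mid
    elif a[mid] < key:
        return binary_search(a, key, mid + 1, hi)   # right half
    else:
        return binary_search(a, key, lo, mid)       # left half
```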

Here, n is the number of elements in the array to be searched. This recurrence isn't quite in the form that the Master Theorem considers. However, since ⌈(n-1)/2⌉ is pretty close to ⌊n/2⌋, one might guess or conjecture that T(n) ∈ Θ(U(n)), where

    U(n) = c                        if n ≤ 1,
    U(n) = U(⌊n/2⌋) + 5             if n ≥ 2,

where c is some positive constant. The Master Theorem can be used to prove that U(n) ∈ Θ(log₂ n), so the above guess is equivalent to a conjecture that T(n) ∈ Θ(log₂ n), and the substitution method (introduced in Section 8.4 in Part I) could be used to confirm that this is the case.

Alternatively (if you wish to avoid using the substitution method), you could note that application of the recurrence confirms that T(1) = 7, so the recurrence can be rewritten as

    T(n) = 2                        if n = 0,
    T(n) = 7                        if n = 1,
    T(n) = T(⌈(n-1)/2⌉) + 5         if n ≥ 2.

You could next note that

    ⌈(n-1)/2⌉ = ⌊n/2⌋

for every integer n (prove this by considering the cases that n is even and that n is odd separately), so that the above recurrence can be rewritten as

    T(n) = 2                        if n = 0,
    T(n) = 7                        if n = 1,
    T(n) = T(⌊n/2⌋) + 5             if n ≥ 2.

Now it follows that

    T(n) ≤ 7                        if n ≤ 1,
    T(n) ≤ T(⌊n/2⌋) + 5             if n ≥ 2,

and

    T(n) ≥ 2                        if n ≤ 1,
    T(n) ≥ T(⌊n/2⌋) + 5             if n ≥ 2;

these recurrences can be analyzed using the Master Theorem, so that we can conclude from them that T(n) ∈ Θ(log₂ n) as well, without having to resort to other techniques.

Binary search is discussed in many texts on algorithm design and analysis, including Brassard and Bratley [2] (in Section 7.3), Horowitz, Sahni, and Rajasekaran [6] (in Section 3.2), and Neapolitan and Naimipour [9] (in Section 2.1).

1.4 Merge Sort

One way to sort an array of length n is to break it into two pieces, of lengths ⌈n/2⌉ and ⌊n/2⌋, sort these subarrays recursively, and then merge the contents of the subarrays together to produce a sorted array, provided that n ≥ 2; the problem is trivial (because the given array is already sorted to start with) otherwise.
It's easy to merge two sorted arrays together, to produce a larger sorted array, using a number of comparisons that's linear in the sum of the lengths of the input arrays. In particular, all you need to do is maintain an output array that's initially empty, as well as pointers into the input arrays that initially point to the front elements. As long as both pointers point to elements of the input arrays (so that you haven't fallen off the end of one or the other of the input arrays), it's sufficient to compare the elements that the two pointers point to, append the smaller of these two elements onto the end of the output array, and then advance the pointer for the array from which this element was taken, to point to the next element after the one that's just been appended to the output. As soon as you reach the end of one of the two input arrays (by adding its last element to the output), all you need to do is append the remaining elements of the other array to the output in order to finish. Since only a constant number of operations are performed each time an element is added to the output array, and at least one operation is performed each time this happens, it's clear that both the best and worst case running times for this merge operation are linear in the input size.

Now let T(n) be the number of steps used in the worst case by the recursive sorting algorithm that's described above, when given an array of length n. It follows by the above analysis of the cost of a merge that T(n) satisfies the following recurrences:

    T(n) ≤ c₁                                  if n ≤ 1,
    T(n) ≤ T(⌈n/2⌉) + T(⌊n/2⌋) + d₁n           if n ≥ 2,

and

    T(n) ≥ c₂                                  if n ≤ 1,
    T(n) ≥ T(⌈n/2⌉) + T(⌊n/2⌋) + d₂n           if n ≥ 2,

for positive constants c₁, c₂, d₁, and d₂. Since these recurrences involve two applications of the function T on inputs that are both approximately half as large as the original input, you might guess at this point that T(n) ∈ Θ(U(n)) where

    U(n) = c                        if n ≤ 1,
    U(n) = 2U(⌊n/2⌋) + dn           if n ≥ 2,

where c and d are positive constants, use the Master Theorem to establish that U(n) ∈ Θ(n log₂ n), and then use the substitution method to prove that T(n) ∈ Θ(n log₂ n) as well.
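The merge operation and the sorting scheme described above can be sketched in Python as follows (a simplified illustration that works with lists rather than in-place arrays):

```python
def merge(left, right):
    """Merge two sorted lists, using a number of comparisons that is
    linear in len(left) + len(right), exactly as described above."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])     # one of these two slices is empty
    out.extend(right[j:])
    return out

def merge_sort(a):
    """Sort by splitting into pieces of lengths ceil(n/2) and floor(n/2),
    sorting each recursively, and merging the results."""
    if len(a) <= 1:
        return a             # trivial case: already sorted
    mid = (len(a) + 1) // 2  # ceil(n/2)
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))
```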
Alternatively you could note (or prove, using a straightforward induction) that T(n) is a nondecreasing function, so that it also satisfies the recurrences

    T(n) ≤ c₁                       if n ≤ 1,
    T(n) ≤ 2T(⌈n/2⌉) + d₁n          if n ≥ 2,

and

    T(n) ≥ c₂                       if n ≤ 1,
    T(n) ≥ 2T(⌊n/2⌋) + d₂n          if n ≥ 2,

if T(n) satisfies the recurrences that were originally given for it. Now you can apply the Master Theorem more directly to these recurrences, and then argue from the first that T(n) ∈ O(n log₂ n) and from the second that T(n) ∈ Ω(n log₂ n), establishing again that T(n) ∈ Θ(n log₂ n) (this time, without having to use the substitution method to confirm a guess).

See Horowitz, Sahni, and Rajasekaran [6] (Section 3.4), Neapolitan and Naimipour [9] (Section 2.2), or Sedgewick [11] (Chapter 8) for additional information about this algorithm.

1.5 Asymptotically Fast Integer Multiplication

Next consider the problem of multiplying two nonnegative integers together. Let's call the input integers x and y, and their product (the output) z, and let's try to assess the cost of performing this operation as a function of the maximum of the lengths of the decimal representations of x and y.

The case x = y = 0 is trivial (z = 0 as well, so this instance of the problem could be solved by reading the input, confirming that both input integers are zero, and then writing zero out as the answer), so we'll ignore this case from now on, and we'll assume that at least one of x or y is positive. Thus, we'll consider the input size to be the natural number n, where either

    10^(n-1) ≤ x < 10^n  and  0 ≤ y < 10^n

or

    10^(n-1) ≤ y < 10^n  and  0 ≤ x < 10^n

or both. We'll count operations on digits as having unit cost. Note, by the way, that if we were to use the unit cost criterion (as previously defined) instead, then we'd be considering the input size to be 2 in all cases. We'd probably also be charging 1 as the cost of this computation, and this wouldn't give a very useful analysis.

The standard integer multiplication algorithm that you learned in public school can be analyzed without too much difficulty, and it shouldn't be too difficult to write this down in pseudocode (perhaps assuming that x and y are given by arrays of their digits, and that the output z is to be represented this way too); if you do this then you should discover that the algorithm uses Θ(n²) operations on digits in the worst case.
It isn't too hard to see that the algorithm can't use more than O(n²) of these operations under any circumstances, and it also shouldn't be too hard to see that Ω(n²) of these operations are used if the decimal representations of both x and y have length n.

Here is one more property of this standard algorithm that should be noted, because it will be useful later on: if implemented reasonably carefully, this algorithm uses only a linear amount (Θ(n)) of work space, provided that you assume that each decimal digit has constant size.

Here are a few more observations that will be useful later on:

1. It's possible to compute the sum x + y from x and y using only a linear number of operations on digits and only a constant amount of additional work space, if you don't count the space needed to write down the output: simply use the grade school method for addition. You only need a constant number of operations to compute each digit of the sum, and you only need to remember one extra digit (namely, the carry digit that was computed in the previous step) at any point in the computation.

2. You can implement subtraction, computing the difference x - y on inputs x and y (assuming, if you like, that x ≥ y, so that the answer is always nonnegative, or adding a bit to the output to represent the sign of the answer, otherwise), using asymptotically the same time and storage space as you'd need to implement addition, for the same inputs.

3. If integers are given by their decimal representations, then the operations of multiplying or performing division with remainder by a power of ten are also quite inexpensive.

1.5.1 An Ineffective Approach

Let m = ⌈n/2⌉. Since 0 ≤ x, y < 10^n, it's clear that there exist nonnegative integers x_L, x_U, y_L, and y_U such that

    0 ≤ x_L, x_U, y_L, y_U < 10^m,

and

    x = 10^m · x_U + x_L    and    y = 10^m · y_U + y_L.

Furthermore, if x and y are given by their decimal representations, then you can use these to extract decimal representations of all four of x_L, x_U, y_L, and y_U using only O(n) operations on digits. (Why?) In fact, the lengths of x_U and y_U will be at most ⌊n/2⌋, which is equal to m - 1 if n is odd. On the other hand, at least one of these two integers will have length ⌊n/2⌋, since x's and y's decimal representations would both have length less than n otherwise.

Now, it should be clear that if z = x · y then

    z = 10^(2m) · x_U · y_U + 10^m · (x_L · y_U + x_U · y_L) + x_L · y_L.

Based on this, you might compute z using the following steps, assuming n ≥ 2 (you'd just go ahead and compute the product of a single pair of digits if n = 1):

1. Decompose the inputs to obtain (decimal representations of) x_L, x_U, y_L, y_U.
2. Recursively compute the product z₁ = x_L · y_L.
3. Recursively compute the product z₂ = x_L · y_U.
4. Recursively compute the product z₃ = x_U · y_L.
5. Recursively compute the product z₄ = x_U · y_U.
6. Return z = 10^(2m) · z₄ + 10^m · (z₂ + z₃) + z₁.
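The six steps above can be sketched in Python as follows (an illustration only: Python's built-in integers and its `*` operator on single-digit operands stand in for operations on digit arrays):

```python
def slow_dc_multiply(x, y, n=None):
    """Four-multiplication Divide and Conquer scheme (the ineffective
    approach): still Theta(n**2) digit operations overall."""
    if n is None:
        n = max(len(str(x)), len(str(y)))
    if n <= 1:
        return x * y              # product of a single pair of digits
    m = (n + 1) // 2              # m = ceil(n/2)
    xU, xL = divmod(x, 10 ** m)   # x = 10**m * xU + xL
    yU, yL = divmod(y, 10 ** m)
    z1 = slow_dc_multiply(xL, yL, m)
    z2 = slow_dc_multiply(xL, yU, m)
    z3 = slow_dc_multiply(xU, yL, m)
    z4 = slow_dc_multiply(xU, yU, m)
    return 10 ** (2 * m) * z4 + 10 ** m * (z2 + z3) + z1
```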
The number of operations on digits, T(n), used by this algorithm in the worst case can be seen to satisfy a recurrence of the form

    T(n) ≤ c                                   if n ≤ 1,
    T(n) ≤ 3T(⌈n/2⌉) + T(⌊n/2⌋) + dn           if n ≥ 2,

where c and d are positive constants (note that the recursive multiplication z₄ = x_U · y_U uses slightly smaller inputs than each of the other three, if n is odd, since x_U and y_U have length at most m - 1 in that case). If you can prove (or are allowed to assume) that T(n) is a nondecreasing function, then you may conclude that

    T(n) ≤ c                        if n ≤ 1,
    T(n) ≤ 4T(⌈n/2⌉) + dn           if n ≥ 2,

which makes it easier to apply the Master Theorem. This can be used to establish that T(n) ∈ O(n²).

It isn't quite as easy as it might seem to establish a lower bound, in all cases, that has this form. On the other hand, it is easy to argue that if n is a power of two then

    T(n) ≥ ĉ                        if n ≤ 1,
    T(n) ≥ 4T(n/2) + dn             if n ≥ 2,

for some positive constant ĉ, by considering the input x = y = 10^n - 1 (with decimal representations consisting of strings of nines) in this special case. This, and the fact (or assumption) that T(n) is nondecreasing, can be used to establish that T(n) ∈ Ω(n²) as well. Thus, T(n) ∈ Θ(n²), so this algorithm has the same asymptotic behaviour as the standard one.

A more careful analysis will confirm that the hidden multiplicative constant in the running time for this algorithm is larger than the corresponding constant for the standard algorithm, at least in the special case that n is a power of two. The new algorithm isn't significantly better for other values of n. So, while the two algorithms have the same asymptotic cost, this new algorithm is always slower (although by only a constant factor) than the simpler one. To make matters worse, the new algorithm requires quadratic storage space as well as time, so the new algorithm is of no practical interest.

1.5.2 A More Effective Approach

The Algorithm

Now, let u = x_U - x_L and v = y_U - y_L, and note that the absolute values of u and v are nonnegative integers such that 0 ≤ |u|, |v| < 10^m, for m as above. Since

    u · v = x_U · y_U - x_U · y_L - x_L · y_U + x_L · y_L,

it is reasonably easy to confirm that

    z = x · y = 10^(2m) · x_U · y_U + 10^m · (x_U · y_U + x_L · y_L - u · v) + x_L · y_L.

While, at first glance, this might not look any better than the expressions given above, it is the basis for the correctness of a recursive integer multiplication algorithm, in which you perform the following steps when n ≥ 2:

1. Decompose the inputs to obtain (decimal representations of) x_L, x_U, y_L, and y_U.
2. Use integer subtraction to compute u = x_U - x_L and v = y_U - y_L.
Then compute (and remember) the signs of u and v, and compute |u| and |v| as well.

3. Recursively compute z₁ = x_U · y_U.
4. Recursively compute |u| · |v|, and use this, along with the signs of u and v, to recover the product z₂ = u · v.
5. Recursively compute z₃ = x_L · y_L.
6. Compute z = 10^(2m) · z₁ + 10^m · (z₁ + z₃ - z₂) + z₃.
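These steps can be sketched in Python as follows (again an illustration, with built-in integers standing in for digit arrays; the variable names follow the text):

```python
def karatsuba(x, y):
    """Three-multiplication scheme from the text:
    z = 10**(2m)*z1 + 10**m*(z1 + z3 - z2) + z3,
    where z1 = xU*yU, z2 = u*v (recovered from |u|*|v| and the signs),
    and z3 = xL*yL."""
    if x < 10 or y < 10:
        return x * y              # base case: a single-digit operand
    n = max(len(str(x)), len(str(y)))
    m = (n + 1) // 2              # m = ceil(n/2)
    xU, xL = divmod(x, 10 ** m)
    yU, yL = divmod(y, 10 ** m)
    u = xU - xL                   # may be negative
    v = yU - yL
    z1 = karatsuba(xU, yU)
    z2 = karatsuba(abs(u), abs(v))
    if (u < 0) != (v < 0):        # recover the sign of u*v
        z2 = -z2
    z3 = karatsuba(xL, yL)
    return 10 ** (2 * m) * z1 + 10 ** m * (z1 + z3 - z2) + z3
```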

Analysis

After proving (or, if you are allowed to, assuming) that the number of steps T(n) used by this algorithm is nondecreasing, you can argue that this running time satisfies the recurrence

    T(n) ≤ c                        if n ≤ 1,
    T(n) ≤ 3T(⌈n/2⌉) + dn           if n ≥ 2,

for positive constants c and d, because the algorithm uses only three recursive multiplications of integers that are approximately half as large as the original inputs, along with additional operations that can be performed in linear time. The Master Theorem can be used to find a closed form for this recurrence, and this can be used to establish that T(n) ∈ O(n^(log₂ 3)). Since log₂ 3 < 1.6, this implies that T(n) ∈ O(n^1.6).

This is very similar to the first sub-quadratic integer multiplication algorithm, Karatsuba's algorithm, which was discovered by Karatsuba and Ofman [7] in the early 1960s. The fact that n-digit integer multiplication could be performed using o(n²) operations on digits was extremely surprising at the time.

There is good news, and bad news, concerning this approach. Here is the good news: while these algorithms aren't quite efficient enough to replace standard multiplication for single- or double-precision integer computations, they are efficient enough to be considered efficient (and practical) when used to multiply integers that are slightly larger. The threshold (between input sizes on which the standard algorithm is superior, and sizes on which the asymptotically efficient algorithm is the better of the two) is low enough that, in the 1990s, it is reasonably common to find integer multiplication algorithms using time in O(n^(log₂ 3)) being implemented and used for extended-precision computations.

Here is the bad news: the storage space required by the asymptotically faster algorithm is (also) in Θ(n^(log₂ 3)), so the standard algorithm is preferable if storage space is the resource bottleneck to be concerned about, rather than running time.
Neapolitan and Naimipour [9] also discuss the above algorithm (in Section 2.6).

1.5.3 More About the Complexity of Integer Arithmetic

This topic is beyond the scope of this course, so this subsection is not required reading.

Algorithms for integer multiplication that are asymptotically even faster do exist and have been known for some time. Indeed, one of the exercises at the end of this chapter involves the derivation of one such algorithm. The asymptotically fastest algorithm that is currently known is the algorithm of Schönhage and Strassen [10], which is based on the fast Fourier transform and can be used to multiply two n-digit integers together using O(n (log n)(log log n)) operations on digits. Unlike the above Karatsuba-like algorithms, this is currently only considered to be of theoretical interest. That is, implementations of it aren't common, and it isn't widely used in practice.

The fast Fourier transform over fields is discussed in Chapter 32 of Cormen, Leiserson, and Rivest [5], and the generalization of this to computations over rings (which is needed for Schönhage and Strassen's algorithm) is discussed in exercises there. A more extensive discussion of this topic, which includes Schönhage and Strassen's algorithm and its analysis, appears in Chapter 7 of Aho, Hopcroft, and Ullman [1].

Divide and Conquer can also be used to design asymptotically efficient algorithms for related integer computations, including integer division with remainder and the computation of the greatest common divisor of two integers. These algorithms and their analysis can be found in Chapter 8 of Aho, Hopcroft, and Ullman [1]. Finally, Knuth [8] includes a much more extensive discussion of algorithms for integer arithmetic than any of the above.

1.6 Asymptotically Fast Matrix Multiplication

This last example will be skipped if there isn't time for it.

Consider now the problem of computing the product of two n × n matrices. To simplify the analysis, we'll consider field operations (or operations on scalars) to have unit cost. The standard matrix multiplication algorithm, which you may have learned in high school, can be used to perform this computation using approximately 2n³ (more precisely, 2n³ - n²) of these operations: it computes each of the n² entries of the product matrix, one at a time, by taking the inner product of a pair of vectors, using n multiplications and n - 1 additions of scalars for each. In contrast, matrix addition and subtraction seem to be much cheaper, since you add and subtract matrices componentwise; exactly n² scalar operations are needed to either add or subtract two n × n matrices.

A recursive algorithm can be developed using Divide and Conquer for matrix multiplication as well. To simplify the description of this algorithm, let's suppose henceforth that n is a power of two; we'll remove this assumption later. Now note that if X and Y are two n × n matrices, then you can write them as

    X = [ X₁,₁  X₁,₂ ]        and        Y = [ Y₁,₁  Y₁,₂ ]
        [ X₂,₁  X₂,₂ ]                       [ Y₂,₁  Y₂,₂ ]

where X_i,j and Y_i,j are (n/2) × (n/2) matrices for all i and j.
In this case the product of X and Y is

    Z = [ Z₁,₁  Z₁,₂ ]
        [ Z₂,₁  Z₂,₂ ]

where Z_i,j = X_i,1 · Y_1,j + X_i,2 · Y_2,j for 1 ≤ i, j ≤ 2. It isn't too difficult to write a recursive matrix multiplication algorithm, using this approach, that uses T(n) operations on scalars to multiply a pair of n × n matrices, where

    T(n) = 1                        if n = 1,
    T(n) = 8T(n/2) + f(n)           if n ≥ 2,

for an asymptotically positive function f(n) ∈ Θ(n²). Unfortunately, an analysis of this recurrence proves that T(n) ∈ Θ(n³). So, asymptotically, this is no better than the standard algorithm, and a more careful analysis will confirm that it has no practical interest (in that you should expect it to be consistently slower than the standard algorithm, and it requires additional storage space as well).
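A sketch of this eight-multiplication recursive algorithm in Python (an illustration, with matrices represented as lists of lists, and n assumed to be a power of two):

```python
def block_multiply(X, Y):
    """Recursive block matrix multiplication: Z_ij = X_i1*Y_1j + X_i2*Y_2j.
    Eight recursive multiplications of (n/2) x (n/2) blocks, so still
    Theta(n**3) scalar operations."""
    n = len(X)
    if n == 1:
        return [[X[0][0] * Y[0][0]]]
    h = n // 2
    def quad(M, r, c):           # extract an h x h block at (r, c)
        return [row[c:c + h] for row in M[r:r + h]]
    def add(A, B):               # componentwise matrix addition
        return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
    X11, X12, X21, X22 = quad(X,0,0), quad(X,0,h), quad(X,h,0), quad(X,h,h)
    Y11, Y12, Y21, Y22 = quad(Y,0,0), quad(Y,0,h), quad(Y,h,0), quad(Y,h,h)
    Z11 = add(block_multiply(X11, Y11), block_multiply(X12, Y21))
    Z12 = add(block_multiply(X11, Y12), block_multiply(X12, Y22))
    Z21 = add(block_multiply(X21, Y11), block_multiply(X22, Y21))
    Z22 = add(block_multiply(X21, Y12), block_multiply(X22, Y22))
    return [a + b for a, b in zip(Z11, Z12)] + \
           [a + b for a, b in zip(Z21, Z22)]
```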

1.6.1 Strassen's Algorithm

In the late 1960s, Strassen [13] described an algorithm for the multiplication of two n × n matrices using seven recursive multiplications of (n/2) × (n/2) matrices and Θ(n²) additional operations on scalars, rather than eight. Several such algorithms are now known, and one of them is presented below.

Suppose again that n is a power of two, and consider the matrices X, Y, Z, and X_i,j, Y_i,j, and Z_i,j for 1 ≤ i, j ≤ 2 mentioned above. The next seven (n/2) × (n/2) matrices can be computed from these using exactly ten additions or subtractions of (n/2) × (n/2) matrices and exactly seven multiplications of (n/2) × (n/2) matrices:

    P = (X₁,₁ + X₂,₂) · (Y₁,₁ + Y₂,₂);
    Q = (X₂,₁ + X₂,₂) · Y₁,₁;
    R = X₁,₁ · (Y₁,₂ - Y₂,₂);
    S = X₂,₂ · (Y₂,₁ - Y₁,₁);                                   (1.1)
    T = (X₁,₁ + X₁,₂) · Y₂,₂;
    U = (X₂,₁ - X₁,₁) · (Y₁,₁ + Y₁,₂);
    V = (X₁,₂ - X₂,₂) · (Y₂,₁ + Y₂,₂).

The above expressions indicate how these seven matrices should be computed as part of an asymptotically fast matrix multiplication algorithm, in that they show which matrix additions and subtractions are needed to form seven pairs of (n/2) × (n/2) matrices whose products should be recursively computed. However, since matrix multiplication and addition satisfy the usual distributive laws (even though matrix multiplication isn't commutative), these matrices also satisfy the following equations:

    P = X₁,₁ · Y₁,₁ + X₁,₁ · Y₂,₂ + X₂,₂ · Y₁,₁ + X₂,₂ · Y₂,₂;
    Q = X₂,₁ · Y₁,₁ + X₂,₂ · Y₁,₁;
    R = X₁,₁ · Y₁,₂ - X₁,₁ · Y₂,₂;
    S = X₂,₂ · Y₂,₁ - X₂,₂ · Y₁,₁;
    T = X₁,₁ · Y₂,₂ + X₁,₂ · Y₂,₂;
    U = X₂,₁ · Y₁,₁ + X₂,₁ · Y₁,₂ - X₁,₁ · Y₁,₁ - X₁,₁ · Y₁,₂;
    V = X₁,₂ · Y₂,₁ + X₁,₂ · Y₂,₂ - X₂,₂ · Y₂,₁ - X₂,₂ · Y₂,₂.

While it's tedious, the above equations can be used to confirm that the following identities are satisfied too:

    Z₁,₁ = P + S - T + V;
    Z₁,₂ = R + T;
    Z₂,₁ = Q + S;                                               (1.2)
    Z₂,₂ = P + R - Q + U.

It follows that the product matrix Z can be computed from P, Q, R, S, T, U, and V using an additional eight additions and subtractions of (n/2) × (n/2) matrices (and no more matrix multiplications). In total, then, the computation of Z from X and Y (based on equations 1.1 and 1.2 above) uses eighteen additions or subtractions of (n/2) × (n/2) matrices (which require a total of (9/2)n² additions or subtractions of scalars), as well as seven multiplications of (n/2) × (n/2) matrices (each of which should be performed recursively).
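The resulting algorithm can be sketched in Python as follows (an illustration, with matrices as lists of lists and n a power of two; P through V are the seven products from equations 1.1, recombined as in equations 1.2):

```python
def strassen(X, Y):
    """Strassen's scheme: seven recursive multiplications of
    (n/2) x (n/2) blocks plus eighteen block additions/subtractions."""
    n = len(X)
    if n == 1:
        return [[X[0][0] * Y[0][0]]]
    h = n // 2
    quad = lambda M, r, c: [row[c:c + h] for row in M[r:r + h]]
    add = lambda A, B: [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
    sub = lambda A, B: [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
    X11, X12, X21, X22 = quad(X,0,0), quad(X,0,h), quad(X,h,0), quad(X,h,h)
    Y11, Y12, Y21, Y22 = quad(Y,0,0), quad(Y,0,h), quad(Y,h,0), quad(Y,h,h)
    P = strassen(add(X11, X22), add(Y11, Y22))
    Q = strassen(add(X21, X22), Y11)
    R = strassen(X11, sub(Y12, Y22))
    S = strassen(X22, sub(Y21, Y11))
    T = strassen(add(X11, X12), Y22)
    U = strassen(sub(X21, X11), add(Y11, Y12))
    V = strassen(sub(X12, X22), add(Y21, Y22))
    Z11 = add(sub(add(P, S), T), V)   # Z11 = P + S - T + V
    Z12 = add(R, T)                   # Z12 = R + T
    Z21 = add(Q, S)                   # Z21 = Q + S
    Z22 = add(sub(add(P, R), Q), U)   # Z22 = P + R - Q + U
    return [a + b for a, b in zip(Z11, Z12)] + \
           [a + b for a, b in zip(Z21, Z22)]
```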

Let T(n) be the number of operations on scalars used by the Divide and Conquer algorithm for n × n matrix multiplication that's been sketched. Then, if n is a power of two,

    T(n) = 1                        if n = 1,
    T(n) = 7T(n/2) + (9/2)n²        if n ≥ 2.

The Master Theorem can be used to solve this recurrence and to establish that T(n) ∈ Θ(n^(log₂ 7)). Since log₂ 7 < 2.81, this implies that T(n) ∈ O(n^2.81) (indeed, T(n) ∈ o(n^2.81)), and therefore that T(n) ∈ o(n³). It follows that this algorithm is asymptotically faster than standard matrix multiplication.

Now suppose that n is not a power of two and that you want to compute the product of two n × n matrices X and Y, as above. Set n̂ = 2^⌈log₂ n⌉, so that n̂ is a power of two such that n ≤ n̂ < 2n, and consider the following two n̂ × n̂ matrices,

    X̂ = [ X  0 ]        and        Ŷ = [ Y  0 ]
        [ 0  0 ]                       [ 0  0 ],

which have X and Y, respectively, as their top left n × n submatrices, and which don't have any nonzero entries anywhere else. It should be clear that the product of X̂ and Ŷ is a matrix whose top left n × n submatrix is XY, so you can multiply X by Y by forming and multiplying together X̂ and Ŷ instead. Since n̂ is a power of two, the above asymptotically fast matrix multiplication algorithm can be used to multiply X̂ by Ŷ. Since n̂ < 2n, the resulting algorithm still uses O(n^(log₂ 7)) operations to multiply X and Y together.

To my knowledge, this algorithm is not widely considered to be of practical interest, in part because it is not clear that it is numerically stable, and in part because it does not perform well on small- or moderately-sized inputs. However, this opinion may be changing (at least, when computations are exact, so that numerical stability isn't an issue).
You can find more information about Divide and Conquer algorithms for matrix multiplication with the above asymptotic cost in Aho, Hopcroft, and Ullman [1] (Chapter 6), Brassard and Bratley [2] (Section 7.6), Cormen, Leiserson, and Rivest [5] (Section 31.2), Horowitz, Sahni, and Rajasekaran [6] (Section 3.7), or Neapolitan and Naimipour [9] (Section 2.5). Cormen, Leiserson, and Rivest also attempt to describe how one could go about deriving equations like equations 1.1 and 1.2, so you might want to look at their presentation in order to see this.

1.6.2 More About the Complexity of Matrix Multiplication

This topic is beyond the scope of CPSC 413, so this subsection is not required reading.

Asymptotically faster matrix multiplication algorithms are also known to exist, and Brassard and Bratley [2] include a brief discussion of the history of research on the complexity of matrix multiplication. The most recent result that has improved the theoretical upper bound on the complexity of matrix multiplication is that of Coppersmith and Winograd [4], who show that it is possible to multiply two n × n matrices together using O(n^α) operations on scalars, for a constant α < 2.39. However, at the moment (and, probably, for the foreseeable future), it seems highly unlikely that this result will ever be of more than theoretical interest; indeed, the only matrix multiplication algorithms that are currently considered to be practical are the standard algorithm and, possibly, the algorithms with complexity Θ(n^(log₂ 7)) discussed above.

The problem of solving a nonsingular n × n system of linear equations can be proved to have the same asymptotic complexity as matrix multiplication, so this problem can also be solved (at least, theoretically) in sub-cubic time. It's known that one can also compute various factorizations of matrices (including an LUP factorization of a nonsingular matrix), and one can compute the rank of a matrix, at this cost. Both Aho, Hopcroft, and Ullman [1] (Chapter 6) and Cormen, Leiserson, and Rivest [5] (Chapter 31) present at least some of this material, with Aho, Hopcroft, and Ullman providing more of it. Finally, Bürgisser, Clausen, and Shokrollahi [3] includes far more information about the complexity of matrix multiplication and related problems.

1.7 A Bit About Implementations

This is beyond the scope of CPSC 413 (so this section is also optional). However, at least one thing can be noted: while the above Divide and Conquer algorithms have been presented as purely recursive, in that a recursive approach is described as being used even when n is very small (namely, when n = 2), this is certainly not how Divide and Conquer algorithms should be implemented.

For example, one way to write an algorithm with almost the same performance as the standard integer multiplication algorithm on small inputs, but with the same asymptotic complexity as the Karatsuba-like algorithms that were described above, would be to start by comparing n to some pre-determined threshold value, k. If it was found that n ≤ k then the standard algorithm would be used, and Karatsuba's algorithm would be used otherwise. The threshold would be chosen using a more careful theoretical analysis, experimental techniques (including profiling of code, etc.),
or both, and the best choice of the threshold might depend on such things as the skill of the programmer, the programming language, the operating system, the hardware, and so on.

It would likely be even better, though, to write a recursive algorithm A with the structure

    if n ≤ k then
        Perform the multiplication using the standard algorithm
    else
        Proceed as with a Karatsuba-like algorithm, calling algorithm A
        when it is necessary to multiply smaller integers recursively.
    end if

The difference between this algorithm and the previous one is that if n is large then it will behave initially like the algorithm based on Divide and Conquer that has already been described, forming smaller and smaller instances of the problem that are to be solved recursively. At some point, though, the smaller instances will have size less than or equal to k, and the standard algorithm will then be applied to solve all of these, even though the recursive approach was used to form these smaller problem instances at the beginning.

This algorithm has a small amount of additional overhead that the previous version lacks: input sizes are being compared to the threshold value k more often. In spite of that, it should be at least plausible (since k is a constant, so that comparing it to the input size will be inexpensive) that you could profitably choose a larger threshold value when using this version of a hybrid algorithm than you could for the previous one, and also that a careful implementation of this version of the algorithm could prove to be superior to the previous ones. (Why?)
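The second hybrid scheme can be sketched in Python as follows (an illustration only: the threshold value 32 is arbitrary, and Python's built-in `*` on small operands stands in for the standard algorithm below the threshold):

```python
THRESHOLD = 32   # illustrative only; a good k is machine- and
                 # implementation-dependent, as discussed above

def hybrid_multiply(x, y):
    """Recursive hybrid: Karatsuba-like recursion on large inputs, falling
    back to the standard algorithm (here, the built-in product) once the
    operands have at most THRESHOLD digits."""
    n = max(len(str(x)), len(str(y)))
    if n <= THRESHOLD:
        return x * y
    m = (n + 1) // 2
    xU, xL = divmod(x, 10 ** m)
    yU, yL = divmod(y, 10 ** m)
    z1 = hybrid_multiply(xU, yU)
    z3 = hybrid_multiply(xL, yL)
    u, v = xU - xL, yU - yL
    z2 = hybrid_multiply(abs(u), abs(v))
    if (u < 0) != (v < 0):
        z2 = -z2
    return 10 ** (2 * m) * z1 + 10 ** m * (z1 + z3 - z2) + z3
```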

Something more to think about: What would be a good choice of the threshold value, k, for each of the hybrid algorithms described above?

Another implementation detail, more specific to the problem of integer multiplication, has to do with the fact that the above integer multiplication algorithms were presented as if decimal representations of the input and output integers were being used. Similar algorithms can be obtained in which you produce a base B representation of the output integer z from base B representations of the inputs, for any constant B ≥ 2 that you want. So, you can produce Karatsuba-like multiplication algorithms that work with binary, or hexadecimal, representations of integers. You can also choose B to be much larger, so that only one digit, or a pair of digits, fits into a word of machine memory. This can lead to an algorithm that is more efficient than it would be if B = 10 (or if B had some other small value), while making more efficient use of storage space at the same time. (The choice such that a pair of digits fits into a word should be considered, because it allows an arbitrary product of a pair of digits to fit into a word of memory as well, and this might simplify the implementation of an integer multiplication algorithm.)

1.8 Additional References

Brassard and Bratley [2], Horowitz, Sahni, and Rajasekaran [6], and Neapolitan and Naimipour [9] all include chapters on Divide and Conquer. Each includes one or more additional examples, and several include a bit more information about how you'd choose threshold values when implementing these algorithms.

1.9 Exercises

Clearly, most of these exercises are too long to be used on tests in this course. This will be true for many of the exercises included for the algorithm design topics in the next two chapters as well. However, you will certainly be well prepared for tests if you're able to solve these problems without too much trouble.
Hints for some of these exercises are given in the section after this one. Solutions for the first two of these exercises can be found in a later subsection.

1. Suppose you're given a positive integer n and that you wish to construct a binary search tree storing the values 1, 2, ..., n that's as balanced as possible. Once you've decided which value to store at the root, you will have no choice about which values to include in the left subtree and which values to include in the right. In order to make your tree as balanced as possible, you should store (n+1)/2 at the root if n is odd, and you should store either n/2 or n/2 + 1 at the root if n is even. In the latter case, it doesn't matter which you choose, so you might as well choose the smaller element, n/2.

Note, also, that if you have a balanced binary search tree storing the values 1, 2, ..., n, then it's easy to turn this into a balanced binary search tree storing k+1, k+2, ..., k+n for any given integer k: all you need to do is add k to the values stored at each node of the given tree.

(a) Based on this, design a Divide and Conquer algorithm to construct a balanced binary search tree storing the values 1, 2, ..., n when you're given n as input.

(b) Next, write down a recurrence for the time used by your algorithm on input n, assuming that it takes constant time to create a node or a pointer to one, to change the value stored at a node (by adding some value to it), or to divide a given value by two.

(c) Finally, find a function f(n) in closed form such that your algorithm uses Θ(f(n)) of the above steps in the worst case.

2. Now change your algorithm so that it takes a second input, k, and produces a balanced binary search tree storing k+1, k+2, ..., k+n instead. Your new algorithm should use the same number of recursive calls as the old one, but it should do less additional work (since you can change the value of the second parameter, k, when you recurse). Once again, form and solve a recurrence for the worst case running time of your algorithm. You should discover that the new algorithm requires substantially fewer operations in the worst case than the original one did.

3. If n ≥ 1 and 0 ≤ i ≤ n then the binomial coefficient C(n, i) satisfies the following equation:

    C(n, i) = 1                          if i = 0 or i = n,
    C(n, i) = C(n-1, i-1) + C(n-1, i)    if 1 ≤ i ≤ n-1.

Suppose, for the purposes of this question, that this is all you know about this value; in particular, you should pretend that you don't know any other expression for C(n, i).

(a) Design a Divide and Conquer algorithm that computes C(n, i) on inputs n and i, assuming that 0 ≤ i ≤ n (you may return the output 0 if i is out of range).

(b) Then write down a recurrence for the time required by your algorithm on inputs n and i, as a function of n alone, in the worst case. You should assume that it's possible to add two integers together or to compare two integers in constant time when you generate this recurrence.

(c) Finally, find a function f(n) in closed form such that your algorithm uses time O(f(n)) on inputs n and i in the worst case.

4. Recall that the Fibonacci numbers F_0, F_1, F_2, ...
are defined by the recurrence

    F_n = 0                  if n = 0,
    F_n = 1                  if n = 1,
    F_n = F_{n-1} + F_{n-2}  if n ≥ 2,

and note that this implies that

    F_n = F_1 F_n + F_0 F_{n-1} = F_2 F_{n-1} + F_1 F_{n-2}

whenever n ≥ 2.

(a) Prove by induction on i that if i ≥ 1 and n is any integer such that n ≥ i, then

    F_n = F_i F_{n-i+1} + F_{i-1} F_{n-i}.

(b) Use the result from part (a) to show that if n ≥ 1 and l = ⌊n/2⌋ then

    F_n = F_l (F_{l-1} + F_{l+1}) = F_l (F_l + 2 F_{l-1})            if n is even (so that n = 2l),
    F_n = F_l^2 + F_{l+1}^2 = 2 F_l^2 + 2 F_l F_{l-1} + F_{l-1}^2    if n is odd (so that n = 2l + 1).

(c) Use the above result to design a Divide and Conquer algorithm for computation of the n-th Fibonacci number F_n on input n, such that the number of arithmetic operations (additions, multiplications, subtractions, and divisions with remainder of pairs of integers) is in O(n), and prove that your algorithm does have this worst case running time (if integer arithmetic is assumed to have unit cost).

5. Of course, it isn't realistic to assume that integer arithmetic can be performed in constant time if the integers can be arbitrarily large: we should probably be counting the number of operations on digits (or operations on integers of some larger fixed size) that are performed by an algorithm instead, when considering the algorithm's running time.

It's possible to add or subtract two m-digit integers using Θ(m) operations on digits. At this point, we've also seen or know about at least three different algorithms for integer multiplication, which could be used to multiply two m-digit integers together:

(a) Standard multiplication (which you learned to use in public school) uses Θ(m^2) operations on digits.

(b) The Karatsuba-like multiplication algorithm given in this chapter of the notes uses Θ(m^(log_2 3)) operations on digits.

(c) The Schönhage-Strassen multiplication algorithm that is described in [10], [1], or [8] uses Θ(m (log_2 m)(log_2 log_2 m)) operations on digits.

It's possible to perform integer division with remainder on m-digit integers using O(m^2) operations on digits. One can do even better than this as well, but you won't need to in order to solve this problem. Finally, you should note that the length of a decimal representation of F_m is in Θ(m).
This is implied by the results that are proved or left as exercises in Chapter 2 in Part I, and you may use this fact without proving it here.

Perform a more careful analysis of the algorithm you designed for the previous exercise, to show that it's possible to compute F_n on input n at the following costs:

(a) If standard multiplication is used for integer multiplication, then F_n can be computed from n using O(n^2) operations on digits.

(b) If a Karatsuba-like multiplication algorithm is used for integer multiplication, then F_n can be computed from n using O(n^(log_2 3)) operations on digits.

(c) If the Schönhage-Strassen algorithm is used for integer multiplication, then F_n can be computed from n using O(n (log_2 n)^2 (log_2 log_2 n)) operations on digits.

You don't need to know anything about these multiplication algorithms, except for the fact that they use the numbers of operations listed above, in order to solve this problem. On the other hand, it definitely won't be sufficient just to multiply the number of integer arithmetic operations that your algorithm uses by the cost (number of operations on digits) of the most expensive arithmetic operation; a more careful analysis will be needed!
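The identities in Exercise 4(b) translate directly into a Divide and Conquer algorithm. Here is a hedged Python sketch of one possible shape of the algorithm that Exercises 4 and 5 ask about (not the only correct one): it computes F_l and F_{l-1} with two separate recursive calls and combines them using the identities above.

```python
def fib(n):
    """Compute the n-th Fibonacci number F_n using the identities
    F_(2l)   = F_l * (F_l + 2 * F_(l-1))          (n even)
    F_(2l+1) = F_l**2 + F_(l+1)**2                (n odd),
    where F_(l+1) = F_l + F_(l-1)."""
    if n == 0:
        return 0
    if n == 1:
        return 1
    l = n // 2
    fl = fib(l)        # F_l
    fl1 = fib(l - 1)   # F_(l-1)
    if n % 2 == 0:     # n = 2l
        return fl * (fl + 2 * fl1)
    else:              # n = 2l + 1
        return fl * fl + (fl + fl1) ** 2
```

Each call makes two recursive calls on inputs of roughly half the size and a constant number of arithmetic operations, so the number of arithmetic operations satisfies a recurrence of the form T(n) ≤ 2T(n/2) + c, which is consistent with the O(n) bound from Exercise 4(c).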

6. Suppose h(x) = h_4 x^4 + h_3 x^3 + h_2 x^2 + h_1 x + h_0 is an integer polynomial with degree at most four (so that h_0, h_1, ..., h_4 ∈ Z). Suppose as well that v_{-2}, v_{-1}, v_0, v_1, and v_2 are defined as follows.

    v_{-2} = h(-2) = 16 h_4 - 8 h_3 + 4 h_2 - 2 h_1 + h_0;
    v_{-1} = h(-1) = h_4 - h_3 + h_2 - h_1 + h_0;
    v_0 = h(0) = h_0;
    v_1 = h(1) = h_4 + h_3 + h_2 + h_1 + h_0;
    v_2 = h(2) = 16 h_4 + 8 h_3 + 4 h_2 + 2 h_1 + h_0.

In other words,

    [ v_{-2} ]   [ 16  -8   4  -2   1 ] [ h_4 ]
    [ v_{-1} ]   [  1  -1   1  -1   1 ] [ h_3 ]
    [ v_0    ] = [  0   0   0   0   1 ] [ h_2 ]
    [ v_1    ]   [  1   1   1   1   1 ] [ h_1 ]
    [ v_2    ]   [ 16   8   4   2   1 ] [ h_0 ]

(a) Confirm that the following identities are satisfied as well (or, at least, explain how you could do this):

    h_0 = v_0;
    h_1 = (1/12) v_{-2} - (2/3) v_{-1} + (2/3) v_1 - (1/12) v_2 = (1/12)(v_{-2} - 8 v_{-1} + 8 v_1 - v_2);
    h_2 = -(1/24) v_{-2} + (2/3) v_{-1} - (5/4) v_0 + (2/3) v_1 - (1/24) v_2 = (1/24)(-v_{-2} + 16 v_{-1} - 30 v_0 + 16 v_1 - v_2);
    h_3 = -(1/12) v_{-2} + (1/6) v_{-1} - (1/6) v_1 + (1/12) v_2 = (1/12)(-v_{-2} + 2 v_{-1} - 2 v_1 + v_2);
    h_4 = (1/24) v_{-2} - (1/6) v_{-1} + (1/4) v_0 - (1/6) v_1 + (1/24) v_2 = (1/24)(v_{-2} - 4 v_{-1} + 6 v_0 - 4 v_1 + v_2).

That is,

    [ h_0 ]        [  0    0   24    0    0 ] [ v_{-2} ]
    [ h_1 ]    1   [  2  -16    0   16   -2 ] [ v_{-1} ]
    [ h_2 ] = --   [ -1   16  -30   16   -1 ] [ v_0    ]
    [ h_3 ]   24   [ -2    4    0   -4    2 ] [ v_1    ]
    [ h_4 ]        [  1   -4    6   -4    1 ] [ v_2    ]

Note that these equations imply that the coefficients of any polynomial h(x) with degree at most four can be recovered from the polynomial's values at -2, -1, 0, 1, and 2.

(b) Suppose α(x) and β(x) are both integer polynomials with degree at most two, and that γ(x) = α(x) · β(x) is their product, so that γ(x) is an integer polynomial with degree at most four. Explain how you could compute the coefficients of γ(x) from α(-2), α(-1), ..., α(2) and β(-2), β(-1), ..., β(2) without computing the coefficients of α(x) or β(x) first. Hint: Note that γ(i) = α(i) · β(i) for any integer i.

(c) Now note that if a and b are two nonnegative integers whose decimal representations have length at most n (so that 0 ≤ a, b < 10^n), and if m = ⌈n/3⌉, then

    a = a_2 10^{2m} + a_1 10^m + a_0 = α(10^m)

and

    b = b_2 10^{2m} + b_1 10^m + b_0 = β(10^m),

where a_0, a_1, a_2, b_0, b_1, b_2 are nonnegative integers such that 0 ≤ a_i, b_i < 10^m for i between 0 and 2,

    α(x) = a_2 x^2 + a_1 x + a_0 and β(x) = b_2 x^2 + b_1 x + b_0,

and so that a · b = γ(10^m) if γ(x) = α(x) · β(x).

Use this observation and the results from parts (a) and (b) to design another Divide and Conquer algorithm for nonnegative integer multiplication. As well as calling itself recursively, your algorithm might add or subtract integers; multiply an integer by a small integer constant (between -4 and 4); or perform exact division by a small integer constant (between 1 and 24), where the division is exact because you'll always be dividing one integer k by another integer l such that k is an integer multiple of l (in other words, the remainder will always be zero). For the rest of this question, you should assume (correctly) that all three of the above operations can be performed using a number of operations that's linear in the length of the decimal representation(s) of the integer(s) you started with.

Here's a hint to make sure you're on the right track: your algorithm should form and recursively solve exactly five smaller integer multiplication problems in order to multiply a by b whenever n ≥ 3. There's no need to recurse at all if n ≤ 2.

(d) Write down a recurrence for the number T(n) of operations on digits used by your algorithm and prove that T(n) ∈ O(n^(log_3 5)) ⊆ O(n^1.47).

7. Consider the Selection problem, in which you're given as inputs an array A of integers, where A has length n (that is, n integers, which aren't necessarily distinct, are stored in array locations A[1], A[2], ..., A[n]), and an integer k such that 1 ≤ k ≤ n, and whose output should be an integer x that is the k-th smallest element stored in A, so that

    A[i] = x for some integer i between 1 and n,
    A[j] ≤ x for (at least) k integers j between 1 and n, and
    A[l] ≥ x for (at least) n - k + 1 integers l between 1 and n.
If the integers stored in A are distinct then x and the above array index i will be unique. If A stores multiple copies of one or more integers then x will still be unique, but the array index i might not be.

The median of the above array A is the value x that satisfies the above conditions when k = ⌈n/2⌉, so that half the integers stored in A are less than or equal to x. The Median Finding problem has the array A as input and returns the median of A as output.
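To make the specification concrete, here is a deliberately naive Python rendering of it (not the algorithm this exercise develops; it sorts, which costs Θ(n log n) comparisons, while the algorithm below will use only Θ(n)). The 1-based index k follows the problem statement.

```python
def naive_select(A, k):
    """Return the k-th smallest element of A (1 <= k <= len(A)),
    found by sorting; duplicates are counted with multiplicity."""
    assert 1 <= k <= len(A)
    return sorted(A)[k - 1]

def naive_median(A):
    """The median as defined above: the ceil(n/2)-th smallest element."""
    n = len(A)
    return naive_select(A, (n + 1) // 2)
```

A reference implementation like this is also a convenient oracle for testing a Divide and Conquer solution on small random arrays.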

Note that it's easy to write an algorithm for the Median Finding problem if you already know an algorithm for the Selection problem: all you'd need to do is set k = ⌈n/2⌉, execute the algorithm for Selection on inputs A and k, and then return the output that the Selection algorithm generates.

In this question, you'll be asked to design a Divide and Conquer algorithm for Selection. We'll define T(n) to be the number of comparisons of (pairs of) integers stored in A that this algorithm uses, when the input array A has length n, in the worst case.

If n < 20 then the k-th smallest element stored in A can be found (so that an instance of the Selection problem involving A can be solved) by sorting A and returning the entry that's in position k in the resulting sorted array. Clearly this uses at most some constant number of comparisons of integers (since n < 20). So we'll consider both the Selection and Median Finding problems to be solved for the case n < 20, and we'll assume that n ≥ 20 from now on.

In the first stage of the algorithm (for the case n ≥ 20), the array will be split into ⌈n/5⌉ subarrays, where each subarray has length at most five: the first subarray includes the elements A[1], A[2], ..., A[5], the second subarray includes A[6], A[7], ..., A[10], and so on. Then, for 1 ≤ i ≤ ⌈n/5⌉, the median x_i of the i-th of these subarrays will be computed and written into the i-th location, B[i], of a new array B (which has length ⌈n/5⌉). Note that this implies that

    B[i] ∈ {A[5i-4], A[5i-3], ..., A[5i]} if 1 ≤ i < ⌈n/5⌉

and that

    B[i] ∈ {A[5⌈n/5⌉-4], A[5⌈n/5⌉-3], ..., A[n]} if i = ⌈n/5⌉,

and that B[i] is less than or equal to at least three of A[5i-4], A[5i-3], ..., A[5i], and is also greater than or equal to at least three of these values, whenever i < ⌈n/5⌉.

(a) Argue that the above array B can be constructed from A using at most cn comparisons, for some constant c that's independent of n, in the worst case.
In the second stage of the algorithm, the algorithm is recursively applied to find the median y of the array B. Note that y also belongs to the array A.

(b) Prove that there are at least (3/10)n - 5 integers i between 1 and n such that A[i] ≤ y, and that there are also at least (3/10)n - 5 integers j between 1 and n such that A[j] ≥ y.

In the third stage of the algorithm, y is compared to each of the elements stored in A, in order to compute the values

    n_L, which is the number of integers i such that 1 ≤ i ≤ n and A[i] < y,
    n_E, which is the number of integers j such that 1 ≤ j ≤ n and A[j] = y, and
    n_G = n - n_L - n_E, which is the number of integers l such that A[l] > y,

and to create two arrays, C and D, with lengths n_L and n_G respectively, such that

    if k ≤ n_L then the k-th smallest element in A is equal to the k-th smallest element in C,
    if n_L < k ≤ n_L + n_E then the k-th smallest element in A is equal to y, and
    if n_L + n_E < k ≤ n then the k-th smallest element in A is equal to the (k - n_L - n_E)-th smallest element in D.
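This third stage is an ordinary three-way partition of A around y. A hedged Python sketch of just this stage, assuming the approximate median y has already been found (`partition_around` is a name chosen here for illustration, not one used in the notes):

```python
def partition_around(A, y):
    """Three-way partition of the list A around the pivot value y.
    Returns (C, n_E, D): the elements less than y, the count of
    elements equal to y, and the elements greater than y."""
    C = [a for a in A if a < y]            # the n_L elements below y
    n_E = sum(1 for a in A if a == y)      # the elements equal to y
    D = [a for a in A if a > y]            # the n_G elements above y
    return C, n_E, D
```

Each element of A is compared with y a constant number of times here, which is exactly the linear comparison bound that part (c) below asks you to argue for.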

(c) Argue that this third stage of the algorithm can also be performed using a number of comparisons that is at most linear in n (in the worst case).

(d) Prove that n_L ≤ (7/10)n + 5 < n and n_G ≤ (7/10)n + 5 < n whenever n ≥ 20. This should be easy, if you've managed to answer all the previous parts of this question.

In the fourth and final stage of the algorithm, either an instance of the Selection problem including the array C or D is formed (without performing any more comparisons of array elements) and recursively solved to discover the k-th smallest element of A, or the value y is returned as this element (depending on how k, n_L, and n_L + n_E are related).

(e) Complete the above sketch in order to write (pseudocode for) a Divide and Conquer algorithm for the Selection problem that has all the properties mentioned above. Note that this will be a deterministic algorithm.

(f) Prove that the number of comparisons used by this algorithm to solve the Selection problem is in Θ(n) in the worst case.

8. Recall (probably, from CPSC 331) that Quicksort is a sorting algorithm that uses Θ(n^2) comparisons to sort an array of length n in the worst case, but that only uses O(n log_2 n) operations most of the time (or "in the average case"). Use the deterministic algorithm for Selection you designed to solve the previous problem, and modify the Quicksort algorithm, in order to produce a deterministic sorting algorithm that uses O(n log_2 n) comparisons in the worst case instead of just most of the time. Unfortunately, this new algorithm will probably only be of theoretical interest: it'll be likely that Heap Sort or Merge Sort (or both) are faster algorithms than the one you produce in this way.

1.10 Hints for Selected Exercises

Exercise #4(a): Remember to use induction on i, instead of on n.

Exercise #5: You'll need to form and solve recurrences in order to answer this question.
You'll be able to use the Master Theorem to solve some of the recurrences you obtain, but not all of them.

Exercise #6(a): There's a reason why the equations were restated using vectors and matrices!

Exercise #6(d): Forming and solving the recurrence you need here will be complicated by the fact that the integers used in recursively derived instances are just slightly larger than you'd need them to be in order for the resulting recurrence to be easy to solve using the Master Theorem. Under these circumstances, you should consider two approaches (and you should probably consider them in the order in which they're given here). You can examine the easy-to-solve recurrence you'd get if the integers were a little bit smaller, solve this recurrence using the Master Theorem, and then use the substitution method to prove that the recurrence you started with has the same solution. Alternatively, you could try to apply the techniques for simplifying recurrences that are given in Chapter 8 in Part I, in order to avoid using the substitution method at all.
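The identities in Exercise 6(a) can also be checked numerically before you try to verify them by hand. Here is a small Python sketch (not part of the notes) that uses exact rational arithmetic from the standard library's `fractions` module; the sample coefficients are arbitrary.

```python
from fractions import Fraction

# An arbitrary integer polynomial h(x) of degree at most four,
# listed as [h_0, h_1, h_2, h_3, h_4].
h = [7, -3, 0, 5, 2]

def eval_h(x):
    """Evaluate h at x."""
    return sum(c * x**i for i, c in enumerate(h))

# Evaluate h at the five interpolation points -2, -1, 0, 1, 2.
vm2, vm1, v0, v1, v2 = (eval_h(x) for x in (-2, -1, 0, 1, 2))

# The interpolation identities from Exercise 6(a).
h0 = v0
h1 = Fraction(vm2 - 8 * vm1 + 8 * v1 - v2, 12)
h2 = Fraction(-vm2 + 16 * vm1 - 30 * v0 + 16 * v1 - v2, 24)
h3 = Fraction(-vm2 + 2 * vm1 - 2 * v1 + v2, 12)
h4 = Fraction(vm2 - 4 * vm1 + 6 * v0 - 4 * v1 + v2, 24)

assert [h0, h1, h2, h3, h4] == h  # the coefficients are recovered exactly
```

Checking a handful of random coefficient vectors this way is not a proof, of course, but it will catch a transcription error in the 5 × 5 matrix immediately.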


More information

Lecture 2: Divide and conquer and Dynamic programming

Lecture 2: Divide and conquer and Dynamic programming Chapter 2 Lecture 2: Divide and conquer and Dynamic programming 2.1 Divide and Conquer Idea: - divide the problem into subproblems in linear time - solve subproblems recursively - combine the results in

More information

CS 470/570 Divide-and-Conquer. Format of Divide-and-Conquer algorithms: Master Recurrence Theorem (simpler version)

CS 470/570 Divide-and-Conquer. Format of Divide-and-Conquer algorithms: Master Recurrence Theorem (simpler version) CS 470/570 Divide-and-Conquer Format of Divide-and-Conquer algorithms: Divide: Split the array or list into smaller pieces Conquer: Solve the same problem recursively on smaller pieces Combine: Build the

More information

Data Structures and Algorithms Chapter 3

Data Structures and Algorithms Chapter 3 1 Data Structures and Algorithms Chapter 3 Werner Nutt 2 Acknowledgments The course follows the book Introduction to Algorithms, by Cormen, Leiserson, Rivest and Stein, MIT Press [CLRST]. Many examples

More information

Compute the Fourier transform on the first register to get x {0,1} n x 0.

Compute the Fourier transform on the first register to get x {0,1} n x 0. CS 94 Recursive Fourier Sampling, Simon s Algorithm /5/009 Spring 009 Lecture 3 1 Review Recall that we can write any classical circuit x f(x) as a reversible circuit R f. We can view R f as a unitary

More information

Speedy Maths. David McQuillan

Speedy Maths. David McQuillan Speedy Maths David McQuillan Basic Arithmetic What one needs to be able to do Addition and Subtraction Multiplication and Division Comparison For a number of order 2 n n ~ 100 is general multi precision

More information

Algebra. Here are a couple of warnings to my students who may be here to get a copy of what happened on a day that you missed.

Algebra. Here are a couple of warnings to my students who may be here to get a copy of what happened on a day that you missed. This document was written and copyrighted by Paul Dawkins. Use of this document and its online version is governed by the Terms and Conditions of Use located at. The online version of this document is

More information

CS361 Homework #3 Solutions

CS361 Homework #3 Solutions CS6 Homework # Solutions. Suppose I have a hash table with 5 locations. I would like to know how many items I can store in it before it becomes fairly likely that I have a collision, i.e., that two items

More information

Lecture 4. Quicksort

Lecture 4. Quicksort Lecture 4. Quicksort T. H. Cormen, C. E. Leiserson and R. L. Rivest Introduction to Algorithms, 3rd Edition, MIT Press, 2009 Sungkyunkwan University Hyunseung Choo choo@skku.edu Copyright 2000-2018 Networking

More information

Chapter 5. Divide and Conquer CLRS 4.3. Slides by Kevin Wayne. Copyright 2005 Pearson-Addison Wesley. All rights reserved.

Chapter 5. Divide and Conquer CLRS 4.3. Slides by Kevin Wayne. Copyright 2005 Pearson-Addison Wesley. All rights reserved. Chapter 5 Divide and Conquer CLRS 4.3 Slides by Kevin Wayne. Copyright 25 Pearson-Addison Wesley. All rights reserved. Divide-and-Conquer Divide-and-conquer. Break up problem into several parts. Solve

More information

A design paradigm. Divide and conquer: (When) does decomposing a problem into smaller parts help? 09/09/ EECS 3101

A design paradigm. Divide and conquer: (When) does decomposing a problem into smaller parts help? 09/09/ EECS 3101 A design paradigm Divide and conquer: (When) does decomposing a problem into smaller parts help? 09/09/17 112 Multiplying complex numbers (from Jeff Edmonds slides) INPUT: Two pairs of integers, (a,b),

More information

CPSC 413 Lecture Notes Part I

CPSC 413 Lecture Notes Part I CPSC 413 Lecture Notes Part I Department of Computer Science Fall, 1998 2 Contents I Introduction 7 1 Introduction to CPSC 413 9 1.1 Overview... 9 1.2 Two Motivating Problems... 9 1.3 About This Course...

More information

Reductions, Recursion and Divide and Conquer

Reductions, Recursion and Divide and Conquer Chapter 5 Reductions, Recursion and Divide and Conquer CS 473: Fundamental Algorithms, Fall 2011 September 13, 2011 5.1 Reductions and Recursion 5.1.0.1 Reduction Reducing problem A to problem B: (A) Algorithm

More information

CISC 4090: Theory of Computation Chapter 1 Regular Languages. Section 1.1: Finite Automata. What is a computer? Finite automata

CISC 4090: Theory of Computation Chapter 1 Regular Languages. Section 1.1: Finite Automata. What is a computer? Finite automata CISC 4090: Theory of Computation Chapter Regular Languages Xiaolan Zhang, adapted from slides by Prof. Werschulz Section.: Finite Automata Fordham University Department of Computer and Information Sciences

More information

Chapter 4 Divide-and-Conquer

Chapter 4 Divide-and-Conquer Chapter 4 Divide-and-Conquer 1 About this lecture (1) Recall the divide-and-conquer paradigm, which we used for merge sort: Divide the problem into a number of subproblems that are smaller instances of

More information

1 Closest Pair of Points on the Plane

1 Closest Pair of Points on the Plane CS 31: Algorithms (Spring 2019): Lecture 5 Date: 4th April, 2019 Topic: Divide and Conquer 3: Closest Pair of Points on a Plane Disclaimer: These notes have not gone through scrutiny and in all probability

More information

Fall 2017 November 10, Written Homework 5

Fall 2017 November 10, Written Homework 5 CS1800 Discrete Structures Profs. Aslam, Gold, & Pavlu Fall 2017 November 10, 2017 Assigned: Mon Nov 13 2017 Due: Wed Nov 29 2017 Instructions: Written Homework 5 The assignment has to be uploaded to blackboard

More information

Divide and Conquer Algorithms

Divide and Conquer Algorithms Divide and Conquer Algorithms Introduction There exist many problems that can be solved using a divide-and-conquer algorithm. A divide-andconquer algorithm A follows these general guidelines. Divide Algorithm

More information

Section 0.6: Factoring from Precalculus Prerequisites a.k.a. Chapter 0 by Carl Stitz, PhD, and Jeff Zeager, PhD, is available under a Creative

Section 0.6: Factoring from Precalculus Prerequisites a.k.a. Chapter 0 by Carl Stitz, PhD, and Jeff Zeager, PhD, is available under a Creative Section 0.6: Factoring from Precalculus Prerequisites a.k.a. Chapter 0 by Carl Stitz, PhD, and Jeff Zeager, PhD, is available under a Creative Commons Attribution-NonCommercial-ShareAlike.0 license. 201,

More information

Divide and Conquer CPE 349. Theresa Migler-VonDollen

Divide and Conquer CPE 349. Theresa Migler-VonDollen Divide and Conquer CPE 349 Theresa Migler-VonDollen Divide and Conquer Divide and Conquer is a strategy that solves a problem by: 1 Breaking the problem into subproblems that are themselves smaller instances

More information

V. Adamchik 1. Recurrences. Victor Adamchik Fall of 2005

V. Adamchik 1. Recurrences. Victor Adamchik Fall of 2005 V. Adamchi Recurrences Victor Adamchi Fall of 00 Plan Multiple roots. More on multiple roots. Inhomogeneous equations 3. Divide-and-conquer recurrences In the previous lecture we have showed that if the

More information

Lecture 12 : Recurrences DRAFT

Lecture 12 : Recurrences DRAFT CS/Math 240: Introduction to Discrete Mathematics 3/1/2011 Lecture 12 : Recurrences Instructor: Dieter van Melkebeek Scribe: Dalibor Zelený DRAFT Last few classes we talked about program correctness. We

More information

Topic Contents. Factoring Methods. Unit 3: Factoring Methods. Finding the square root of a number

Topic Contents. Factoring Methods. Unit 3: Factoring Methods. Finding the square root of a number Topic Contents Factoring Methods Unit 3 The smallest divisor of an integer The GCD of two numbers Generating prime numbers Computing prime factors of an integer Generating pseudo random numbers Raising

More information

Introduction to Algorithms 6.046J/18.401J/SMA5503

Introduction to Algorithms 6.046J/18.401J/SMA5503 Introduction to Algorithms 6.046J/8.40J/SMA5503 Lecture 3 Prof. Piotr Indyk The divide-and-conquer design paradigm. Divide the problem (instance) into subproblems. 2. Conquer the subproblems by solving

More information

Generating Function Notes , Fall 2005, Prof. Peter Shor

Generating Function Notes , Fall 2005, Prof. Peter Shor Counting Change Generating Function Notes 80, Fall 00, Prof Peter Shor In this lecture, I m going to talk about generating functions We ve already seen an example of generating functions Recall when we

More information

CPSC 320 Sample Final Examination December 2013

CPSC 320 Sample Final Examination December 2013 CPSC 320 Sample Final Examination December 2013 [10] 1. Answer each of the following questions with true or false. Give a short justification for each of your answers. [5] a. 6 n O(5 n ) lim n + This is

More information

Divide and Conquer Algorithms. CSE 101: Design and Analysis of Algorithms Lecture 14

Divide and Conquer Algorithms. CSE 101: Design and Analysis of Algorithms Lecture 14 Divide and Conquer Algorithms CSE 101: Design and Analysis of Algorithms Lecture 14 CSE 101: Design and analysis of algorithms Divide and conquer algorithms Reading: Sections 2.3 and 2.4 Homework 6 will

More information

Data Structures and Algorithms Chapter 2

Data Structures and Algorithms Chapter 2 1 Data Structures and Algorithms Chapter 2 Werner Nutt 2 Acknowledgments The course follows the book Introduction to Algorithms, by Cormen, Leiserson, Rivest and Stein, MIT Press [CLRST]. Many examples

More information

When we use asymptotic notation within an expression, the asymptotic notation is shorthand for an unspecified function satisfying the relation:

When we use asymptotic notation within an expression, the asymptotic notation is shorthand for an unspecified function satisfying the relation: CS 124 Section #1 Big-Oh, the Master Theorem, and MergeSort 1/29/2018 1 Big-Oh Notation 1.1 Definition Big-Oh notation is a way to describe the rate of growth of functions. In CS, we use it to describe

More information

Divide-and-conquer: Order Statistics. Curs: Fall 2017

Divide-and-conquer: Order Statistics. Curs: Fall 2017 Divide-and-conquer: Order Statistics Curs: Fall 2017 The divide-and-conquer strategy. 1. Break the problem into smaller subproblems, 2. recursively solve each problem, 3. appropriately combine their answers.

More information

Algorithms and Their Complexity

Algorithms and Their Complexity CSCE 222 Discrete Structures for Computing David Kebo Houngninou Algorithms and Their Complexity Chapter 3 Algorithm An algorithm is a finite sequence of steps that solves a problem. Computational complexity

More information

You separate binary numbers into columns in a similar fashion. 2 5 = 32

You separate binary numbers into columns in a similar fashion. 2 5 = 32 RSA Encryption 2 At the end of Part I of this article, we stated that RSA encryption works because it s impractical to factor n, which determines P 1 and P 2, which determines our private key, d, which

More information

Advanced Counting Techniques. Chapter 8

Advanced Counting Techniques. Chapter 8 Advanced Counting Techniques Chapter 8 Chapter Summary Applications of Recurrence Relations Solving Linear Recurrence Relations Homogeneous Recurrence Relations Nonhomogeneous Recurrence Relations Divide-and-Conquer

More information

Lecture 3. Big-O notation, more recurrences!!

Lecture 3. Big-O notation, more recurrences!! Lecture 3 Big-O notation, more recurrences!! Announcements! HW1 is posted! (Due Friday) See Piazza for a list of HW clarifications First recitation section was this morning, there s another tomorrow (same

More information

Divide and Conquer. CSE21 Winter 2017, Day 9 (B00), Day 6 (A00) January 30,

Divide and Conquer. CSE21 Winter 2017, Day 9 (B00), Day 6 (A00) January 30, Divide and Conquer CSE21 Winter 2017, Day 9 (B00), Day 6 (A00) January 30, 2017 http://vlsicad.ucsd.edu/courses/cse21-w17 Merging sorted lists: WHAT Given two sorted lists a 1 a 2 a 3 a k b 1 b 2 b 3 b

More information

Jim Lambers MAT 610 Summer Session Lecture 2 Notes

Jim Lambers MAT 610 Summer Session Lecture 2 Notes Jim Lambers MAT 610 Summer Session 2009-10 Lecture 2 Notes These notes correspond to Sections 2.2-2.4 in the text. Vector Norms Given vectors x and y of length one, which are simply scalars x and y, the

More information

Divide and conquer. Philip II of Macedon

Divide and conquer. Philip II of Macedon Divide and conquer Philip II of Macedon Divide and conquer 1) Divide your problem into subproblems 2) Solve the subproblems recursively, that is, run the same algorithm on the subproblems (when the subproblems

More information

Fundamental Algorithms

Fundamental Algorithms Chapter 2: Sorting, Winter 2018/19 1 Fundamental Algorithms Chapter 2: Sorting Jan Křetínský Winter 2018/19 Chapter 2: Sorting, Winter 2018/19 2 Part I Simple Sorts Chapter 2: Sorting, Winter 2018/19 3

More information

Lecture 1: Asymptotics, Recurrences, Elementary Sorting

Lecture 1: Asymptotics, Recurrences, Elementary Sorting Lecture 1: Asymptotics, Recurrences, Elementary Sorting Instructor: Outline 1 Introduction to Asymptotic Analysis Rate of growth of functions Comparing and bounding functions: O, Θ, Ω Specifying running

More information

Topic 17. Analysis of Algorithms

Topic 17. Analysis of Algorithms Topic 17 Analysis of Algorithms Analysis of Algorithms- Review Efficiency of an algorithm can be measured in terms of : Time complexity: a measure of the amount of time required to execute an algorithm

More information

Fundamental Algorithms

Fundamental Algorithms Fundamental Algorithms Chapter 2: Sorting Harald Räcke Winter 2015/16 Chapter 2: Sorting, Winter 2015/16 1 Part I Simple Sorts Chapter 2: Sorting, Winter 2015/16 2 The Sorting Problem Definition Sorting

More information

Algorithms Exam TIN093 /DIT602

Algorithms Exam TIN093 /DIT602 Algorithms Exam TIN093 /DIT602 Course: Algorithms Course code: TIN 093, TIN 092 (CTH), DIT 602 (GU) Date, time: 21st October 2017, 14:00 18:00 Building: SBM Responsible teacher: Peter Damaschke, Tel. 5405

More information

An analogy from Calculus: limits

An analogy from Calculus: limits COMP 250 Fall 2018 35 - big O Nov. 30, 2018 We have seen several algorithms in the course, and we have loosely characterized their runtimes in terms of the size n of the input. We say that the algorithm

More information

Review Of Topics. Review: Induction

Review Of Topics. Review: Induction Review Of Topics Asymptotic notation Solving recurrences Sorting algorithms Insertion sort Merge sort Heap sort Quick sort Counting sort Radix sort Medians/order statistics Randomized algorithm Worst-case

More information

Discrete Mathematics and Probability Theory Summer 2014 James Cook Note 5

Discrete Mathematics and Probability Theory Summer 2014 James Cook Note 5 CS 70 Discrete Mathematics and Probability Theory Summer 2014 James Cook Note 5 Modular Arithmetic In several settings, such as error-correcting codes and cryptography, we sometimes wish to work over a

More information

Introduction. An Introduction to Algorithms and Data Structures

Introduction. An Introduction to Algorithms and Data Structures Introduction An Introduction to Algorithms and Data Structures Overview Aims This course is an introduction to the design, analysis and wide variety of algorithms (a topic often called Algorithmics ).

More information

Computational Complexity

Computational Complexity Computational Complexity S. V. N. Vishwanathan, Pinar Yanardag January 8, 016 1 Computational Complexity: What, Why, and How? Intuitively an algorithm is a well defined computational procedure that takes

More information

Ch01. Analysis of Algorithms

Ch01. Analysis of Algorithms Ch01. Analysis of Algorithms Input Algorithm Output Acknowledgement: Parts of slides in this presentation come from the materials accompanying the textbook Algorithm Design and Applications, by M. T. Goodrich

More information

Kartsuba s Algorithm and Linear Time Selection

Kartsuba s Algorithm and Linear Time Selection CS 374: Algorithms & Models of Computation, Fall 2015 Kartsuba s Algorithm and Linear Time Selection Lecture 09 September 22, 2015 Chandra & Manoj (UIUC) CS374 1 Fall 2015 1 / 32 Part I Fast Multiplication

More information

Advanced Counting Techniques

Advanced Counting Techniques . All rights reserved. Authorized only for instructor use in the classroom. No reproduction or further distribution permitted without the prior written consent of McGraw-Hill Education. Advanced Counting

More information

Lecture 4: Constructing the Integers, Rationals and Reals

Lecture 4: Constructing the Integers, Rationals and Reals Math/CS 20: Intro. to Math Professor: Padraic Bartlett Lecture 4: Constructing the Integers, Rationals and Reals Week 5 UCSB 204 The Integers Normally, using the natural numbers, you can easily define

More information

1 Caveats of Parallel Algorithms

1 Caveats of Parallel Algorithms CME 323: Distriuted Algorithms and Optimization, Spring 2015 http://stanford.edu/ reza/dao. Instructor: Reza Zadeh, Matroid and Stanford. Lecture 1, 9/26/2015. Scried y Suhas Suresha, Pin Pin, Andreas

More information

CS1800: Strong Induction. Professor Kevin Gold

CS1800: Strong Induction. Professor Kevin Gold CS1800: Strong Induction Professor Kevin Gold Mini-Primer/Refresher on Unrelated Topic: Limits This is meant to be a problem about reasoning about quantifiers, with a little practice of other skills, too

More information

1 Substitution method

1 Substitution method Recurrence Relations we have discussed asymptotic analysis of algorithms and various properties associated with asymptotic notation. As many algorithms are recursive in nature, it is natural to analyze

More information

CDM. Recurrences and Fibonacci. 20-fibonacci 2017/12/15 23:16. Terminology 4. Recurrence Equations 3. Solution and Asymptotics 6.

CDM. Recurrences and Fibonacci. 20-fibonacci 2017/12/15 23:16. Terminology 4. Recurrence Equations 3. Solution and Asymptotics 6. CDM Recurrences and Fibonacci 1 Recurrence Equations Klaus Sutner Carnegie Mellon University Second Order 20-fibonacci 2017/12/15 23:16 The Fibonacci Monoid Recurrence Equations 3 Terminology 4 We can

More information

Induction and Recursion

Induction and Recursion . All rights reserved. Authorized only for instructor use in the classroom. No reproduction or further distribution permitted without the prior written consent of McGraw-Hill Education. Induction and Recursion

More information

Final Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2

Final Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2 Final Review Sheet The final will cover Sections Chapters 1,2,3 and 4, as well as sections 5.1-5.4, 6.1-6.2 and 7.1-7.3 from chapters 5,6 and 7. This is essentially all material covered this term. Watch

More information