Discrete Mathematics CS 2610 October 17, 2006
Uncountable Sets
Theorem: The set of real numbers is uncountable.
If a subset of a set is uncountable, then the set itself is uncountable: the cardinality of a set is at least as large as the cardinality of any of its subsets. So it is enough to prove that there is a subset of R that is uncountable.
Theorem: The interval of real numbers [0,1) = {r ∈ R | 0 ≤ r < 1} is uncountable.
Proof by contradiction, using the Cantor diagonalization argument (Cantor, 1879).
Uncountable Sets: R
Proof (BWOC) using diagonalization: Suppose R is countable (then any subset, say [0,1), is also countable). So we can list its elements r₁, r₂, r₃, …, where
 r₁ = 0.d₁₁d₁₂d₁₃d₁₄…   (the dᵢⱼ are digits 0-9)
 r₂ = 0.d₂₁d₂₂d₂₃d₂₄…
 r₃ = 0.d₃₁d₃₂d₃₃d₃₄…
 r₄ = 0.d₄₁d₄₂d₄₃d₄₄…
 etc.
Now let r = 0.d₁d₂d₃d₄…, where
 dᵢ = 4 if dᵢᵢ ≠ 4
 dᵢ = 5 if dᵢᵢ = 4
Then r differs from rᵢ in the i-th position, for all i. So r is not equal to any of the items in the list: it's missing from the list, and we can't list them after all. So our assumption that we could list them all is incorrect.
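The diagonal construction can be sketched in Python on a finite list of decimal expansions (a finite illustration only; the actual argument applies to any purported infinite list):

```python
def diagonal(reals):
    """Given decimal-digit strings (fractional parts of the r_i), build a
    number that differs from r_i in the i-th digit, as in Cantor's argument."""
    digits = []
    for i, r in enumerate(reals):
        d_ii = int(r[i])                          # i-th digit of the i-th number
        digits.append('5' if d_ii == 4 else '4')  # guaranteed to differ from d_ii
    return '0.' + ''.join(digits)

# Any finite list is "escaped" by the diagonal number:
sample = ['1415926', '7182818', '4142135', '0000000', '9999999', '5000000', '3333333']
r = diagonal(sample)
print(r)  # differs from sample[i] in position i, for every i
```

However the list is chosen, the constructed r cannot appear anywhere in it.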
Algorithms
An algorithm is a finite set of precise instructions for performing a computation or for solving a problem.
Properties of an algorithm:
- input: input values are from a specified set
- output: output values are from a specified set
- definiteness: each step is precisely defined
- correctness: the correct output is produced for each input
- finiteness: takes a finite number of steps
- effectiveness: each step can be performed exactly, in finite time
- generality: applicable to inputs of all sizes
Analysis of Algorithms
Analyzing an algorithm:
- Time complexity
- Space complexity
Time complexity: the running time needed by an algorithm, as a function of the size of the input; denoted T(n).
We are interested in measuring how fast the time complexity increases as the input size grows: the asymptotic time complexity of the algorithm.
Pseudocode
 procedure procname(argument: type)
 variable := expression
 informal statement
 begin statements end
 {comment}
 if condition then statement1 [else statement2]
 for variable := initial value to final value statement
 while condition statement
 return expression
 procname(arg1, …, argn)
Algorithm Complexity
Worst-case analysis: the largest number of operations needed to solve a problem of a specified size; analyze the worst input for each input size. Gives an upper bound on the running time for any input. Most widely used.
Average-case analysis: the average number of operations over all inputs of a given size. Sometimes it's too complicated to carry out.
Example: Max Algorithm
 procedure max(a₁, a₂, …, aₙ: integers)
 v := a₁                    {executed 1 time}
 for i := 2 to n            {condition tested n times}
   if aᵢ > v then v := aᵢ   {comparison n−1 times; assignment 0 to n−1 times}
 return v                   {executed 1 time}
How many times is each step executed? Worst case: the input sequence is strictly increasing, so the assignment v := aᵢ runs on every iteration.
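A minimal Python version of the max procedure, with the slide's step counts as comments:

```python
def find_max(a):
    """Max algorithm from the slide: v starts at the first element;
    the loop compares each remaining element against v."""
    v = a[0]             # executed once
    for x in a[1:]:      # body runs n-1 times
        if x > v:        # worst case (strictly increasing input):
            v = x        #   this assignment runs on every iteration
    return v

print(find_max([3, 1, 4, 1, 5, 9, 2, 6]))  # 9
```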
Searching Algorithms
Problem: Find an element x in a list a₁, …, aₙ (not necessarily ordered).
Linear search strategy: examine the sequence one element after another until the current element being examined is x or all the elements have been examined.
Example: Linear Search
 procedure linear search(x: integer, a₁, a₂, …, aₙ: distinct integers)
 i := 1
 while (i ≤ n ∧ x ≠ aᵢ)
   i := i + 1
 if i ≤ n then location := i else location := 0
 return location
Worst case: x is the last element in the sequence (or not present at all). Best case: x is the first element.
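The pseudocode above translates directly to Python (1-based location, 0 meaning "not found", matching the slide's convention):

```python
def linear_search(x, a):
    """Linear search from the slide: scan until x is found or the
    list is exhausted; return a 1-based position, or 0 if absent."""
    i = 1
    while i <= len(a) and x != a[i - 1]:
        i += 1
    return i if i <= len(a) else 0

print(linear_search(7, [4, 7, 1]))   # 2
print(linear_search(9, [4, 7, 1]))   # 0
```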
Example: Linear Search
Average case (x is in the list, all positions equally likely):
- x is the first element: 1 loop comparison
- x is the second element: 2 loop comparisons, 1 iteration of the loop
- x is the third element: 3 loop comparisons, 2 iterations of the loop
- …
- x is the n-th element: n loop comparisons, n−1 iterations of the loop
On average: (1 + 2 + … + n)/n = (n+1)/2 loop comparisons.
Binary Search
Problem: Locate an element x in a sequence of elements sorted in non-decreasing order.
Strategy: at each step, look at the middle element of the remaining list to eliminate half of it, and quickly zero in on the desired element.
Binary Search
 procedure binary search(x: integer, a₁, a₂, …, aₙ: integers)
 {a₁, a₂, …, aₙ are distinct integers sorted smallest to largest}
 i := 1  {start of search range}
 j := n  {end of search range}
 while i < j
 begin
   m := ⌊(i + j)/2⌋
   if x > aₘ then i := m + 1 else j := m
 end
 if x = aᵢ then location := i else location := 0
 return location
Suppose n = 2ᵏ.
Binary Search
The loop is executed k times, where k = log₂ n.
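The slide's pseudocode in Python, keeping the 1-based location convention:

```python
def binary_search(x, a):
    """Binary search following the slide's pseudocode; a must be sorted
    in non-decreasing order. Returns 1-based position, 0 if absent."""
    i, j = 1, len(a)
    while i < j:
        m = (i + j) // 2          # middle of the remaining range
        if x > a[m - 1]:
            i = m + 1             # x can only be in the right half
        else:
            j = m                 # x, if present, is in the left half
    return i if a and x == a[i - 1] else 0

print(binary_search(19, [1, 5, 8, 12, 19, 22, 31, 37]))  # 5
```

Each pass halves the range [i, j], so the loop runs about log₂ n times.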
Linear Search vs. Binary Search
- Linear search time complexity: T(n) is O(n)
- Binary search time complexity: T(n) is O(log₂ n)
Sorting Algorithms
Problem: Given a sequence of numbers, sort the sequence into weakly increasing order.
Input: a sequence of n numbers a₁, a₂, …, aₙ
Output: a reordering (a′₁, a′₂, …, a′ₙ) of the input sequence such that a′₁ ≤ a′₂ ≤ … ≤ a′ₙ
Bubble Sort
Smallest elements "float" up to the top (front) of the list, like bubbles in a container of liquid; largest elements sink to the bottom (end).
See the animation at: http://math.hws.edu/tmcm/java/xsortlab
Example: Bubble Sort
 procedure bubblesort(a₁, a₂, …, aₙ: distinct integers)
 for i := 1 to n−1
   for j := 1 to n−i
     if aⱼ > aⱼ₊₁ then swap aⱼ and aⱼ₊₁
Worst case: the sequence is sorted in decreasing order.
At pass i, the condition of the inner loop is tested n − i + 1 times and the body of the loop is executed n − i times.
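The same algorithm in Python; after pass i, the i largest elements have "sunk" to the end of the list:

```python
def bubble_sort(a):
    """Bubble sort from the slide; sorts the list in place and returns it."""
    n = len(a)
    for i in range(1, n):             # passes i = 1 .. n-1
        for j in range(n - i):        # inner body runs n-i times
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]   # swap adjacent pair
    return a

print(bubble_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```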
Algorithm: Insertion Sort
For each element: the elements to its left are already sorted; swap the element with the element on its left until it is in the correct place.
See the animation at: http://math.hws.edu/tmcm/java/xsortlab/
Algorithm: Insertion Sort
 procedure insertsort(a₁, a₂, …, aₙ: distinct integers)
 for j := 2 to n
 begin
   i := j − 1
   while i > 0 and aᵢ > aᵢ₊₁
     swap aᵢ and aᵢ₊₁
     i := i − 1
 end
Worst case: the sequence is in decreasing order.
At step j, the while-loop condition is tested j times and the body of the loop is executed j − 1 times.
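A Python rendering of the slide's swap-based insertion sort:

```python
def insertion_sort(a):
    """Insertion sort from the slide: element j is swapped leftward
    until it sits in place among the already-sorted prefix."""
    for j in range(1, len(a)):        # j = 2..n in the pseudocode (0-based here)
        i = j - 1
        while i >= 0 and a[i] > a[i + 1]:
            a[i], a[i + 1] = a[i + 1], a[i]
            i -= 1
    return a

print(insertion_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```

On a decreasing input (the worst case), element j swaps all the way to the front, giving the j − 1 body executions counted above.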
Discrete Mathematics CS 2610 October 19, 2006
Greedy Algorithms
Problem: assign meetings to conference rooms.
Policy: in decreasing order of room capacity, assign each meeting to the next largest available room.
 Meeting 1: 70  → Room A (200)
 Meeting 2: 46  → Room B (150)
 Meeting 3: 125 → Room C (150)
 Meeting 4: 110 → X (no remaining room fits)
 Meeting 5: 30  → Room D (100)
 Meeting 6: 87  → X (no remaining room fits)
Can you think of a better solution?
Greedy Algorithms
Better policy: in ascending order of room capacity, assign each meeting to the smallest available room that can hold it.
 Meeting 1: 70  → Room E (75)
 Meeting 2: 46  → Room F (50)
 Meeting 3: 125 → Room C (150)
 Meeting 4: 110 → Room B (150)
 Meeting 5: 30  → Room D (100)
 Meeting 6: 87  → Room A (200)
Now every meeting gets a room.
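The best-fit policy can be sketched in Python with the capacities from the slides. (Rooms B and C both hold 150, so which one a 150-capacity meeting lands in is an arbitrary tie-break; this sketch breaks ties by room name.)

```python
def assign_rooms(meetings, rooms):
    """Best-fit greedy: each meeting, in the given order, goes into the
    smallest still-free room that can hold it."""
    free = dict(rooms)                    # room name -> capacity
    assignment = {}
    for name, size in meetings:
        fits = [(cap, r) for r, cap in free.items() if cap >= size]
        if fits:                          # smallest adequate room (ties by name)
            cap, room = min(fits)
            assignment[name] = room
            del free[room]
    return assignment

meetings = [('M1', 70), ('M2', 46), ('M3', 125), ('M4', 110), ('M5', 30), ('M6', 87)]
rooms = [('A', 200), ('B', 150), ('C', 150), ('D', 100), ('E', 75), ('F', 50)]
print(assign_rooms(meetings, rooms))  # all six meetings get a room
```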
Order of Growth Terminology (best to worst)
 O(1)        constant
 O(log n)    logarithmic
 O(logᶜ n)   polylogarithmic (c ∈ Z⁺)
 O(n)        linear
 O(nᶜ)       polynomial (c ∈ Z⁺)
 O(cⁿ)       exponential (c ∈ Z⁺, c > 1)
 O(n!)       factorial
Complexity of Problems
Tractable: a problem that can be solved by a deterministic algorithm with polynomial (or better) worst-case time complexity. This class of problems is denoted P.
Examples: the search problem, the sorting problem, finding the maximum.
Complexity of Problems
Intractable: problems that are not tractable.
Example: the traveling salesperson problem.
Greedy algorithms are widely used to get approximate solutions; for example, under certain circumstances you can get an approximation that is at most double the optimal solution.
P vs. NP
NP: solvable problems whose solutions can be checked in polynomial time.
P ⊆ NP. The most famous unproven conjecture in computer science is that this inclusion is proper: P ≠ NP rather than P = NP.
Complexity of Problems
Not solvable: proven to have no algorithm that computes it.
Example: the halting problem (Alan Turing). Determine whether an arbitrary given algorithm will eventually halt for a given finite input.
Corollary: the question of whether or not a program halts for a given input is unsolvable.
Big-O Notation
Big-O notation is used to express the time complexity of an algorithm.
We assume that any basic operation requires the same amount of time.
The time complexity of an algorithm can then be described independently of the software and hardware used to implement it.
Big-O Notation
Def.: Let f, g be functions with domain R≥0 or N and codomain R. f(x) is O(g(x)) if there are constants C and k such that
 |f(x)| ≤ C|g(x)| for all x > k
f(x) is asymptotically dominated by g(x): C·g(x) is an upper bound for f(x). C and k are called witnesses to the relationship between f and g.
[Figure: C·g(x) lies above f(x) for all x beyond k.]
Big-O Notation
To prove that a function f(x) is O(g(x)), find values for k and C. They need not be the smallest such values (larger values also work); it is sufficient to find any k and C that work.
In many cases f(x) ≥ 0 for all x ≥ 0, so |f(x)| = f(x).
Example: f(x) = x² + 2x + 1 is O(x²) with C = 4 and k = 1.
Big-O Notation
Show that f(x) = x² + 2x + 1 is O(x²).
When x > 1 we know that x ≤ x² and 1 ≤ x², so
 0 ≤ x² + 2x + 1 ≤ x² + 2x² + x² = 4x²
So let C = 4 and k = 1 be the witnesses, i.e., f(x) = x² + 2x + 1 ≤ 4x² when x > 1.
We could instead try x > 2: then 2x ≤ x² and 1 ≤ x², so
 0 ≤ x² + 2x + 1 ≤ x² + x² + x² = 3x²
so C = 3 and k = 2 are also witnesses to f(x) being O(x²).
Note that f(x) is also O(x³), etc.
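A quick numeric sanity check of both witness pairs from the slide (sampling a finite range only; the inequality itself holds for all x beyond k):

```python
# Check the witnesses C=4, k=1 and C=3, k=2 for f(x) = x^2 + 2x + 1 on a
# sampled range of integers. This illustrates, not proves, the O(x^2) bound.
f = lambda x: x**2 + 2*x + 1
assert all(f(x) <= 4 * x**2 for x in range(2, 10_000))   # C = 4, k = 1
assert all(f(x) <= 3 * x**2 for x in range(3, 10_000))   # C = 3, k = 2
print("both witness pairs hold on the sampled range")
```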
Big-O Notation
Show that f(x) = 7x² is O(x³).
When x > 7 we know that 7x² < x³ (multiply x > 7 by x²), so let C = 1 and k = 7 be the witnesses.
We could instead try x > 1: then 7x² < 7x³, so C = 7 and k = 1 are also witnesses to f(x) being O(x³).
Note that f(x) is also O(x⁴), etc.
Big-O Notation
Show that f(n) = n² is not O(n), i.e., that no pair C, k exists such that n² ≤ Cn whenever n > k.
When n > 0, divide both sides of n² ≤ Cn by n to get n ≤ C. No matter what C and k are, n ≤ C cannot hold for all n with n > k.
Big-O Notation
Observe that g(x) = x² is O(x² + 2x + 1).
Def.: Two functions f(x) and g(x) have the same order iff g(x) is O(f(x)) and f(x) is O(g(x)).
Big-O Notation
Also, the function f(x) = 3x² + 2x + 3 is O(x³). What about O(x⁴)?
In fact, C·g(x) is an upper bound for f(x), but not necessarily the tightest bound. When big-O notation is used, g(x) is chosen to be as small as possible.
Big-O Theorem
Theorem: If f(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + … + a₁x + a₀ where aᵢ ∈ R, i = 0, …, n, then f(x) is O(xⁿ). The leading term dominates!
Proof: if x > 1 we have
 |f(x)| = |aₙxⁿ + aₙ₋₁xⁿ⁻¹ + … + a₁x + a₀|
  ≤ |aₙ|xⁿ + |aₙ₋₁|xⁿ⁻¹ + … + |a₁|x + |a₀|
  = xⁿ(|aₙ| + |aₙ₋₁|/x + … + |a₁|/xⁿ⁻¹ + |a₀|/xⁿ)
  ≤ xⁿ(|aₙ| + |aₙ₋₁| + … + |a₁| + |a₀|)
So |f(x)| ≤ Cxⁿ where C = |aₙ| + |aₙ₋₁| + … + |a₁| + |a₀| whenever x > 1. (What's k? k = 1. Why?)
What's this step? The triangle inequality: |a + b| ≤ |a| + |b|.
Big-O
Example: Prove that f(n) = n! is O(nⁿ).
Proof (easy): n! = 1·2·3·4·5·…·n ≤ n·n·n·n·n·…·n = nⁿ, where our witnesses are C = 1 and k = 1.
Example: Prove that log(n!) is O(n log n).
Using the above, take the log of both sides: log(n!) ≤ log(nⁿ) = n log n.
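Both inequalities can be spot-checked numerically (an illustration on small n, not a proof of the asymptotic claim):

```python
import math

# n! <= n^n, and hence log(n!) <= n*log(n), for the sampled values of n.
for n in range(1, 20):
    assert math.factorial(n) <= n ** n
    assert math.log(math.factorial(n)) <= n * math.log(n)
print("n! <= n^n and log(n!) <= n log n hold for n = 1..19")
```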
Big-O
Lemma: A constant function is O(1). Proof: left to the viewer.
The most common functions used to estimate the time complexity of an algorithm, in increasing O() order:
 1, log n, n, n log n, n², n³, 2ⁿ, n!
Big-O Properties
Transitivity: if f is O(g) and g is O(h), then f is O(h).
Sum rule:
- If f₁ is O(g₁) and f₂ is O(g₂), then f₁ + f₂ is O(max(|g₁|, |g₂|)).
- If f₁ is O(g) and f₂ is O(g), then f₁ + f₂ is O(g).
Product rule: if f₁ is O(g₁) and f₂ is O(g₂), then f₁f₂ is O(g₁g₂).
For all c > 0: O(cf), O(f + c), and O(f·c) are O(f).
Big-O Properties: Example
Give a big-O estimate for 3n·log(n!) + (n² + 3)·log n, for n > 0.
1) For 3n·log(n!): we know log(n!) is O(n log n) and 3n is O(n), so 3n·log(n!) is O(n² log n).
2) For (n² + 3)·log n: we have n² + 3 < 2n² when n > 2, so n² + 3 is O(n²), and (n² + 3)·log n is O(n² log n).
3) Finally, we have an estimate for 3n·log(n!) + (n² + 3)·log n: it is O(n² log n).
Big-O Notation
Def.: Functions f and g are incomparable if f(x) is not O(g) and g is not O(f).
Example: f: R⁺ → R, f(x) = 5x^1.5 and g: R⁺ → R, g(x) = x² sin x.
[Figure: plot of 5x^1.5, x² sin x, and x² for 0 ≤ x ≤ 50; x² sin x oscillates, repeatedly crossing above and below 5x^1.5.]
Big-Omega Notation
Def.: Let f, g be functions with domain R≥0 or N and codomain R. f(x) is Ω(g(x)) if there are positive constants C and k such that
 |f(x)| ≥ C|g(x)| for all x > k
C·g(x) is a lower bound for f(x).
[Figure: f(x) lies above C·g(x) for all x beyond k.]
Big-Omega Property
Theorem: f(x) is Ω(g(x)) iff g(x) is O(f(x)). Is that trivial or what?
Big-Omega Property
Example: prove that f(x) = 3x² + 2x + 3 is Ω(g(x)) where g(x) = x².
Proof: first note that 3x² + 2x + 3 ≥ 3x² for all x ≥ 0 (witnesses C = 3, k = 0). That's the same as saying that g(x) = x² is O(3x² + 2x + 3).
Big-Theta Notation
Def.: Let f, g be functions with domain R≥0 or N and codomain R. f(x) is Θ(g(x)) if f(x) is O(g(x)) and f(x) is Ω(g(x)).
[Figure: f(x) is sandwiched between C₁·g(x) and C₂·g(x).]
Big-Theta Notation
When f(x) is Θ(g(x)), we know that g(x) is Θ(f(x)).
Also, f(x) is Θ(g(x)) iff f(x) is O(g(x)) and g(x) is O(f(x)).
Typical g functions: xⁿ, cˣ, log x, etc.
Big-Theta Notation
To prove that f(x) is of order g(x):
Method 1: prove that f is O(g(x)), then prove that f is Ω(g(x)).
Method 2: prove that f is O(g(x)), then prove that g is O(f(x)).
Big-Theta Example
Show that 3x² + 8x log x is Θ(x²) (i.e., of order x²).
0 ≤ 8x log x ≤ 8x², so 3x² + 8x log x ≤ 11x² for x > 1. So 3x² + 8x log x is O(x²). (Can I get a witness?)
Is x² O(3x² + 8x log x)? You betcha! Why?
Therefore 3x² + 8x log x is Θ(x²).
Big Summary
- Upper bound: use big-O.
- Lower bound: use big-Omega.
- Upper and lower bound (order of growth): use big-Theta.
Time to Shift Gears Again
Number Theory: Livin' Large
Number Theory
Elementary number theory is concerned with numbers, usually integers or rational numbers, and their properties: mainly divisibility among integers and modular arithmetic.
Some applications:
- Cryptography
- E-commerce payment systems
- Random number generation
- Coding theory
- Hash functions (as opposed to "stew functions")
Number Theory: Division
Let a, b, and c be integers with a ≠ 0. We say that a divides b, written a | b, if there is an integer c where b = ac.
When b = ac, both a and c are said to divide b (they are factors of b), and b is a multiple of both a and c.
Example: 5 | 30 and 5 | 55, but 5 ∤ 27.
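The divisibility test a | b amounts to checking that the remainder is zero, e.g. in Python:

```python
def divides(a, b):
    """a | b : there is an integer c with b = a*c (requires a != 0)."""
    return b % a == 0

print(divides(5, 30), divides(5, 55), divides(5, 27))  # True True False
```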
Number Theory: Division
Theorem 3.4.1: for all a, b, c ∈ Z:
1. a | 0
2. (a | b ∧ a | c) → a | (b + c)
3. a | b → a | bc for all integers c
4. (a | b ∧ b | c) → a | c
Proof of (2): a | b means b = ap, and a | c means c = aq. Then b + c = ap + aq = a(p + q). Therefore a | (b + c), with b + c = ar where r = p + q.
Proof of (4): a | b means b = ap, and b | c means c = bq. Then c = bq = apq. Therefore a | c, with c = ar where r = pq.
Division
Remember long division? Dividing 109 by 30 gives quotient 3 and remainder 19:
 109 = 30 · 3 + 19
 a = dq + r  (dividend = divisor · quotient + remainder)
The Division Algorithm
Division Algorithm Theorem: Let a be an integer and d a positive integer. There are unique integers q and r with r ∈ {0, 1, 2, …, d−1} (i.e., 0 ≤ r < d) satisfying
 a = dq + r
d is the divisor, q is the quotient (q = a div d), and r is the remainder (r = a mod d).
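Python's built-in divmod computes exactly this (q, r) pair for a positive divisor:

```python
# The division algorithm via divmod: q = a div d, r = a mod d, 0 <= r < d.
a, d = 109, 30
q, r = divmod(a, d)
print(q, r)                        # 3 19
assert a == d * q + r and 0 <= r < d
```

For positive d, Python's % always returns a remainder in {0, …, d−1}, even when a is negative, matching the theorem's convention.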
Mod Operation
Let a, b ∈ Z with b > 1, and write a = qb + r where 0 ≤ r < b.
Then a mod b denotes the remainder r from the division algorithm with dividend a and divisor b, so 0 ≤ a mod b ≤ b − 1.
109 mod 30 = ?
Modular Arithmetic
Let a, b ∈ Z, m ∈ Z⁺. Then a is congruent to b modulo m iff m | (a − b).
Notation:
 a ≡ b (mod m) reads "a is congruent to b modulo m"
 a ≢ b (mod m) reads "a is not congruent to b modulo m"
Examples: 5 ≡ 25 (mod 10) and 5 ≡ 25 (mod 2), since 10 | 20 and 2 | 20.
Modular Arithmetic
Theorem 3.4.3: Let a, b ∈ Z, m ∈ Z⁺. Then a ≡ b (mod m) iff a mod m = b mod m.
Proof (1): given a mod m = b mod m, we have a = ms + r (so r = a − ms) and b = mp + r (so r = b − mp). Then a − ms = b − mp, which means a − b = ms − mp = m(s − p). So m | (a − b), which means a ≡ b (mod m).
Proof (2): given a ≡ b (mod m), we have m | (a − b). Let a = mq_a + r_a and b = mq_b + r_b. So m | ((mq_a + r_a) − (mq_b + r_b)), i.e., m | (m(q_a − q_b) + (r_a − r_b)), hence m | (r_a − r_b). Recall 0 ≤ r_a < m and 0 ≤ r_b < m, so −m < r_a − r_b < m, and therefore r_a − r_b must be 0. That is, the two remainders are the same, which is the same as saying a mod m = b mod m.
Modular Arithmetic
Theorem 3.4.4: Let a, b ∈ Z, m ∈ Z⁺. Then a ≡ b (mod m) iff there exists a k ∈ Z such that a = b + km.
Proof: a = b + km means a − b = km, which means m | (a − b), which is the same as saying a ≡ b (mod m). (To complete the proof, reverse the steps.)
Examples:
 27 ≡ 12 (mod 5): 27 = 12 + 5k with k = 3
 105 ≡ −45 (mod 10): 105 = −45 + 10k with k = 15
Modular Arithmetic
Theorem 3.4.5: Let a, b, c, d ∈ Z, m ∈ Z⁺. If a ≡ b (mod m) and c ≡ d (mod m), then:
1. a + c ≡ b + d (mod m)
2. a − c ≡ b − d (mod m)
3. ac ≡ bd (mod m)
Proof of (1): a = b + k₁m and c = d + k₂m, so a + c = b + d + k₁m + k₂m = b + d + m(k₁ + k₂), which means a + c ≡ b + d (mod m). The others are similar.
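Theorem 3.4.5 can be checked on sample values (the specific numbers here are illustrative, not from the slides):

```python
# Spot-check of Theorem 3.4.5: congruences are preserved by +, -, and *.
m = 7
a, b = 23, 2      # 23 ≡ 2 (mod 7)
c, d = 19, 5      # 19 ≡ 5 (mod 7)
assert a % m == b % m and c % m == d % m         # the hypotheses
assert (a + c) % m == (b + d) % m                # sums congruent
assert (a - c) % m == (b - d) % m                # differences congruent
assert (a * c) % m == (b * d) % m                # products congruent
print("sum, difference, and product congruences all hold")
```

Note that Python's % with a positive modulus always yields a result in {0, …, m−1}, so `x % m == y % m` is exactly the test `x ≡ y (mod m)`.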