Algorithms: Enough Mathematical Appetizers! Algorithm Examples (2/6/2018)

Enough Mathematical Appetizers! Algorithms

What is an algorithm? An algorithm is a finite set of precise instructions for performing a computation or for solving a problem. This is a rather vague definition. You will get to know a more precise and mathematically useful definition when you attend CS420 or CS620. But this one is good enough for now.

Properties of algorithms:
- Input from a specified set,
- Output from a specified set (the solution),
- Definiteness of every step in the computation,
- Correctness of output for every possible input,
- Finiteness of the number of calculation steps,
- Effectiveness of each calculation step, and
- Generality for a class of problems.

We will use a pseudocode to specify algorithms that is slightly reminiscent of Basic and Pascal.

Example: an algorithm that finds the maximum element in a finite sequence.

    procedure max(a_1, a_2, ..., a_n: integers)
    max := a_1
    for i := 2 to n
        if max < a_i then max := a_i
    {max is the largest element}

Another example: a linear search algorithm, that is, an algorithm that linearly searches a sequence for a particular element.

    procedure linear_search(x: integer; a_1, a_2, ..., a_n: integers)
    i := 1
    while (i ≤ n and x ≠ a_i)
        i := i + 1
    if i ≤ n then location := i
    else location := 0
    {location is the subscript of the term equal to x, or 0 if x is not found}

If the terms in a sequence are ordered, a binary search algorithm is more efficient than linear search. The binary search algorithm iteratively restricts the relevant search interval until it closes in on the position of the element to be located.
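The two pseudocode procedures above translate directly into Python; a minimal sketch (the function names are mine, and the 1-based locations of the slides are kept):

```python
def find_max(a):
    """Return the maximum element of a finite sequence a_1, ..., a_n."""
    maximum = a[0]            # max := a_1
    for x in a[1:]:           # for i := 2 to n
        if maximum < x:       # if max < a_i then max := a_i
            maximum = x
    return maximum

def linear_search(x, a):
    """Return the 1-based location of x in a, or 0 if x is not found."""
    i = 1
    n = len(a)
    while i <= n and x != a[i - 1]:
        i += 1
    return i if i <= n else 0
```

For example, `linear_search(5, [3, 1, 4, 1, 5])` returns 5 (the position, not the value), and searching for an absent element returns 0.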

(Slides 7-11 illustrate binary search step by step, halving the search interval until the element is found.)

The binary search algorithm in pseudocode:

    procedure binary_search(x: integer; a_1, a_2, ..., a_n: increasing integers)
    i := 1  {i is left endpoint of search interval}
    j := n  {j is right endpoint of search interval}
    while i < j
    begin
        m := ⌊(i + j)/2⌋
        if x > a_m then i := m + 1
        else j := m
    end
    if x = a_i then location := i
    else location := 0
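A minimal Python sketch of the same procedure (1-based indexing kept to mirror the pseudocode; the input sequence must be sorted):

```python
def binary_search(x, a):
    """Return the 1-based location of x in sorted sequence a, or 0 if absent."""
    i, j = 1, len(a)              # left and right endpoints of the search interval
    while i < j:
        m = (i + j) // 2          # m := floor((i + j)/2)
        if x > a[m - 1]:
            i = m + 1             # continue in the upper half
        else:
            j = m                 # continue in the lower half (a_m included)
    # one final comparison decides whether the remaining candidate is x
    return i if a and x == a[i - 1] else 0
```

Note that the loop never tests `x == a[m]` directly; like the pseudocode, it defers the equality test to a single comparison after the interval has shrunk to one element.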

Obviously, on sorted sequences, binary search is more efficient than linear search. How can we analyze the efficiency of algorithms in general?

We can measure the time (number of elementary computations) and space (number of memory cells) that an algorithm requires. These measures are called time complexity and space complexity, respectively.

What is the time complexity of the linear search algorithm? We will determine the worst-case number of comparisons as a function of the number n of terms in the sequence. The worst case for linear search occurs when the element to be located is not in the sequence: then every item in the sequence is compared to the element to be located.

Here is the linear search algorithm again:

    procedure linear_search(x: integer; a_1, a_2, ..., a_n: integers)
    i := 1
    while (i ≤ n and x ≠ a_i)
        i := i + 1
    if i ≤ n then location := i
    else location := 0

For n elements, the loop

    while (i ≤ n and x ≠ a_i)
        i := i + 1

is processed n times, requiring 2n comparisons. When the loop is entered for the (n+1)th time, only the comparison i ≤ n is executed, and it terminates the loop. Finally, the comparison

    if i ≤ n then location := i

is executed, so all in all we have a worst-case time complexity of 2n + 2 comparisons.

Reminder: the binary search algorithm (see the pseudocode above).

What is the time complexity of the binary search algorithm? Again, we will determine the worst-case number of comparisons as a function of the number n of terms in the sequence. Let us assume there are n = 2^k elements in the list, which means that k = log n. If n is not a power of 2, the list can be considered part of a larger list with 2^k < n < 2^(k+1) elements.
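The role of k can be sketched in code: repeatedly halving a search interval of n elements takes ceil(log2 n) steps, which is exactly k when n = 2^k (a minimal sketch; the helper name `levels` is mine):

```python
def levels(n):
    """Number of halvings needed to shrink a search interval of n
    elements down to a single element: ceil(log2 n), which equals
    k exactly when n = 2^k."""
    k = 0
    while n > 1:
        n = (n + 1) // 2   # worst case keeps the larger half: ceil(n/2) elements
        k += 1
    return k
```

For n = 16 this gives 4 halvings; for n = 10 (not a power of two) it also gives 4, matching the "part of a larger list" argument above.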

In the first cycle of the loop

    begin
        m := ⌊(i + j)/2⌋
        if x > a_m then i := m + 1
        else j := m
    end

the search interval is restricted to 2^(k-1) elements, using two comparisons. In the second cycle, the search interval is restricted to 2^(k-2) elements, again using two comparisons. This is repeated until there is only one (2^0) element left in the search interval. At this point, 2k comparisons have been conducted.

Then the comparison i < j exits the loop, and a final comparison

    if x = a_i then location := i

determines whether the element was found. Therefore, the overall worst-case time complexity of the binary search algorithm is 2k + 2 = 2 log n + 2.

In general, we are not so much interested in the time and space complexity for small inputs. For example, while the difference in time complexity between linear and binary search is meaningless for a sequence with n = 10, it is gigantic for n = 2^30.

For example, let us assume two algorithms A and B that solve the same class of problems. The time complexity of A is 5,000n, that of B is 1.1^n, for an input with n elements.

Comparison of the time complexities of algorithms A and B:

    Input size n    Algorithm A: 5,000n    Algorithm B: 1.1^n
    10              50,000                 3
    100             500,000                13,781
    1,000           5,000,000              2.5 × 10^41
    1,000,000       5 × 10^9               4.8 × 10^41392
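The table entries can be reproduced numerically (a sketch; the helper names are mine). Note that 1.1^n for n = 1,000,000 overflows a double, so only its order of magnitude is computed, via log10:

```python
import math

def steps_A(n):
    return 5_000 * n             # time complexity of algorithm A

def steps_B_log10(n):
    return n * math.log10(1.1)   # log10 of 1.1^n, i.e. its order of magnitude

assert steps_A(1_000) == 5_000_000
assert round(1.1 ** 100) == 13_781
assert int(steps_B_log10(1_000_000)) == 41_392   # 1.1^n ≈ 4.8 × 10^41392
```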

This means that algorithm B cannot be used for large inputs, while running algorithm A is still feasible. So what is important is the growth of the complexity functions. The growth of time and space complexity with increasing input size n is a suitable measure for the comparison of algorithms.

The growth of functions is usually described using the big-O notation. Definition: Let f and g be functions from the integers or the real numbers to the real numbers. We say that f(x) is O(g(x)) if there are constants C and k such that |f(x)| ≤ C|g(x)| whenever x > k.

When we analyze the growth of complexity functions, f(x) and g(x) are always positive. Therefore, we can simplify the big-O requirement to f(x) ≤ C·g(x) whenever x > k. If we want to show that f(x) is O(g(x)), we only need to find one pair (C, k), and such a pair is never unique.

The idea behind the big-O notation is to establish an upper bound for the growth of a function f(x) for large x. This bound is specified by a function g(x) that is usually much simpler than f(x). We accept the constant C in the requirement f(x) ≤ C·g(x) whenever x > k, because C does not grow with x. We are only interested in large x, so it is OK if f(x) > C·g(x) for x ≤ k.

Example: Show that f(x) = x^2 + 2x + 1 is O(x^2). For x > 1 we have:

    x^2 + 2x + 1 ≤ x^2 + 2x^2 + x^2 = 4x^2

Therefore, for C = 4 and k = 1: f(x) ≤ Cx^2 whenever x > k, so f(x) is O(x^2).

Question: If f(x) is O(x^2), is it also O(x^3)? Yes. x^3 grows faster than x^2, so x^3 also grows faster than f(x). Therefore, we always try to find the smallest simple function g(x) for which f(x) is O(g(x)).
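The worked example can be spot-checked numerically, including the claim that the witness pair (C, k) is never unique (a sketch):

```python
# With the witnesses C = 4 and k = 1 from the example,
# f(x) = x^2 + 2x + 1 stays below C * x^2 for every tested x > k.

def f(x):
    return x**2 + 2*x + 1

C, k = 4, 1
assert all(f(x) <= C * x**2 for x in range(k + 1, 10_001))

# The pair (C, k) is not unique: C = 3, k = 3 also works, since
# x^2 + 2x + 1 <= 3x^2 whenever 2x + 1 <= 2x^2, i.e. for x >= 2.
assert all(f(x) <= 3 * x**2 for x in range(4, 10_001))
```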

Popular choices for g(n) are, in no particular order: n log n, 1, 2^n, n^2, n!, n, n^3, log n.

Listed from slowest to fastest growth:

    1, log n, n, n log n, n^2, n^3, 2^n, n!

A problem that can be solved with polynomial worst-case complexity is called tractable. Problems of higher complexity are called intractable. Problems that no algorithm can solve are called unsolvable. You will find out more about this in CS420.

Useful rules for big-O:
- For any polynomial f(x) = a_n x^n + a_(n-1) x^(n-1) + ... + a_0, where a_0, a_1, ..., a_n are real numbers, f(x) is O(x^n).
- If f_1(x) is O(g(x)) and f_2(x) is O(g(x)), then (f_1 + f_2)(x) is O(g(x)).
- If f_1(x) is O(g_1(x)) and f_2(x) is O(g_2(x)), then (f_1 + f_2)(x) is O(max(g_1(x), g_2(x))).
- If f_1(x) is O(g_1(x)) and f_2(x) is O(g_2(x)), then (f_1 f_2)(x) is O(g_1(x) g_2(x)).
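The growth ordering can be spot-checked at a moderate input size; by n = 20 the eight functions have already separated into the listed order:

```python
import math

# Evaluate 1, log n, n, n log n, n^2, n^3, 2^n, n! at n = 20 and
# verify they come out in strictly increasing order.
n = 20
values = [
    1,
    math.log2(n),
    n,
    n * math.log2(n),
    n**2,
    n**3,
    2**n,
    math.factorial(n),
]
assert values == sorted(values)
```

(At very small n the ordering can differ, e.g. log n < 1 for n < 2; the asymptotic ordering is what matters.)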