CS 4104 Data and Algorithm Analysis. CS4104: What You Need to Already Know. CS4104: What We Will Do. Introduction to Problem Solving (1)


Data and Algorithm Analysis
Clifford A. Shaffer
Department of Computer Science, Virginia Tech, Blacksburg, Virginia
Spring 2017
Copyright (c) 2010, 2017 by Clifford A. Shaffer

CS4104: What You Need to Already Know
Most of what was covered in CS3114:
- Basic data structures: lists, search trees, heaps, graphs
- Algorithms for searching and sorting (basic sort algorithms; search techniques such as binary search and hashing)
- Discrete math: proof by contradiction and induction, summations, set theory, relations
- The basics of asymptotic analysis: big-oh, big-Ω, Θ

CS4104: What We Will Do
- Finally understand upper and lower bounds
- Lower bounds proofs
- Analysis techniques (no hand waving!)
- Recurrence relations
- Reductions, NP-completeness theory, and a little computability theory
Process: weekly homework sets (they are hard!); work in pairs; show up! (attendance policy).
Note: The first homework has been posted. You should get your partner decided ASAP and get started.

Introduction to Problem Solving (1)
Principle of Intimate Engagement: this is the most important consideration. Actively engage the problem; get involved. You need to build up mental muscles for problem solving.
Note: For more details, see the PSintro.pdf notes posted at the website.

Introduction to Problem Solving (2)
Effective vs. ineffective problem solvers (engagers vs. dismissers):
- Engagers have a history of success; dismissers have a history of failure.
- You probably engage some problems and dismiss others.
- You could solve more problems if you overcame the mental hurdles that lead to dismissing.
- Transfer successful problem solving in some parts of your life to other areas.
Note: Mental hurdles: that is, you have the knowledge and ability necessary to solve the problem, if you had sufficient motivation.

Introduction to Problem Solving (3)
Getting your hands dirty:
- Example: repairing a wobbly table. Get underneath and look.
- Example: repairing a dryer. Open up the back panel and look.
We will see examples of this concept, initially with doing summations.

Investigation and Argument
Problem solving has two parts: the investigation and the argument.
- Students are used to seeing only the argument in their textbooks and lectures.
- To be successful in school and in life, one needs to be good at both.
- To solve the problem, you must investigate successfully. Then, to give the answer to your client, you need to be able to make the argument in a way that gets the solution across clearly and succinctly. This takes writing skills and proof skills.
- Methods of argument: deduction (direct proof), contradiction, induction.
Note: Unfortunately, while seeing lots of examples of argument (proof), too many students don't recognize the importance of being good at doing it.
Heuristics for Problem Solving (1)
These heuristics are most appropriate for problem solving "in the small": puzzles, math and CS test or homework problems. A list of standard heuristics:
1. Externalize: write it down.
   - After motivation and mental attitude, the most important limitation on your ability to solve problems is biological: for active manipulation, you can only store 7 ± 2 pieces of information.
   - Take advantage of your environment to get around this: write things down, and manipulate the problem (find a good representation).

Heuristics for Problem Solving (2)
2. Get your hands dirty: play around with the problem to get some initial insight.
3. Look for special features. Example: cryptogram addition problems such as AD + DI = DID.
4. Go to the extremes: study problem boundary conditions. (Note: we will use this one often.)
5. Simplify: this might give a partial solution that can be extended to the original problem.

Heuristics for Problem Solving (3)
6. Penultimate step: what precondition must be in place before the final solution step is possible? Solving the penultimate step might be easier than the original problem.
7. Lateral thinking: don't be led into a blind alley. Using an inappropriate problem-solving strategy might blind you to the solution.
8. Wishful thinking: a version of simplifying the problem. Transform the problem into something easy; take the start position to something that you wish were the solution. That might be a smaller step to the actual solution.
Note: Rush Hour is an excellent example. We will see another example next week: the Towers of Hanoi (TOH).

Heuristics for Problem Solving (4)
9. Sleep on it.
10. Symmetry and invariants: symmetries in the problem might give clues to the solution.

Working With a Partner
- The homework problems are intended to make you struggle.
- Having a partner will be important.
- Possible roles of the partner: active collaborator, skeptical verifier.
- You might not get something right, but you should never get it wrong!

Pairs Problem Solving
An effective way to work in pairs to solve problems. Partner roles: problem solver and listener.
- Responsibilities of the problem solver: constant vocalization; spell out all the assumptions; carefully detail all steps taken.
- Responsibilities of the listener: continually check for accuracy; demand constant vocalization.
Note: See the paper on pairs programming.

Errors in Reasoning
Getting the wrong answer on a test or homework usually results from a breakdown in problem solving. Typical breakdowns:
- Failing to observe and use all relevant facts of a problem.
- Failing to approach the problem in a systematic manner; instead, making leaps in logic without checking steps.
- Failing to spell out relationships fully.
- Being sloppy and inaccurate in collecting information and carrying out mental activities.
Note: In pairs problem solving (such as the homework in this class), there had to be a serious breakdown if the answer is wrong, since the partner (the listener) should never have let it happen.

Our Central Question
This semester's central question: Given a problem, do we have (or can we devise) a good solution? "Good solution" generally means as efficient as the problem will allow. How do we know if a solution is good or not?
Note: Ultimately, we want efficient programs. How do we measure the efficiency of a program? (Assume we are concerned primarily with time.) On what input? How do we speed it up? When do we stop speeding it up? Should we write the program in the first place?

Problems (1)
- Our problems must be well-defined enough to be solved on computers.
- A problem is a function (i.e., a mapping of inputs to outputs).
- We have different instances (inputs) for the problem, where each instance has a size.
- To solve a problem, we must provide an algorithm, a coding of problem instances into inputs for the algorithm, and a coding of outputs into solutions.
Note: Actually, to solve a problem we need more than just a clear definition. By the end of the semester, we will discuss problems that are not computable (i.e., cannot be solved) even though their definition is clear.

Problems (2)
- An algorithm executes the mapping.
- A proposed algorithm must work for ALL instances (give the correct mapping to the output for that input instance).
- GOAL: solve problems with as little computational effort per instance as possible.
Notes: Actually, we will relax this restriction later (approximation and probabilistic algorithms). We are most often interested in solutions to large instances of the problem (asymptotic analysis). Occasionally we are concerned with small instances; then, constants matter.

Modeling Program Cost
Since we don't want to spend the time to write worthless programs, we will focus on algorithm efficiency. This is a model. We need a yardstick. It should:
- measure something we care about;
- be quantitative, allowing comparisons;
- be easy to compute (the measure, not the program);
- be a good predictor.
Note: Remember that we are discussing an analytic model. We do not want to do performance analysis on a real program.

Modeling Algorithm Cost
The fundamental driver for algorithm analysis is the behavior (growth rate) of the algorithm as the problem size grows. To model the growth rate of an algorithm, we need:
- a measure for problem size;
- a measure for solution effort.
We use a count of the basic operations as the measure of solution effort. Just to complicate things: algorithms can behave very differently on different inputs of a given size. Best, average, and worst case come in here.
Note: To have a meaningful discussion about the behavior of an algorithm, we have to agree in advance about which of the many behaviors that algorithm might have we mean, in terms of its growth rate as the input size grows.

Cost Model (1)
To get a measurement, we need a model. Example:
- PROBLEM: square a number.
- Model the input size as simply the value of the number.
- In the solution: assigning to a variable takes fixed time; all other operations take no time.
Notes: This is an example of a model for a cost measure. It might or might not be a good model. Later we will come to realize that this is a lousy model for the input size of a numeric problem, but it is good enough for now.

Cost Model (2)
sum = n*n;
One assignment was made, so the cost is 1.

sum = 0;
for (i=1; i<=n; i++)
  sum = sum + n;
Assignments made are 1 + Σ (i = 1 to n) 1 = n + 1. (Depending on how you want to deal with the loop variable, you might want to say it is 2n + 2.)
Note: n + 1 vs. 2n + 2: does it matter? Not so much. We didn't know the exact amount of time for an operation to begin with, so the factor of 2 doesn't seem to mean much. What is important is that the growth rates of these two are the same.

Cost Model (3)
sum = 0;
for (i=1; i<=n; i++)
  for (j=1; j<=n; j++)
    sum = sum + 1;
Assignments made are 1 + Σ (i = 1 to n) Σ (j = 1 to n) 1 = n^2 + 1.
What makes a model good?
- Consider string assignment (done by copying). In this case, is our model still good?
- Consider accessing the ith value in a list, using a model of constant time for this operation. Is this a good model for an array? For a linked list?
Note: In our example with for loops, n + 1 and 2n + 2 are both linear, so they are both equally predictive of the growth rate.

Big Issues
- How do we create an efficient algorithm?
- Q: How do we recognize a good algorithm? A: By the relationship of its performance to the intrinsic difficulty of the problem.
- How hard is a problem?
Notes: We use problem solving and algorithm design skills; this semester we will see some standard algorithm design techniques (example: dynamic programming). Recognizing a good algorithm is a key issue, because we don't know whether to stop trying to create a better algorithm unless we can recognize one; this is where lower bounds come in. For now, we restrict "hard" to mean "how much does it cost to run?" Later, we will talk about some different meanings for the term "hard."

Big Issues (2)
General plan:
- Define a PROBLEM.
- Build a MODEL to measure the cost of a solution to the problem.
- Design an ALGORITHM to solve the problem.
- ANALYZE both problem and algorithm under the model: analyze the algorithm to get an UPPER BOUND; analyze the problem to get a LOWER BOUND.
- COMPARE the bounds to see if our solution is good enough.
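One way to get your hands dirty with this cost model is to instrument the three fragments and count the assignments directly. The sketch below is mine, not from the slides; it counts only the assignments to sum (not to the loop variables), so the totals should come out to 1, n + 1, and n^2 + 1 under the model above.

#include <stdio.h>

int main(void) {
    int n = 100;
    long count, sum;

    count = 0;                       /* count assignments to sum only */
    sum = n * n;                     count++;
    printf("square:      %ld assignment(s), expect 1\n", count);

    count = 0;
    sum = 0;                         count++;
    for (int i = 1; i <= n; i++) {
        sum = sum + n;               count++;
    }
    printf("single loop: %ld assignments, expect n + 1 = %d\n", count, n + 1);

    count = 0;
    sum = 0;                         count++;
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j++) {
            sum = sum + 1;           count++;
        }
    printf("double loop: %ld assignments, expect n^2 + 1 = %d\n", count, n * n + 1);

    (void)sum;   /* sum itself is not inspected; only the counts matter here */
    return 0;
}

Counting the loop-variable assignments as well gives the 2n + 2 figure instead of n + 1; either way the growth rate is the same.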

Big Issues (3)
If the bounds do not match, then here are some options:
- Redesign the algorithm.
- Tighten the lower bound.
- Change the model.
- Change the problem.

Analyzing Problems
Given a problem to solve, we often can come up with an algorithm that solves it. The question then is: is this a "good" algorithm? What does that even mean? Our definition of a "good" algorithm will be one that requires no more work than is required by the intrinsic difficulty of the problem.
Note: This requires that we know the cost of our algorithm, and the intrinsic difficulty of the problem.

Towers of Hanoi (1)
This is an exercise first in the process of problem solving, and second in the process of analyzing a problem in detail. Try to pretend that you have never seen this problem before, that you are approaching it for the first time.
Given: 3 poles and n disks of different sizes placed in order of size on Pole A.
Note: This problem is especially good as a starting example for the analysis of a problem, because there is essentially only one optimal algorithm, and it is simple enough to recognize this to be a fact. This avoids a lot of the complication that we normally encounter in the analysis process, even when considering the simplest of problems. As a bonus, we can talk a bit about the creative aspects of problem solving, as separate from the mechanics of analyzing the efficiency of the solution.

Towers of Hanoi (2)
Problem: move the disks to Pole B, given the following constraints:
- A move takes the topmost disk from one pole and places it on another pole (the only action allowed).
- A disk may never be on top of a smaller disk.

The Model
Recall that to do analysis, we have to define a model with two parts:
1. The size of the input is the number of disks.
2. The cost of the solution is the number of moves.

Finding an Algorithm
Start by trying to solve the problem for small instances: 0 disks, 1 disk, 2 disks... When we get to 3 disks, it starts to get harder. Can we generalize the insight from solving for 3 disks? 4 disks?
- Observation 1: The largest disk has no effect on the movements of the other disks. Why? Because it is always below the other disks, so they can move around as though it did not exist.
- Observation 2 (key!): We can't move the bottom disk from Pole A to Pole B unless all other disks are on Pole C.
Notes: Think about all the possible choices for a 3-disk series of moves. Problem solving often relies on a key insight that lets you crack the problem. Similarly, analysis of the problem might rely on a key insight on how to view the analysis: often a simplification of the states or progress of the algorithm, or a recognition of the key input classes for the problem.

Recursive Solutions (1)
When we generalize the problem to more disks, we end up with something like:
- Move all but the bottom disk to Pole C.
- Move the bottom disk from Pole A to Pole B.
- Move the remaining disks from Pole C to Pole B.
How do we deal with the n-1 disks (twice)? Use recursion.
Note the problem-solving heuristics used: get our hands dirty (play with some simple examples); go to the extremes (check the small cases first); penultimate step (the key insight is that we can't solve the problem until we move the bottom disk, and there is only one efficient way to do that).

Recursive Solutions (2)
Forward-backward strategy: solve simple special cases and generalize their solution, then test the generalization on other special cases. The resulting algorithm:
void Tower1(int n, POLE start, POLE goal, POLE tmp) {
  if (n == 0) return;               // Base case
  Tower1(n-1, start, tmp, goal);    // Recurse: move n-1 disks out of the way
  move(start, goal);                // Move one disk (the largest)
  Tower1(n-1, tmp, goal, start);    // Recurse: move n-1 disks back on top
}
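A minimal driver (mine, not from the slides) makes it easy to get your hands dirty with this algorithm: treat POLE as a simple character label, let move() just bump a counter, and print the number of moves for small n so the values can be compared with the table in the next section.

#include <stdio.h>

typedef char POLE;           /* a pole is just a label: 'A', 'B', or 'C' */
static long moves = 0;       /* how many moves Tower1 has made */

static void move(POLE start, POLE goal) {
    (void)start; (void)goal; /* only the count matters for the analysis */
    moves++;
}

static void Tower1(int n, POLE start, POLE goal, POLE tmp) {
    if (n == 0) return;                /* base case */
    Tower1(n-1, start, tmp, goal);     /* move n-1 disks out of the way */
    move(start, goal);                 /* move the largest disk */
    Tower1(n-1, tmp, goal, start);     /* move n-1 disks back on top */
}

int main(void) {
    for (int n = 0; n <= 6; n++) {
        moves = 0;
        Tower1(n, 'A', 'B', 'C');
        printf("n = %d  moves = %ld\n", n, moves);
    }
    return 0;
}

The counts it prints (0, 1, 3, 7, 15, 31, 63) are exactly the f(n) values tabulated below.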

Analysis of Tower1
There is only one input instance of size n. We want to count the number of moves required as a function of n. Some facts: f(0) = 0, f(1) = 1, f(2) = 3, f(3) = 7. How can we generalize?
Notes: Worst, best, and average cost are all the same here, so it doesn't matter which you do. This is one of the reasons why we picked this algorithm to discuss: we don't have the complexity of a range of inputs for a given size n. The second reason why we picked this problem to start with is that it is obvious what the lower bound cost is. So we can focus entirely on the technique of proving the math, not on figuring out what to analyze.

Analysis of Tower1 (2)
Observe that, beyond the base case, the algorithm calls itself recursively. The cost of the recursive call is the cost of the algorithm... on a smaller input. What is the cost of the algorithm? We don't know, but we CAN give it a name:
f(n) = f(n-1) + 1 + f(n-1) = 2 f(n-1) + 1 for n > 0, with f(0) = 0.

Recurrence Relation
If we have a recursive algorithm, then we model its cost with a recurrence relation. What is the recurrence relation for Tower1?
f(n) = 0 if n = 0; f(n) = 2 f(n-1) + 1 if n > 0.

Recurrence Relation (cont.)
How can we find a closed-form solution for this recurrence? Normally, we can't get anywhere with one of these analysis problems until we get our hands dirty. Usually like this:
n:    0  1  2  3   4   5   6
f(n): 0  1  3  7  15  31  63
It looks like each time we add a disk, we roughly double the cost: something like 2^n. If we examine some simple cases, we see that they appear to fit the equation f(n) = 2^n - 1. How do we prove that this ALWAYS works?
Note: In practice, this is a common way to start: look for a pattern. It is so common that it has its own name: Guess and Test.
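Guess and Test is how the slides proceed. As a cross-check (not from the notes), you can also reach the closed form by repeatedly expanding ("unrolling") the recurrence and spotting the geometric series:

f(n) = 2 f(n-1) + 1
     = 2 (2 f(n-2) + 1) + 1 = 4 f(n-2) + 2 + 1
     = 2^k f(n-k) + 2^(k-1) + ... + 2 + 1
     = 2^n f(0) + (2^(n-1) + ... + 2 + 1)     (taking k = n)
     = 0 + (2^n - 1) = 2^n - 1.

Either way, the guess still needs the induction proof that follows to be airtight.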

Proof for Recurrence
Let's ASSUME that f(n-1) = 2^(n-1) - 1, and see what happens. From the recurrence,
f(n) = 2 f(n-1) + 1 = 2 (2^(n-1) - 1) + 1 = 2^n - 1.
Implication: if there is EVER an n for which f(n) = 2^n - 1, then for all greater values of n, f conforms to this rule. This is the essence of proof by induction.

Proof by Induction
To prove by induction, we need to show two things:
- We can get started (base case).
- Being true for k implies that it is true also for k + 1.
Here is the complete induction proof for Tower1:
- For n = 0, f(0) = 0, so f(0) = 2^0 - 1.
- Assume f(k) = 2^k - 1 for all k < n. Then, from the recurrence, we have f(n) = 2 f(n-1) + 1 = 2 (2^(n-1) - 1) + 1 = 2^n - 1.
- Thus, being true for k - 1 implies that it is also true for k, and we conclude that the formula is correct for all n >= 0.
Is this a good algorithm? That depends on what? On the intrinsic difficulty of the problem!

Lower Bound of a Problem (1)
To decide if the algorithm is good, we need a lower bound on the cost of the PROBLEM. We are fortunate for this problem that we have insight about the most efficient way POSSIBLE to solve it, so we can reason about that. We know that we must move all disks off the bottom disk, move the bottom disk, then move all the disks again. Any other algorithm can only add work.

Lower Bound (cont.)
Lower bounds (of problems) are harder than upper bounds (of algorithms) because we must consider ALL of the possible algorithms, including the ones we don't know!
- Upper bound: how bad is the algorithm?
- Lower bound: how hard is the problem?
Lower bounds don't give you a good algorithm. They only help you know when to stop looking. If the lower bound for the problem matches the upper bound for the algorithm (within a constant factor), then we know that we can find an algorithm that is better only by a constant factor. Can a lower bound tell us if an algorithm is NOT optimal?
Notes: Since we cannot even enumerate all the algorithms and check all the bounds, we need a different approach! It is a judgement call as to whether it is worth improving by a constant factor. And no, sorry, a lower bound cannot tell us that an algorithm is not optimal. Why not? Because we might not have the tightest possible lower bound!

Lower Bounds for the TOH Problem
Try #1: We must move each disk at least twice, except for the largest, which we move at least once. So f(n) >= 2n - 1. Is this a good match to the cost of our algorithm? No! Ω(n) isn't close to O(2^n). Where is the problem: the lower bound or the algorithm?
Insight #1: f(n) > f(n-1). Seems obvious, but why? Because we must move the n-1 disks off the bottom disk first. Is this true for all problems? No! For example, sorting cost depends on the particular problem instance.
Try #2: To move the bottom disk to Pole B, we MUST first move n-1 disks to Pole C. Later, we MUST move n-1 disks back to Pole B. So f(n) >= 2 f(n-1) + 1. Thus, Tower1 is optimal (for our model), since it does nothing more than the minimum required by this observation.
Warning: Normally we cannot prove anything about a problem in general with this sort of behavioristic argument. Usually, we cannot say so much about how an algorithm must work.

New Models
- New model #1: We can move a stack of disks in one move. (A big help! O(n) or even O(1).)
- New model #2: Not all disks start on Pole A. (Doesn't seem to change the cost of the problem. Combining this with model #1: looks to be O(n).)
- New model #3: Different numbers of poles.
- New model #4: We want to know what the kth move is.

Problem Solving Algorithm
If the upper and lower bounds match, then stop,
else if they are close or the problem isn't important, then stop,
else if the model focuses on the wrong thing, then restate it,
else if the algorithm is too slow, then generate a faster algorithm,
else if the lower bound is too weak, then generate a stronger bound.
Repeat until done.
Note: Does this algorithm always terminate? No: you might get stuck in a loop if you go around and make no progress.

Factorial Growth (1)
Which function grows faster: f(n) = 2^n or g(n) = n!? How about h(n) = 4^n?
n:           1    2    3     4      5      6       7       8
g(n) = n!:   1    2    6    24    120    720    5040   40320
f(n) = 2^n:  2    4    8    16     32     64     128     256
h(n) = 4^n:  4   16   64   256   1024   4096   16384   65536
Notes: Hopefully your intuition tells you that n! grows much faster than 2^n. The comparison with 4^n is probably not as obvious. It helps to build some proficiency with exponents: 4^n = 2^(2n) = (2^n)(2^n) = (2^n)^2. So if your intuition is good, you will realize that n! grows much faster than 4^n for the same reason that it grows faster than 2^n (since most numbers are bigger than 4, or 2). It so happens that n! becomes bigger than 4^n at n = 9. This is a deficiency of our common practice of writing out a few terms to explore what happens, and why we can never stop with a guess of a pattern.
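The note warns that a few table rows can mislead. A small sketch (mine, not from the slides) extends the table with 64-bit integers and flags where n! first passes 4^n:

#include <stdio.h>

int main(void) {
    unsigned long long fact = 1, pow2 = 1, pow4 = 1;
    for (int n = 1; n <= 12; n++) {
        fact *= (unsigned long long)n;   /* n!  */
        pow2 *= 2;                       /* 2^n */
        pow4 *= 4;                       /* 4^n */
        printf("n=%2d  n!=%12llu  2^n=%10llu  4^n=%10llu  %s\n",
               n, fact, pow2, pow4, fact > pow4 ? "n! > 4^n" : "");
    }
    return 0;
}

It confirms the crossover at n = 9 claimed in the note.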

Factorial Growth (cont.)
Consider the recurrences:
h(n) = 4 if n = 1; h(n) = 4 h(n-1) if n > 1.
g(n) = 1 if n = 1; g(n) = n g(n-1) if n > 1.
I hope your intuition tells you the right thing. But how do you PROVE it? Induction? What is the base case?
Notes: The n > 1 clause is the important part of the recurrence for growth. The second recurrence is just n! in recurrence form. Sorry, we don't know the base case; it must be something bigger than 8. So we can't use induction! Induction is great for verifying a hypothesis. It is not so good for generating candidate formulae.

Using Logarithms (1)
n! >= 2^n iff log n! >= log 2^n = n. Why? (Take the log of both sides. Note that log always means log base 2 for us unless explicitly stated otherwise.)
n! = n (n-1) ... (n/2) ... 2 · 1 >= (n/2)(n/2) ... (n/2) · 1 ... 1 = (n/2)^(n/2),
since there are n/2 factors that are at least n/2 and n/2 factors that are at least 1. (This isn't quite perfect: what if n is odd?)
Therefore log n! >= log (n/2)^(n/2) = (n/2) log(n/2).
We need only show that this grows to be bigger than n.

Using Logarithms (2)
(n/2) log(n/2) >= n
log(n/2) >= 2          (multiply both sides by 2/n)
n/2 >= 2^2 = 4         (take the antilog)
n >= 8                 (multiply by 2)
So n! >= 2^n once n >= 8. Now we could prove this with induction, using 8 for the base case. What is the tightest base case? How did we get such a big over-estimate?
Note: We grossly overestimated when going from n! to (n/2)^(n/2).

Logs and Factorials
We have proved that n! ∈ Ω(2^n). We have also proved that log n! ∈ Ω(n log n). From here, it is easy to prove that log n! ∈ O(n log n), so log n! = Θ(n log n). This does NOT mean that n! = Θ(n^n). Note that log n = Θ(log n^2), but n is not Θ(n^2). The log function is a "flattener" when dealing with asymptotics.
Notes: Graphically, we can see a curve for n! that is above the curve for 2^n, but we don't know how big the gap is (if any), because n! < n^n. Fine distinction: f > g iff log f > log g. BUT, even if f(x) grows faster than g(x), it is possible that log f(x) does NOT grow faster than log g(x).
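The slide asks for the tightest base case without answering it here. A quick check (mine, not from the slides) prints the comparison for small n; it suggests that n! >= 2^n holds at n = 0, fails for n = 1 through 3, and holds again from n = 4 on (and once it holds, multiplying by n >= 2 at each step keeps it true), so 4 is the tightest usable base case.

#include <stdio.h>

int main(void) {
    unsigned long long fact = 1, pow2 = 1;    /* 0! = 1, 2^0 = 1 */
    for (int n = 0; n <= 10; n++) {
        printf("n=%2d  n!=%8llu  2^n=%5llu  %s\n",
               n, fact, pow2, fact >= pow2 ? "n! >= 2^n" : "");
        fact *= (unsigned long long)(n + 1);  /* (n+1)! for the next iteration */
        pow2 *= 2;                            /* 2^(n+1) */
    }
    return 0;
}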

Algorithm Upper Bounds (1)
Worst case cost (for size n): the maximum cost of the algorithm over all problem instances of size n.
Best case cost (for size n): the minimum cost of the algorithm over all problem instances of size n.
Notation: A is the algorithm; I_n is the set of all possible inputs to A of size n; f_A is the function expressing the resource cost of A; I is an input in I_n.
worst cost(A) = max over I in I_n of f_A(I).
best cost(A)  = min over I in I_n of f_A(I).
Notes: It is possible that the best or worst case cost changes radically with n; that is, even n might have a very different cost from odd n. The point that we are considering all of the inputs of size n is crucial. In other words, we don't pick the n for which the best (or worst) case occurs, so it is wrong to say something like "the best case is when n = 1."

Algorithm Upper Bounds (2)
Examples:
- Factorial: one input of size n, one cost. (The input is just a value, n.)
- Findmax: various models for the number of inputs; all cases have the same cost.
- Find: various models for the number of inputs; n different costs.
Notes: Model choices: 1. All numbers: an infinite number of inputs. 2. A permutation of 1 to n: n! inputs. 3. Focus only on the position of x: n inputs. Find has the same model choices as findmax. Show graphs of cost vs. I_n for factorial, findmax (3rd model), and find (3rd model).

Average Case
We may want the average case cost. For each input of size n, we need its frequency and its cost. Given this information, we can calculate the weighted average: the sum over all I in I_n of freq(I) × cost(I).
Q: Can the average cost be worse than the worst cost? Or better than the best cost?
Notes: Frequency can be hard to determine! Example: the average cost of sequential search is (n + 1)/2, but only if the frequency of occurrence of each case is equal. No, the average cannot be worse than the worst cost, because that would require at least one case with greater cost than the worst case; and no, it cannot be better than the best cost, for the same reason.

Problem Bounds (1)
In general, analyzing a problem is more complicated than our example for TOH:
- There may be lots of inputs of size n.
- We need to decide if we are discussing the worst, average, or best case, since each could be different.
- There might be lots of "reasonable" algorithms to solve the problem.
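The note says the average cost of sequential search is (n + 1)/2 when every position is equally likely. A small sketch (mine, not from the slides; the array contents and the equal-frequency assumption are illustrative) counts comparisons over all n possible positions of x and reports the best, worst, and weighted-average costs:

#include <stdio.h>

/* Sequential search: return the number of comparisons used to find x in A[0..n-1]. */
static int find_cost(const int *A, int n, int x) {
    for (int i = 0; i < n; i++)
        if (A[i] == x)
            return i + 1;     /* i+1 comparisons were made */
    return n;                 /* not found: n comparisons */
}

int main(void) {
    enum { N = 10 };
    int A[N];
    for (int i = 0; i < N; i++) A[i] = i;   /* distinct keys 0..N-1 */

    int best = N, worst = 0;
    double total = 0.0;
    for (int i = 0; i < N; i++) {           /* x equally likely to be each key */
        int c = find_cost(A, N, A[i]);
        if (c < best)  best = c;
        if (c > worst) worst = c;
        total += c;
    }
    printf("best = %d, worst = %d, average = %.1f (expect (n+1)/2 = %.1f)\n",
           best, worst, total / N, (N + 1) / 2.0);
    return 0;
}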

Problem Bounds (2)
Upper bound (for the chosen case: worst, best, or average): the best algorithm that we KNOW, for THAT situation. This is what we CAN achieve.
Lower bound (for the chosen case: worst, best, or average): the minimum cost for ANY POSSIBLE algorithm, for THAT situation. This is what any algorithm MUST pay.

Problem Bounds (3)
Lower bound in the worst case:
min over A in A_M of ( max over I in I_n of f_A(I) )
where:
- A_M: the set of all possible algorithms that solve the problem under the model
- A: some algorithm
- I_n: the set of all possible inputs to A of size n
- f_A: the function expressing the resource cost of A
- I: an input in I_n
- f_A(I): the cost to execute algorithm A on input I

Growth Rates
Two functions of n have different growth rates if, as n goes to infinity, their ratio either goes to infinity or goes to zero.
[Figure: plot of common growth-rate curves.] Where does (1.618)^n go on here?

Estimating Growth Rates
Exact equations relating program operations to running time require machine-dependent constants. Sometimes, the equation for the exact running time is complicated to compute. Usually, we are satisfied with knowing an approximate growth rate.
Example: Given two algorithms with growth rates c_1 n^2 and c_2 n!, do we need to know the values of c_1 and c_2?
Consider n^2 and 3n. PROVE that n^2 must eventually become (and remain) bigger.
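The definition above is stated as the limit of a ratio. A quick numeric sketch (mine, not from the slides) for the n^2 versus 3n example:

#include <stdio.h>

int main(void) {
    /* If n^2 and 3n had the same growth rate, this ratio would approach a
       nonzero constant; instead it equals n/3 and grows without bound. */
    for (long n = 10; n <= 1000000; n *= 10)
        printf("n = %7ld   n^2 / 3n = %.1f\n", n, (double)n * n / (3.0 * n));
    return 0;
}

The proof by contradiction below makes the same point without tables.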

Proof by Contradiction
Assume there are some values for constants r and s such that, for all values of n, n^2 < rn + s. Then, dividing by n, n < r + s/n. But as n grows, what happens to s/n? It goes to zero. Since n grows toward infinity while r + s/n does not, the assumption must be false.
Note: Conclusion: in the limit, as n goes to infinity, constants don't matter. Limits are the typical way to prove that one function grows faster than another.

Some Growth Rates (1)
If f grows faster than g, then:
- Must f^2 grow faster than g^2? Yes.
- Must log f grow faster than log g? No.
Note: log n and log n^2 are within a constant factor of each other; that is, their growth rate is the same!

Some Growth Rates (2)
Since n^2 grows faster than n:
- 2^(n^2) grows faster than 2^n. (Took the antilog of both sides.)
- n^4 grows faster than n^2. (Squared both sides.)
- n grows faster than sqrt(n). (n = (sqrt(n))^2: replaced n with sqrt(n) on both sides.)
- 2 log n grows no slower than log n. (Took the log of both sides; log flattens growth rates.)

Some Growth Rates (3)
Since n! grows faster than 2^n:
- (n!)! grows faster than (2^n)!. (Applied factorial to both sides.)
- 2^(n!) grows faster than 2^(2^n). (Took the antilog of both sides.)
- (n!)^2 grows faster than 2^(2n) = 4^n. (Squared both sides.)
- sqrt(n!) grows faster than 2^(n/2). (Took the square root of both sides.)
- log n! grows no slower than n. (Took the log of both sides. Actually, it grows faster, since log n! = Θ(n log n).)
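A concrete pair (my example, not from the slides) makes the "No" answer and the flattener remark precise. Take f(n) = n^2 and g(n) = n:

f(n) / g(n) = n^2 / n = n, which goes to infinity,
log f(n) / log g(n) = (2 log n) / (log n) = 2, a constant.

So f grows faster than g, yet log f and log g have the same growth rate; they differ only by the constant factor 2.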