
Chapter 5. Divide and Conquer (CLRS 4.3). Slides by Kevin Wayne. Copyright 2005 Pearson-Addison Wesley. All rights reserved.

Divide-and-Conquer

Divide-and-conquer. Break up problem into several parts. Solve each part recursively. Combine solutions to sub-problems into overall solution.

Most common usage. Break up problem of size n into two equal parts of size ½n. Solve two parts recursively. Combine two solutions into overall solution in linear time.

Consequence. Brute force: Θ(n²). Divide-and-conquer: O(n log n).

"Divide et impera. Veni, vidi, vici." - Julius Caesar
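To make the template concrete, here is a minimal, hypothetical Python skeleton of the paradigm. The function and parameter names are ours, not the slides'; any concrete algorithm supplies its own divide, base-case, and combine steps.

def divide_and_conquer(problem, is_base, solve_base, divide, combine):
    """Break up a problem, solve the parts recursively, combine solutions."""
    if is_base(problem):
        return solve_base(problem)
    parts = divide(problem)                       # break into sub-problems
    subsolutions = [divide_and_conquer(p, is_base, solve_base, divide, combine)
                    for p in parts]               # solve each part recursively
    return combine(subsolutions)                  # combine into overall solution

# Example use: maximum of a list by splitting into two halves.
print(divide_and_conquer(
    [5, 1, 9, 3],
    is_base=lambda p: len(p) == 1,
    solve_base=lambda p: p[0],
    divide=lambda p: (p[:len(p) // 2], p[len(p) // 2:]),
    combine=max))                                 # -> 9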

5.1 Mergesort

Sorting

Sorting. Given n elements, rearrange in ascending order.

Obvious sorting applications. List files in a directory. Organize an MP3 library. List names in a phone book. Display Google PageRank results.

Problems become easier once sorted. Find the median. Find the closest pair. Binary search in a database. Identify statistical outliers. Find duplicates in a mailing list.

Non-obvious sorting applications. Data compression. Computer graphics. Interval scheduling. Computational biology. Minimum spanning tree. Supply chain management. Simulate a system of particles. Book recommendations on Amazon. Load balancing on a parallel computer. ...

Mergesort

Mergesort. Divide array into two halves. Recursively sort each half. Merge two halves to make sorted whole. [John von Neumann, 1945]

A L G O R I T H M S
A L G O R | I T H M S   divide, O(1)
A G L O R | H I M S T   sort, 2T(n/2)
A G H I L M O R S T     merge, O(n)

Merging

Merging. Combine two pre-sorted lists into a sorted whole.

How to merge efficiently? Linear number of comparisons. Use a temporary array.

Challenge for the bored. In-place merge, using only a constant amount of extra storage. [Kronrod, 1969]

Merging

Merge. Keep track of the smallest remaining element in each sorted half. Insert the smaller of the two into the auxiliary array. Repeat until done; once one half is exhausted, copy the rest of the other half.

Example. Merging the sorted halves A G L O R and H I M S T, the auxiliary array grows one element per step:
A → A G → A G H → A G H I → A G H I L → A G H I L M → A G H I L M O → A G H I L M O R → A G H I L M O R S (first half exhausted) → A G H I L M O R S T (second half exhausted).
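As a concrete illustration of the merge step traced above, here is a short Python sketch (our own rendering, not the slides' code): merge with an auxiliary array by repeatedly taking the smaller head element, then use it inside mergesort.

def merge(left, right):
    """Combine two pre-sorted lists using a linear number of comparisons."""
    aux, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # take the smaller head element
            aux.append(left[i]); i += 1
        else:
            aux.append(right[j]); j += 1
    aux.extend(left[i:])                 # one half exhausted: copy the rest
    aux.extend(right[j:])
    return aux

def mergesort(a):
    """Divide array into halves, sort each recursively, merge."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(mergesort(a[:mid]), mergesort(a[mid:]))

print(mergesort(list("ALGORITHMS")))
# ['A', 'G', 'H', 'I', 'L', 'M', 'O', 'R', 'S', 'T'], as on the slide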

A Useful Recurrence Relation

Def. T(n) = number of comparisons to mergesort an input of size n.

Mergesort recurrence.
T(1) = 0;
T(n) ≤ T(⌈n/2⌉) + T(⌊n/2⌋) + n for n > 1.

Solution. T(n) ∈ O(n log₂ n).

Assorted proofs. We describe several ways to prove this recurrence. Initially we assume n is a power of 2 and replace ≤ with =.

Proof by Recursion Tree

The recursion tree has log₂ n levels. Level k has 2^k nodes, each a sub-problem of size n/2^k costing n/2^k to merge, so each level contributes 2^k (n/2^k) = n. Total over all levels: n log₂ n.

Proof by Induction

Claim. If T(n) satisfies this recurrence, then T(n) = n log₂ n. (Assumes n is a power of 2.)

Pf. (by induction on n)
Base case: n = 1. T(1) = 0 = 1 · log₂ 1.
Inductive hypothesis: T(n) = n log₂ n.
Goal: show that T(2n) = 2n log₂(2n).
T(2n) = 2T(n) + 2n = 2n log₂ n + 2n = 2n (log₂(2n) − 1) + 2n = 2n log₂(2n). ▪

Analysis of Mergesort Recurrence

Claim. If T(n) satisfies the above recurrence (floors and ceilings included), then T(n) ≤ n ⌈lg n⌉, where lg denotes log₂.

Pf. (by induction on n)
Base case: n = 1. T(1) = 0 = 1 · ⌈lg 1⌉.
Define n₁ = ⌊n/2⌋, n₂ = ⌈n/2⌉.
Induction step: let n ≥ 2 and assume the claim holds for 1, 2, ..., n − 1. Then
T(n) ≤ T(n₁) + T(n₂) + n ≤ n₁⌈lg n₁⌉ + n₂⌈lg n₂⌉ + n ≤ n⌈lg n₂⌉ + n ≤ n(⌈lg n⌉ − 1) + n = n⌈lg n⌉,
using n₂ = ⌈n/2⌉ ≤ 2^{⌈lg n⌉ − 1}, so that ⌈lg n₂⌉ ≤ ⌈lg n⌉ − 1. ▪
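A quick empirical sanity check of the bound: the hypothetical sketch below (our own helper names) instruments the merge step to count comparisons and asserts comparisons ≤ n log₂ n for a few input sizes.

import math

def merge_count(left, right):
    """Merge two sorted lists; return (merged list, number of comparisons)."""
    aux, i, j, comps = [], 0, 0, 0
    while i < len(left) and j < len(right):
        comps += 1
        if left[i] <= right[j]:
            aux.append(left[i]); i += 1
        else:
            aux.append(right[j]); j += 1
    aux.extend(left[i:]); aux.extend(right[j:])
    return aux, comps

def sort_count(a):
    """Mergesort a list; return (sorted list, total comparisons)."""
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    l, cl = sort_count(a[:mid])
    r, cr = sort_count(a[mid:])
    m, cm = merge_count(l, r)
    return m, cl + cr + cm

for n in (2, 16, 100, 1000):
    _, c = sort_count(list(range(n))[::-1])   # reversed input
    assert c <= n * math.log2(n), (n, c)      # bound from the claim above
print("comparison bound holds")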

5.4 Closest Pair of Points

Closest Pair of Points

Closest pair. Given n points in the plane, find a pair with smallest Euclidean distance between them.

Fundamental geometric primitive. Graphics, computer vision, geographic information systems, molecular modeling, air traffic control. Special case of nearest neighbor, Euclidean MST, Voronoi; the fast closest-pair algorithm inspired fast algorithms for these problems.

Brute force. Check all pairs of points p and q with Θ(n²) comparisons.

1-D version. O(n log n) easy if points are on a line.

Assumption. No two points have the same x coordinate (to make the presentation cleaner).

Closest Pair of Points

Algorithm.
Divide: draw vertical line L so that roughly ½n points lie on each side.
Conquer: find closest pair in each side recursively (distances 21 and 12 in the running example).
Combine: find closest pair with one point in each side (seems like Θ(n²); distance 8 in the example).
Return best of the 3 solutions.

Closest Pair of Points

Find closest pair with one point in each side, assuming that distance < δ, where δ = min(12, 21) is the best of the two recursive solutions.

Observation: only need to consider points within δ of line L.
Sort points in the 2δ-strip by their y coordinate.
Only check distances of those within 11 positions in the sorted list!

Closest Pair of Points

Def. Let sᵢ be the point in the 2δ-strip with the i-th smallest y-coordinate.

Claim. If |i − j| ≥ 12, then the distance between sᵢ and sⱼ is at least δ.
Pf. Divide the strip into ½δ-by-½δ boxes; no two points lie in the same box. Two points at least 2 rows apart have distance ≥ 2(½δ) = δ. ▪

Fact. Still true if we replace 12 with 7.

So: scan points in y-order and compare the distance between each point and its next 11 neighbours. If any of these distances is less than δ, update δ.

Closest Pair of Points

Smallest-Dist(p₁, …, pₙ) {
   if n = 2 then return dist(p₁, p₂)

   Compute separation line L such that half the points        O(n log n)
   are on one side and half on the other side.

   δ₁ = Smallest-Dist(left half)                              2T(n/2)
   δ₂ = Smallest-Dist(right half)
   δ = min(δ₁, δ₂)

   Delete all points further than δ from separation line L.   O(n)
   Sort remaining points by y-coordinate.                     O(n log n)
   Scan points in y-order and compare distance between        O(n)
   each point and the next 11 neighbours; if any of these
   distances is less than δ, update δ.

   return δ
}

Closest Pair of Points

Closest-Pair(p₁, …, pₙ) {
   if n = 2 then return dist(p₁, p₂), p₁, p₂

   Compute separation line L such that half the points        O(n log n)
   are on one side and half on the other side.

   δ₁, p₁*, q₁* = Closest-Pair(left half)                     2T(n/2)
   δ₂, p₂*, q₂* = Closest-Pair(right half)
   δ, p, q = the better of the two solutions

   Delete all points further than δ from separation line L.   O(n)
   Sort remaining points by y-coordinate.                     O(n log n)
   Scan points in y-order and compare distance between        O(n)
   each point and the next 11 neighbours; if any of these
   distances is less than δ, update δ, p, q.

   return δ, p, q
}

Closest Pair of Points: Analysis

Running time. T(n) ≤ 2T(n/2) + O(n log n) ⇒ T(n) = O(n log² n).

Q. Can we achieve O(n log n)?
A. Yes. Sort all points by x coordinate once, before the algorithm starts, and don't sort the points in the strip from scratch each time: each recursive call returns its points sorted by y coordinate, and we sort by merging the two pre-sorted lists.
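The sketch below is a compact Python rendering of the O(n log² n) variant above (sorting the strip from scratch at each level). It returns only the distance, and the tuple point format and helper names are our own assumptions, not the slides'.

import math

def closest_pair(points):
    """Closest-pair distance; O(n log^2 n) variant (strip re-sorted each level)."""
    px = sorted(points)                       # sort once by x-coordinate
    def solve(px):
        n = len(px)
        if n <= 3:                            # brute-force tiny instances
            return min(math.dist(p, q)
                       for i, p in enumerate(px) for q in px[i+1:])
        mid = n // 2
        x_mid = px[mid][0]                    # vertical separation line L
        delta = min(solve(px[:mid]), solve(px[mid:]))
        strip = sorted((p for p in px if abs(p[0] - x_mid) < delta),
                       key=lambda p: p[1])    # 2*delta-strip by y-coordinate
        for i, p in enumerate(strip):
            for q in strip[i+1:i+12]:         # only the next 11 neighbours
                delta = min(delta, math.dist(p, q))
        return delta
    return solve(px)

pts = [(0, 0), (3, 4), (1, 1), (7, 2)]
print(closest_pair(pts))   # 1.414..., the pair (0,0)-(1,1)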

Matrix Multiplication

Matrix Multiplication

Matrix multiplication. Given two n-by-n matrices A and B, compute C = AB.

Brute force. Θ(n³) arithmetic operations.

Fundamental question. Can we improve upon brute force?

Matrix Multiplication: Warmup

Divide-and-conquer.
Divide: partition A and B into ½n-by-½n blocks.
Conquer: multiply 8 pairs of ½n-by-½n blocks recursively.
Combine: add appropriate products using 4 matrix additions.

C₁₁ = A₁₁B₁₁ + A₁₂B₂₁   C₁₂ = A₁₁B₁₂ + A₁₂B₂₂   C₂₁ = A₂₁B₁₁ + A₂₂B₂₁   C₂₂ = A₂₁B₁₂ + A₂₂B₂₂

T(n) = 8T(n/2) + Θ(n²) ⇒ T(n) = Θ(n³).

Matrix Multiplication: Key Idea

Key idea. Multiply 2-by-2 block matrices with only 7 multiplications:

P₁ = A₁₁(B₁₂ − B₂₂)   P₂ = (A₁₁ + A₁₂)B₂₂   P₃ = (A₂₁ + A₂₂)B₁₁   P₄ = A₂₂(B₂₁ − B₁₁)
P₅ = (A₁₁ + A₂₂)(B₁₁ + B₂₂)   P₆ = (A₁₂ − A₂₂)(B₂₁ + B₂₂)   P₇ = (A₁₁ − A₂₁)(B₁₁ + B₁₂)

C₁₁ = P₅ + P₄ − P₂ + P₆   C₁₂ = P₁ + P₂   C₂₁ = P₃ + P₄   C₂₂ = P₅ + P₁ − P₃ − P₇

7 multiplications. 18 = 10 + 8 additions (or subtractions).

Strassen: Recursion Tree

T(n) = 7T(n/2) + Θ(n²). Level k of the recursion tree has 7^k nodes, each costing (n/2^k)² work, so level k contributes 7^k (n²/4^k) = n² (7/4)^k. The geometric sum over the log₂ n levels is dominated by the bottom level: 7^{lg n} (n²/4^{lg n}) = 7^{lg n} = n^{lg 7}.

Fast Matrix Multiplication

Fast matrix multiplication. (Strassen, 1969)
Divide: partition A and B into ½n-by-½n blocks.
Compute: 14 ½n-by-½n matrices via 10 matrix additions.
Conquer: multiply 7 pairs of ½n-by-½n matrices recursively.
Combine: the 7 products into 4 terms using 8 more matrix additions (18 = 10 + 8 in total).

Analysis. Assume n is a power of 2, and let T(n) = # arithmetic operations.
T(n) = 7T(n/2) + Θ(n²) ⇒ T(n) = Θ(n^{lg 7}) = O(n^{2.81}).
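Below is a NumPy sketch of Strassen's recursion using the seven products P₁–P₇ from the key-idea slide. The cutoff value is an arbitrary assumption for the crossover to the classical algorithm, and n is assumed to be a power of 2; this is an illustration, not the slides' implementation.

import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen's algorithm for n-by-n matrices, n a power of 2."""
    n = A.shape[0]
    if n <= cutoff:                 # crossover to classical multiplication
        return A @ B
    m = n // 2
    A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    # 7 recursive products (10 additions/subtractions form the operands)
    P1 = strassen(A11, B12 - B22, cutoff)
    P2 = strassen(A11 + A12, B22, cutoff)
    P3 = strassen(A21 + A22, B11, cutoff)
    P4 = strassen(A22, B21 - B11, cutoff)
    P5 = strassen(A11 + A22, B11 + B22, cutoff)
    P6 = strassen(A12 - A22, B21 + B22, cutoff)
    P7 = strassen(A11 - A21, B11 + B12, cutoff)
    # combine into the 4 blocks of C (8 more additions/subtractions)
    C = np.empty_like(A)
    C[:m, :m] = P5 + P4 - P2 + P6
    C[:m, m:] = P1 + P2
    C[m:, :m] = P3 + P4
    C[m:, m:] = P5 + P1 - P3 - P7
    return C

A = np.random.rand(128, 128); B = np.random.rand(128, 128)
assert np.allclose(strassen(A, B), A @ B)   # agrees with classical product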

Fast Matrix Multiplication in Practice

Implementation issues. Sparsity. Caching effects. Numerical stability. Odd matrix dimensions. Crossover to the classical algorithm around n = 128.

Common misperception: "Strassen is only a theoretical curiosity." The Advanced Computation Group at Apple Computer reports an 8x speedup on the G4 Velocity Engine when n ≈ 2,500. The range of instances where it's useful is a subject of controversy.

Remark. Can "Strassenize" Ax = b, determinant, eigenvalues, and other matrix ops.

Fast Matrix Multiplication in Theory

Q. Multiply two 2-by-2 matrices with only 7 scalar multiplications?
A. Yes! [Strassen, 1969] Θ(n^{lg 7}) = O(n^{2.81}).

Q. Multiply two 2-by-2 matrices with only 6 scalar multiplications?
A. Impossible. [Hopcroft and Kerr, 1971] Θ(n^{lg 6}) = O(n^{2.59}).

Q. Two 3-by-3 matrices with only 21 scalar multiplications?
A. Also impossible. Θ(n^{log₃ 21}) = O(n^{2.77}).

Q. Two 70-by-70 matrices with only 143,640 scalar multiplications?
A. Yes! [Pan, 1980] Θ(n^{log₇₀ 143640}) = O(n^{2.80}).

Decimal wars. December 1979: O(n^{2.521813}). January 1980: O(n^{2.521801}).

Fast Matrix Multiplication in Theory

Best known. O(n^{2.376}) [Coppersmith-Winograd, 1987], which stood until 2010.
In 2010, Andrew Stothers improved the bound to O(n^{2.374}).
In 2011, Virginia Williams combined a mathematical shortcut from Stothers' paper with her own insights and automated optimization on computers, improving the bound to O(n^{2.3728642}).
In 2014, François Le Gall simplified the methods of Williams and obtained an improved bound of O(n^{2.3728639}).

Conjecture. O(n^{2+ε}) for any ε > 0.

Caveat. Theoretical improvements to Strassen are progressively less practical (the hidden constant gets worse).

CLRS 4.3 Master Theorem

Master Theorem from CLRS 4.3

T(n) = aT(n/b) + f(n)

a = (constant) number of sub-instances,
b = (constant) size ratio of sub-instances,
f(n) = time used for dividing and recombining.

Proof by Recursion Tree

T(n) = aT(n/b) + f(n). The root costs f(n); level k of the recursion tree has a^k nodes, each a sub-instance of size n/b^k, so level k costs a^k f(n/b^k). The tree has log_b n levels and a^{log_b n} = n^{log_b a} leaves. Summing over the levels:

T(n) = Σ_{k=0}^{log_b n} a^k f(n/b^k).

T(n) = aT(n/b) + f(n)

Case 1: f(n) ∈ O(n^L) for some constant L < log_b a.
Solution: T(n) ∈ Θ(n^{log_b a}).

Case 2: f(n) ∈ Θ(n^{log_b a} log^k n) for some k ≥ 0.
Solution: T(n) ∈ Θ(n^{log_b a} log^{k+1} n).

Case 3: f(n) ∈ Ω(n^L) for some constant L > log_b a, and f(n) satisfies the regularity condition af(n/b) ≤ cf(n) for some c < 1 and all large n.
Solution: T(n) ∈ Θ(f(n)).
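The three cases can be packaged as a small Python helper for driving functions of the shape f(n) = Θ(n^d log^k n), a simplification we chose; the general theorem allows broader f. The slide examples that follow are reproduced in the usage lines.

import math

def master(a, b, d, k=0):
    """T(n) = a*T(n/b) + Theta(n^d * log^k n): simplified Master Theorem."""
    crit = math.log(a, b)                         # critical exponent log_b a
    if math.isclose(d, crit):
        if k >= 0:
            return f"Theta(n^{d} log^{k+1} n)"    # Case 2: all levels balanced
        return "Master Theorem does not apply"    # e.g. f(n) = n^d / log n
    if d < crit:
        return f"Theta(n^{round(crit, 3)})"       # Case 1: leaves dominate
    # Case 3: root dominates; the regularity condition a*f(n/b) <= c*f(n)
    # holds automatically here, since a*(n/b)^d = (a/b^d)*n^d with a/b^d < 1.
    return f"Theta(n^{d} log^{k} n)" if k > 0 else f"Theta(n^{d})"

print(master(2, 2, 1))          # mergesort: Theta(n^1 log^1 n)
print(master(5, 2, 2))          # Case 1 example below: Theta(n^2.322)
print(master(27, 3, 3, k=1))    # Case 2 example below: Theta(n^3 log^2 n)
print(master(5, 2, 3))          # Case 3 example below: Theta(n^3)
print(master(27, 3, 3, k=-1))   # n^3/log n: Master Theorem does not apply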

Master Theorem

T(n) = aT(n/b) + f(n)

Case 1: f(n) ∈ O(n^L) for some constant L < log_b a.
Solution: T(n) ∈ Θ(n^{log_b a}).

Master Theorem

Case 1: f(n) ∈ O(n^L) for some constant L < log_b a.
Solution: T(n) ∈ Θ(n^{log_b a}).

Example. T(n) = 5T(n/2) + Θ(n²). Compare n^{log₂ 5} vs. n². Since 2 < log₂ 5, use Case 1.
Solution: T(n) ∈ Θ(n^{log₂ 5}).

Master Theorem

T(n) = aT(n/b) + f(n)

Simple Case 2: f(n) ∈ Θ(n^{log_b a}).
Solution: T(n) ∈ Θ(n^{log_b a} log n).

Master Theorem

T(n) = aT(n/b) + f(n)

Case 2: f(n) ∈ Θ(n^{log_b a} log^k n) for some k ≥ 0.
Solution: T(n) ∈ Θ(n^{log_b a} log^{k+1} n).

Master Theorem

Case 2: f(n) ∈ Θ(n^{log_b a} log^k n) for some k ≥ 0.
Solution: T(n) ∈ Θ(n^{log_b a} log^{k+1} n).

Example. T(n) = 27T(n/3) + Θ(n³ log n). Compare n^{log₃ 27} vs. n³. Since 3 = log₃ 27, use Case 2 with k = 1.
Solution: T(n) ∈ Θ(n³ log² n).

Master Theorem

T(n) = aT(n/b) + f(n)

Case 3: f(n) ∈ Ω(n^L) for some constant L > log_b a, and f(n) satisfies the regularity condition af(n/b) ≤ cf(n) for some c < 1 and all large n.
Solution: T(n) ∈ Θ(f(n)).

Master Theorem

Case 3 example. T(n) = 5T(n/2) + Θ(n³). Compare n^{log₂ 5} vs. n³. Since 3 > log₂ 5, use Case 3.
Regularity: af(n/b) = 5(n/2)³ = (5/8)n³ ≤ cn³ for c = 5/8 < 1.
Solution: T(n) ∈ Θ(n³).

Master Theorem

Example. T(n) = 27T(n/3) + Θ(n³/log n). Compare n^{log₃ 27} vs. n³. Since 3 = log₃ 27, Case 2 looks tempting, but n³/log n is not Θ(n³ log^k n) for any k ≥ 0. Cannot use the Master Theorem.

Beyond the Master Theorem

Median Finding

Median Finding

Median finding. Given n distinct numbers a₁, …, aₙ, find i such that
|{ j : aⱼ < aᵢ }| = (n−1)/2 and |{ j : aⱼ > aᵢ }| = (n−1)/2.

Example. 22 31 44 7 12 19 20 35 3 40 27; sorted: 3 7 12 19 20 22 27 31 35 40 44. The median is 22.

Selection

Selection. Given n distinct numbers a₁, …, aₙ and an index k, find i such that
|{ j : aⱼ < aᵢ }| = k − 1 and |{ j : aⱼ > aᵢ }| = n − k.

Example. For the array above and k = 4, the answer is 19, the 4th smallest element.

Median(a₁, …, aₙ) = Selection(a₁, …, aₙ, (n+1)/2).

Partition (from QuickSort)

Algorithm partition(A, start, stop)
Input: an array A, indices start and stop.
Output: returns an index j and rearranges the elements of A such that for all i < j, A[i] ≤ A[j], and for all k > j, A[k] ≥ A[j].

pivot ← A[stop]
left ← start
right ← stop − 1
while left ≤ right do
    while left ≤ right and A[left] ≤ pivot do
        left ← left + 1
    while left ≤ right and A[right] ≥ pivot do
        right ← right − 1
    if left < right then
        exchange A[left] ↔ A[right]
exchange A[stop] ↔ A[left]
return left

Partition

Example of execution of partition, pivot = 5:

A = [ 6 3 7 3 2 5 7 5 ]
A = [ 6 3 7 3 2 5 7 5 ]   swap 6, 2
A = [ 2 3 7 3 6 5 7 5 ]
A = [ 2 3 7 3 6 5 7 5 ]   swap 7, 3
A = [ 2 3 3 7 6 5 7 5 ]
A = [ 2 3 3 7 6 5 7 5 ]   swap 7, pivot
A = [ 2 3 3 5 6 5 7 7 ]   elements ≤ 5 | pivot | elements ≥ 5
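For reference, a runnable Python translation of the partition pseudocode above (our own rendering); on the slide's example it produces exactly the final array shown.

def partition(A, start, stop):
    """Rearrange A[start..stop] around pivot A[stop]; return the pivot's index."""
    pivot = A[stop]
    left, right = start, stop - 1
    while left <= right:
        while left <= right and A[left] <= pivot:    # scan right past small keys
            left += 1
        while left <= right and A[right] >= pivot:   # scan left past large keys
            right -= 1
        if left < right:
            A[left], A[right] = A[right], A[left]
    A[stop], A[left] = A[left], A[stop]              # put the pivot in its slot
    return left

A = [6, 3, 7, 3, 2, 5, 7, 5]
j = partition(A, 0, len(A) - 1)
print(A, j)   # [2, 3, 3, 5, 6, 5, 7, 7] 3  (<=5 on the left, >=5 on the right)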

Selection from Median

Selection. Given n distinct numbers a₁, …, aₙ and an index k, find i such that
|{ j : aⱼ < aᵢ }| = k − 1 and |{ j : aⱼ > aᵢ }| = n − k.

Find the median M and partition around it.
If k = (n+1)/2, return the median.
If k < (n+1)/2, return Selection(a₁, …, a_{(n−1)/2}, k) on the elements below M.
If k > (n+1)/2, return Selection(a_{(n+3)/2}, …, aₙ, k − (n+1)/2) on the elements above M.

Selection

Selection. Given n distinct numbers a₁, …, aₙ and an index k, find i such that
|{ j : aⱼ < aᵢ }| = k − 1 and |{ j : aⱼ > aᵢ }| = n − k.

Partition around a pseudo-median; let m be the pivot's position after partitioning.
If k = m, return the pivot.
If k < m, return Selection(a₁, …, a_{m−1}, k).
If k > m, return Selection(a_{m+1}, …, aₙ, k − m).

Selection

Selection. Given n distinct numbers a₁, …, aₙ and an index k, find i such that
|{ j : aⱼ < aᵢ }| = k − 1 and |{ j : aⱼ > aᵢ }| = n − k.

Choosing the pseudo-median as the median of the ⌈n/5⌉ medians of groups of 5 guarantees that at least 3n/10 elements lie below it and at least 3n/10 lie above it, so each recursive call is on at most 7n/10 elements.

Selection

Selection. Given n distinct numbers a₁, …, aₙ and an index k, find i such that
|{ j : aⱼ < aᵢ }| = k − 1 and |{ j : aⱼ > aᵢ }| = n − k.

T(n) ≤ T(⌈n/5⌉) + max{ T(⌊3n/10⌋), T(⌊7n/10⌋) } + Θ(n) = T(⌈n/5⌉) + T(⌊7n/10⌋) + Θ(n).

Solution: T(n) ∈ Θ(n).

Selection

T(n) ≤ T(⌈n/5⌉) + T(⌊7n/10⌋) + Θ(n). Solution: T(n) ∈ Θ(n).

Pf. Suppose T(i) ≤ d·i for 1 ≤ i ≤ n, and suppose the Θ(n) term is at most cn. Then
T(n+1) ≤ T((n+1)/5) + T(7(n+1)/10) + c(n+1)
       ≤ d(n+1)/5 + 7d(n+1)/10 + c(n+1)
       = ((2d + 7d + 10c)/10)(n+1)
       = ((9d + 10c)/10)(n+1)
       ≤ d(n+1), as long as (9d + 10c)/10 ≤ d, or equivalently d ≥ 10c.

Selection

Example with d = 10c. Assuming T(i) ≤ 10c·i for 1 ≤ i ≤ n and the Θ(n) term is at most cn:
T(n+1) ≤ T((n+1)/5) + T(7(n+1)/10) + c(n+1)
       ≤ (10c/5)(n+1) + 7·(10c/10)(n+1) + c(n+1)
       = (2c + 7c + c)(n+1) = 10c(n+1).
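Putting the pieces together, here is a Python sketch of linear-time selection with a median-of-medians pivot. It uses list-comprehension partitioning instead of the in-place routine above, for brevity, and assumes distinct numbers as the slides do; it is an illustration, not the slides' code.

def select(a, k):
    """Return the k-th smallest (k >= 1) of distinct numbers a.
    Worst-case O(n) via the median-of-medians pivot."""
    if len(a) <= 5:
        return sorted(a)[k - 1]
    # pseudo-median: median of the medians of groups of 5      -- T(n/5)
    groups = [sorted(a[i:i+5]) for i in range(0, len(a), 5)]
    medians = [g[len(g) // 2] for g in groups]
    m = select(medians, (len(medians) + 1) // 2)
    # partition around m; each side has at most 7n/10 elements -- T(7n/10)
    lo = [x for x in a if x < m]
    hi = [x for x in a if x > m]
    if k <= len(lo):
        return select(lo, k)
    if k == len(lo) + 1:
        return m
    return select(hi, k - len(lo) - 1)

a = [22, 31, 44, 7, 12, 19, 20, 35, 3, 40, 27]
print(select(a, (len(a) + 1) // 2))   # median of the slides' example: 22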
