Divide-and-Conquer. Reading: CLRS Sections 2.3, 4.1, 4.2, 4.3, 28.2, 33.4. CSE 6331 Algorithms, Steve Lai.

Divide and Conquer. Given an instance x of a problem, the divide-and-conquer method works as follows:

function DAC(x)
  if x is sufficiently small then solve it directly
  else
    divide x into smaller subinstances x1, x2, ..., xk;
    yi ← DAC(xi), for 1 ≤ i ≤ k;
    y ← combine(y1, y2, ..., yk);
  return(y)
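
In Python, the skeleton might be sketched like this (the helper names is_small, solve_directly, divide, and combine are placeholders of mine, not part of the slides):

def dac(x, is_small, solve_directly, divide, combine):
    # Generic divide-and-conquer skeleton from the slide above.
    if is_small(x):
        return solve_directly(x)               # solve small instances directly
    subinstances = divide(x)                   # x1, x2, ..., xk
    ys = [dac(xi, is_small, solve_directly, divide, combine) for xi in subinstances]
    return combine(ys)                         # y <- combine(y1, ..., yk)

# Example use: summing a list by splitting it in half.
print(dac(list(range(10)),
          is_small=lambda x: len(x) <= 1,
          solve_directly=lambda x: x[0] if x else 0,
          divide=lambda x: (x[:len(x)//2], x[len(x)//2:]),
          combine=sum))                        # prints 45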

Analysis of Divide-and-Conquer. Typically, x1, ..., xk are of the same size, say n/b. In that case, the time complexity of DAC, T(n), satisfies a recurrence:

T(n) = c                if n ≤ n0
T(n) = k·T(n/b) + f(n)  if n > n0

where f(n) is the running time of dividing x and combining the yi's. What is c? What is n0?

Mergesort: sort an array A[1..n].

procedure mergesort(A[i..j])       // Sort A[i..j] //
  if i = j then return             // base case //
  m ← ⌊(i+j)/2⌋
  mergesort(A[i..m])               // divide and conquer //
  mergesort(A[m+1..j])
  merge(A[i..m], A[m+1..j])

Initial call: mergesort(A[1..n])
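
For reference, a hedged Python translation of the array version (0-based, half-open slices instead of the slides' 1-based A[i..j]):

def merge(left, right):
    # Merge two sorted lists into one sorted list in Theta(len(left) + len(right)) time.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:]); out.extend(right[j:])
    return out

def mergesort(a):
    # T(n) = T(ceil(n/2)) + T(floor(n/2)) + Theta(n), i.e., Theta(n log n) overall.
    if len(a) <= 1:                            # base case
        return a
    m = len(a) // 2                            # divide
    return merge(mergesort(a[:m]), mergesort(a[m:]))   # conquer and combine

print(mergesort([5, 2, 4, 7, 1, 3, 2, 6]))     # [1, 2, 2, 3, 4, 5, 6, 7]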

Analysis of Mergesort. Let T(n) denote the running time of mergesorting an array of size n. T(n) satisfies the recurrence:

T(n) = c                             if n ≤ 1
T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + Θ(n)    if n > 1

Solving the recurrence yields T(n) = Θ(n log n). We will learn how to solve such recurrences.

Linked-List Version of Mergesort.

function mergesort(i, j)     // Sort A[i..j]. Initially, link[k] = 0, 1 ≤ k ≤ n. //
  global A[1..n], link[1..n]
  if i = j then return(i)    // base case //
  m ← ⌊(i+j)/2⌋
  ptr1 ← mergesort(i, m)
  ptr2 ← mergesort(m+1, j)   // divide and conquer //
  ptr ← merge(ptr1, ptr2)
  return(ptr)
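
A Python sketch of the linked-list variant, keeping the slides' idea of a global link array of successor indices so that merging moves no array elements (index 0 serves as the null pointer; the variable names are my own):

def mergesort_linked(A, link, i, j):
    # Sort A[i..j] (1-based, inclusive). Returns the index of the smallest element;
    # link[k] is the index of the next element, 0 marks the end of the list.
    if i == j:                                 # base case: one-element list
        return i
    m = (i + j) // 2                           # divide and conquer
    p1 = mergesort_linked(A, link, i, m)
    p2 = mergesort_linked(A, link, m + 1, j)
    return merge_linked(A, link, p1, p2)

def merge_linked(A, link, p1, p2):
    # Merge two sorted linked lists given by their head indices p1 and p2.
    tail = 0                                   # slot 0 is a dummy head
    while p1 != 0 and p2 != 0:
        if A[p1] <= A[p2]:
            link[tail] = p1; tail, p1 = p1, link[p1]
        else:
            link[tail] = p2; tail, p2 = p2, link[p2]
    link[tail] = p1 if p1 != 0 else p2
    return link[0]

A = [None, 5, 2, 4, 7, 1, 3]                   # A[1..6]; A[0] is unused
link = [0] * len(A)                            # link[k] = 0 initially
head = mergesort_linked(A, link, 1, 6)
out = []
while head != 0:
    out.append(A[head]); head = link[head]
print(out)                                     # [1, 2, 3, 4, 5, 7]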

Solving Recurrences. Suppose a function T(n) satisfies the recurrence

T(n) = c              if n ≤ 1
T(n) = 3·T(n/4) + n   if n > 1

where c is a positive constant. We wish to obtain a function g(n) such that T(n) = Θ(g(n)). We will solve it using various methods: Iteration Method, Recurrence Tree, Guess and Prove, and Master Method.

Iteration Method. Assume n is a power of 4, say n = 4^m. Then

T(n) = n + 3·T(n/4)
     = n + 3[n/4 + 3·T(n/16)]
     = n + (3/4)n + 9[n/16 + 3·T(n/64)]
     = n + (3/4)n + (3/4)^2·n + 3^3·T(n/4^3)
     = ...
     = n[1 + 3/4 + (3/4)^2 + ... + (3/4)^(m-1)] + 3^m·T(n/4^m)
     = n·Θ(1) + O(n)        [the geometric series is Θ(1), and 3^m = n^(log_4 3) = O(n)]
     = Θ(n)

So T(n) = Θ(n) for n a power of 4, and hence T(n) = Θ(n). (Why?)
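
A quick numeric sanity check of the closed form (a sketch; the base-case constant c is arbitrarily set to 1): the ratio T(n)/n should approach a constant as n runs over powers of 4.

def T(n, c=1):
    # The recurrence T(n) = c if n <= 1, else 3*T(n/4) + n, evaluated exactly.
    return c if n <= 1 else 3 * T(n // 4, c) + n

for m in range(1, 11):
    n = 4 ** m
    print(n, T(n) / n)                         # the ratio tends to 4, so T(n) = Theta(n)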

Remark. We have applied Theorem 7 to conclude T(n) = Θ(n) from "T(n) = Θ(n) for n a power of 4." In order to apply Theorem 7, T(n) needs to be nondecreasing. It will be a homework question for you to prove that T(n) is indeed nondecreasing.

Recurrence Tree.

  solving problems                     time needed
  1 of size n                          n
  3 of size n/4                        3·(n/4)
  3^2 of size n/4^2                    3^2·(n/4^2)
  3^3 of size n/4^3                    3^3·(n/4^3)
  ...                                  ...
  3^(m-1) of size n/4^(m-1)            3^(m-1)·(n/4^(m-1))
  3^m of size n/4^m = 1                3^m·Θ(1)

Guess and Prove. Solve

T(n) = c                 if n ≤ 1
T(n) = 3·T(n/4) + Θ(n)   if n > 1

First, guess T(n) = Θ(n), and then try to prove it. It is sufficient to consider n = 4^m, m = 0, 1, 2, .... We need to prove: c1·4^m ≤ T(4^m) ≤ c2·4^m for some c1, c2 and all m ≥ m0, for some m0. We choose m0 = 0 and prove it by induction on m.

IB: When m = 0, c1·4^0 ≤ T(4^0) ≤ c2·4^0 holds if c1 ≤ c ≤ c2.

IH: Assume c1·4^(m-1) ≤ T(4^(m-1)) ≤ c2·4^(m-1).

IS: T(4^m) = 3·T(4^(m-1)) + Θ(4^m)
           ≤ 3·c2·4^(m-1) + c'·4^m    for some constant c'
           = (3c2/4 + c')·4^m
           ≤ c2·4^m                   if 3c2/4 + c' ≤ c2, i.e., c2 ≥ 4c'

    T(4^m) = 3·T(4^(m-1)) + Θ(4^m)
           ≥ 3·c1·4^(m-1) + c''·4^m   for some constant c''
           = (3c1/4 + c'')·4^m
           ≥ c1·4^m                   if 3c1/4 + c'' ≥ c1, i.e., c1 ≤ 4c''

Let c1, c2 be such that c1 ≤ c ≤ c2, c1 ≤ 4c'', and c2 ≥ 4c'. Then c1·4^m ≤ T(4^m) ≤ c2·4^m for all m ≥ 0.

The Master Theorem. Definition: f(n) is polynomially smaller than g(n), denoted f(n) ≺ g(n), iff f(n) = O(g(n)/n^ε), or equivalently f(n)·n^ε = O(g(n)), for some ε > 0.

For example, 1 ≺ n^0.99 ≺ n ≺ n^2. Is 1 ≺ log n? Is n ≺ n log n? To answer these, ask yourself whether or not n^ε = O(log n).

For convenience, write f(n) ≍ g(n) iff f(n) = Θ(g(n)). Note: the notations ≺ and ≍ are good only for this class.

The Master Theorem. If T(n) satisfies the recurrence T(n) = a·T(n/b) + f(n), then T(n) is bounded asymptotically as follows.

1. If f(n) ≺ n^(log_b a), then T(n) = Θ(n^(log_b a)).
2. If f(n) ≻ n^(log_b a), then T(n) = Θ(f(n)).
3. If f(n) ≍ n^(log_b a), then T(n) = Θ(f(n)·log n).
4. If f(n) ≍ n^(log_b a)·log^k n, then T(n) = Θ(f(n)·log n).

In case 2, it is required that a·f(n/b) ≤ c·f(n) for some c < 1, which is satisfied by most f(n) that we shall encounter. In the theorem, n/b should be interpreted as ⌊n/b⌋ or ⌈n/b⌉.
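
A small Python helper that applies this version of the theorem when f(n) = n^d · (log n)^k, which covers all of the examples on the next slide (a sketch; the function name and interface are my own):

import math

def master(a, b, d, k=0):
    # T(n) = a*T(n/b) + f(n) with f(n) = n^d * (log n)^k.
    # Returns the asymptotic bound as a string, following the four cases above.
    e = math.log(a, b)                         # the critical exponent log_b a
    if math.isclose(d, e):                     # cases 3 (k = 0) and 4 (k > 0)
        return f"Theta(n^{d} log^{k + 1} n)"
    if d < e:                                  # case 1: f(n) polynomially smaller
        return f"Theta(n^{round(e, 3)})"
    # case 2: f(n) polynomially larger; regularity a*f(n/b) <= c*f(n), c < 1, assumed
    return f"Theta(n^{d} log^{k} n)" if k else f"Theta(n^{d})"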

Examples: solve these recurrences (the sketch after this list gives the Master Theorem answers).

T(n) = 3·T(n/4) + n
T(n) = 9·T(n/3) + n
T(n) = T(2n/3) + 1
T(n) = 3·T(n/4) + n log n
T(n) = 7·T(n/2) + Θ(n^2)
T(n) = 2·T(n/2) + n log n
T(n) = T(n/3) + T(2n/3) + n
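
Using the helper from the previous slide on these recurrences (the last one has subproblems of unequal size, so the Master Theorem does not apply):

print(master(3, 4, 1))          # 3T(n/4) + n          -> Theta(n)
print(master(9, 3, 1))          # 9T(n/3) + n          -> Theta(n^2)
print(master(1, 1.5, 0))        # T(2n/3) + 1          -> Theta(log n)
print(master(3, 4, 1, k=1))     # 3T(n/4) + n log n    -> Theta(n log n)
print(master(7, 2, 2))          # 7T(n/2) + Theta(n^2) -> Theta(n^2.807) = Theta(n^(log 7))
print(master(2, 2, 1, k=1))     # 2T(n/2) + n log n    -> Theta(n log^2 n), by case 4
# T(n) = T(n/3) + T(2n/3) + n is not of the form a*T(n/b) + f(n); a recurrence tree of
# depth Theta(log n) with O(n) work per level gives T(n) = Theta(n log n).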

Recurrence tree for T(n) = a·T(n/b) + f(n).

  solving problems                            time needed
  1 of size n                                 f(n)
  a of size n/b                               a·f(n/b)
  a^2 of size n/b^2                           a^2·f(n/b^2)
  ...                                         ...
  a^(log_b n - 1) of size n/b^(log_b n - 1)   a^(log_b n - 1)·f(n/b^(log_b n - 1))
  n^(log_b a) of size 1                       n^(log_b a)·Θ(1)

T(n) = Σ_{i=0}^{log_b n - 1} a^i·f(n/b^i) + n^(log_b a)

(Note: a^(log_b n) = n^(log_b a), since log_b n = log_a n · log_b a.)

Suppose f(n) = Θ(n^(log_b a)). Then f(n/b^i) = Θ((n/b^i)^(log_b a)) = Θ(n^(log_b a)/a^i), and thus a^i·f(n/b^i) = Θ(n^(log_b a)). Then we have

T(n) = Σ_{i=0}^{log_b n - 1} a^i·f(n/b^i) + n^(log_b a)     (from the previous slide)
     = Σ_{i=0}^{log_b n - 1} Θ(n^(log_b a)) + n^(log_b a)
     = Θ(n^(log_b a)·log n)
     = Θ(f(n)·log n)

When recurrences involve roots. Solve

T(n) = 2·T(√n) + log n   if n > 2
T(n) = c                 otherwise

It suffices to consider only powers of 2. Let n = 2^m and define a new function S(m) = T(2^m) = T(n). The above recurrence translates to

S(m) = 2·S(m/2) + m   if m > 1
S(m) = c              otherwise

By the Master Theorem, S(m) = Θ(m log m). So T(n) = Θ(log n · log log n).
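
A quick check of this substitution (a sketch; c = 1, and n is restricted to the tower values 2^(2^k) so that repeated square roots stay integral):

import math

def T(n, c=1):
    # T(n) = 2*T(sqrt(n)) + log n for n > 2, else c.
    return c if n <= 2 else 2 * T(math.isqrt(n), c) + math.log2(n)

for k in range(1, 6):
    n = 2 ** (2 ** k)
    print(n, T(n) / (math.log2(n) * math.log2(math.log2(n))))
    # the ratio decreases toward 1, consistent with T(n) = Theta(log n * log log n)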

Strassen's Algorithm for Matrix Multiplication. Problem: Compute C = AB, given n×n matrices A and B. The straightforward method requires Θ(n^3) time, using the formula c_ij = Σ_{k=1}^{n} a_ik·b_kj. Toward the end of the 1960s, Strassen showed how to multiply matrices in O(n^(log 7)) = O(n^2.81) time. For n = 100, n^2.81 ≈ 416,869 while n^3 = 1,000,000. The time complexity was reduced to O(n^2.521813) in 1979, to O(n^2.521801) in 1980, and to O(n^2.376) in 1986. In the following discussion, n is assumed to be a power of 2.

Write

A = [ A11 A12 ]    B = [ B11 B12 ]    C = [ C11 C12 ]
    [ A21 A22 ]        [ B21 B22 ]        [ C21 C22 ]

where each Aij, Bij, Cij is an n/2 × n/2 matrix. Then

[ C11 C12 ]   [ A11 A12 ] [ B11 B12 ]
[ C21 C22 ] = [ A21 A22 ] [ B21 B22 ]

If we compute Cij = Ai1·B1j + Ai2·B2j, the running time T(n) will satisfy the recurrence T(n) = 8·T(n/2) + Θ(n^2), so T(n) will be Θ(n^3), not better than the straightforward method. (It is good for parallel processing, though. What's the running time using Θ(n^3) processors?)

Strassen showed

[ C11 C12 ]   [ M2 + M3             M1 + M2 + M5 + M6 ]
[ C21 C22 ] = [ M1 + M2 + M4 - M7   M1 + M2 + M4 + M5 ]

where

M1 = (A21 + A22 - A11)·(B22 - B12 + B11)
M2 = A11·B11
M3 = A12·B21
M4 = (A11 - A21)·(B22 - B12)
M5 = (A21 + A22)·(B12 - B11)
M6 = (A12 - A21 + A11 - A22)·B22
M7 = A22·(B11 + B22 - B12 - B21)

T(n) = 7·T(n/2) + Θ(n^2), so T(n) = Θ(n^(log 7)).
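
A NumPy sketch of the scheme using exactly the seven products above (it assumes, as the slides do, that n is a power of 2, and falls back to the straightforward product for small blocks):

import numpy as np

def strassen(A, B, cutoff=64):
    # Multiply n x n matrices with the 7-product scheme; n must be a power of 2.
    n = A.shape[0]
    if n <= cutoff:                            # small blocks: straightforward product
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A21 + A22 - A11, B22 - B12 + B11, cutoff)
    M2 = strassen(A11, B11, cutoff)
    M3 = strassen(A12, B21, cutoff)
    M4 = strassen(A11 - A21, B22 - B12, cutoff)
    M5 = strassen(A21 + A22, B12 - B11, cutoff)
    M6 = strassen(A12 - A21 + A11 - A22, B22, cutoff)
    M7 = strassen(A22, B11 + B22 - B12 - B21, cutoff)
    C11 = M2 + M3                              # assemble C from the M's as above
    C12 = M1 + M2 + M5 + M6
    C21 = M1 + M2 + M4 - M7
    C22 = M1 + M2 + M4 + M5
    return np.block([[C11, C12], [C21, C22]])

A = np.random.rand(128, 128); B = np.random.rand(128, 128)
print(np.allclose(strassen(A, B), A @ B))      # True; 7 recursive calls give Theta(n^(log 7))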

The Closest Pair Problem. Problem Statement: Given a set of n points in the plane, A = { (x_i, y_i) : 1 ≤ i ≤ n }, find two points in A whose distance is smallest among all pairs. Straightforward method: Θ(n^2). Divide and conquer: O(n log n).

The Divide-and-Conquer Approach.
1. Partition A into two sets: A = B ∪ C.
2. Find a closest pair (p1, q1) in B.
3. Find a closest pair (p2, q2) in C.
4. Let δ = min{ dist(p1, q1), dist(p2, q2) }.
5. Find a closest pair (p3, q3) between B and C with distance less than δ, if such a pair exists.
6. Return the closest of the three pairs.
Question: What would be the running time? Desired: T(n) = 2·T(n/2) + O(n), so that T(n) = O(n log n).

Now let's see how to implement each step. Steps 2, 3, 4, and 6 are trivial.

Step 1: Partition A into two sets: A = B ∪ C. A natural choice is to draw a vertical line to divide the points into two groups. So, sort A[1..n] by x-coordinate. (Do this only once.) Then we can easily partition any set A[i..j] by A[i..j] = A[i..m] ∪ A[m+1..j], where m = ⌊(i+j)/2⌋.

Step 5: Find a closest pair between B and C with distance less than δ, if one exists. We will write a procedure Closest-Pair-Between-Two-Sets(A[i..j], ptr, δ, (p3, q3)) which finds a closest pair between A[i..m] and A[m+1..j] with distance less than δ, if one exists. The running time of this procedure must be no more than O(|A[i..j]|) in order for the final algorithm to be O(n log n).

Data Structures. Let the coordinates of the n points be stored in X[1..n] and Y[1..n]. For simplicity, let A[i] = (X[i], Y[i]). For convenience, introduce two dummy points A[0] and A[n+1], placed farther than δ from everything (e.g., at (-∞, -∞) and (+∞, +∞)). We will use these two points to indicate "no pair" or "no pair closer than δ." Introduce an array Link[1..n], initialized to all 0's.

Main Program. Global variable: A[0..n+1]. Sort A[1..n] such that X[1] ≤ X[2] ≤ ... ≤ X[n]; that is, sort the given n points by x-coordinate. Call procedure Closest-Pair with appropriate parameters.

Procedure Closest-Pair(A[i..j], (p, q))    // Version 1 //
{* returns a closest pair (p, q) in A[i..j] *}
If j - i = 0: (p, q) ← (0, n+1)
If j - i = 1: (p, q) ← (i, j)
If j - i > 1:
  m ← ⌊(i+j)/2⌋
  Closest-Pair(A[i..m], (p1, q1))
  Closest-Pair(A[m+1..j], (p2, q2))
  ptr ← mergesort A[i..j] by y-coordinate into a linked list
  δ ← min{ dist(p1, q1), dist(p2, q2) }
  Closest-Pair-Between-Two-Sets(A[i..j], ptr, δ, (p3, q3))
  (p, q) ← closest of the three pairs (p1, q1), (p2, q2), (p3, q3)

Time Complexity of Version 1. Initial call: Closest-Pair(A[1..n], (p, q)). Assume Closest-Pair-Between-Two-Sets needs Θ(n) time. Let T(n) denote the worst-case running time of Closest-Pair(A[1..n], (p, q)). Then T(n) = 2·T(n/2) + Θ(n log n), so T(n) = Θ(n log^2 n). Not as good as desired.

How to reduce the time complexity to O(n log n)? Suppose we use Mergesort to sort A[i..j]: ptr ← sort A[i..j] by y-coordinate into a linked list. Rewrite the procedure as version 2. We only have to sort the base cases and then perform "merge": we take a free ride on Closest-Pair for the dividing. That is, we combine Mergesort with Closest-Pair.

Procedure Closest-Pair(A[i..j], (p, q))    // Version 2 //
If j - i = 0: (p, q) ← (0, n+1)
If j - i = 1: (p, q) ← (i, j)
If j - i > 1:
  m ← ⌊(i+j)/2⌋
  Closest-Pair(A[i..m], (p1, q1))
  Closest-Pair(A[m+1..j], (p2, q2))
  ptr1 ← Mergesort(A[i..m])
  ptr2 ← Mergesort(A[m+1..j])
  ptr ← Merge(ptr1, ptr2)      // mergesort A[i..j] //
  (the rest is the same as in version 1)

Procedure Closest-Pair(A[i..j], (p, q), ptr)    // final version //
{* mergesort A[i..j] by y and find a closest pair (p, q) in A[i..j] *}
if j - i = 0: (p, q) ← (0, n+1); ptr ← i
if j - i = 1: (p, q) ← (i, j);
  if Y[i] ≤ Y[j] then { ptr ← i; Link[i] ← j } else { ptr ← j; Link[j] ← i }
if j - i > 1:
  m ← ⌊(i+j)/2⌋
  Closest-Pair(A[i..m], (p1, q1), ptr1)
  Closest-Pair(A[m+1..j], (p2, q2), ptr2)
  ptr ← Merge(ptr1, ptr2)
  (the rest is the same as in version 1)

Time Complexity of the final version. Initial call: Closest-Pair(A[1..n], (p, q), ptr). Assume Closest-Pair-Between-Two-Sets needs Θ(n) time. Let T(n) denote the worst-case running time of Closest-Pair(A[1..n], (p, q), ptr). Then T(n) = 2·T(n/2) + Θ(n), so T(n) = Θ(n log n). Now it remains to write the procedure Closest-Pair-Between-Two-Sets(A[i..j], ptr, δ, (p3, q3)).

Closest-Pair-Between-Two-Sets. Input: A[i..j], ptr, δ. Output: a closest pair (p, q) between B = A[i..m] and C = A[m+1..j] with distance < δ, where m = ⌊(i+j)/2⌋. If there is no such pair, return the dummy pair (0, n+1). Desired time complexity: O(|A[i..j]|). For each point b ∈ B, we will compute dist(c, b) for only O(1) points c ∈ C, and similarly for each point c ∈ C. Recall that A[i..j] has been sorted by y. We will follow the sorted linked list and look at each point.

Closest-Pair-Between-Two-Sets. Let L0 be the vertical line passing through the point A[m], and let L1 and L2 be the vertical lines to the left and right of L0 by δ. We observe that we only need to consider those points between L1 and L2. For each point k in A[i..m], we only need to consider the points in A[m+1..j] that are inside a δ × δ square; there are at most three such points, and they are among the three most recently visited points of A[m+1..j] lying between L0 and L2. A similar argument applies to each point k in A[m+1..j].

[Figure: the vertical lines L1, L0, L2, spaced δ apart, with a point k. Red points are apart by at least δ; blue points are apart by at least δ. For k, we only need to consider the points in the δ × δ square, excluding its right and bottom edges.]

Closest-Pair-Between-Two-Sets(A[i..j], ptr, δ, (p3, q3))
// Find the closest pair between A[i..m] and A[m+1..j] with dist < δ.
// If there exists no such pair, then return the dummy pair (0, n+1). //
global X[0..n+1], Y[0..n+1], Link[1..n]
(p3, q3) ← (0, n+1)
b1, b2, b3 ← 0       // the three most recently visited points between L1 and L0 //
c1, c2, c3 ← n+1     // the three most recently visited points between L0 and L2 //
m ← ⌊(i+j)/2⌋
k ← ptr

while k ≠ 0 do                               // follow the linked list until the end //
  1. if |X[k] - X[m]| < δ then               // consider only points between L1 and L2 //
       if k ≤ m then                         // point k is to the left of L0 //
         compute d ← min{ dist(k, c_i) : 1 ≤ i ≤ 3 }
         if d < δ then update δ and (p3, q3)
         b3 ← b2; b2 ← b1; b1 ← k
       else                                  // point k is to the right of L0 //
         compute d ← min{ dist(k, b_i) : 1 ≤ i ≤ 3 }
         if d < δ then update δ and (p3, q3)
         c3 ← c2; c2 ← c1; c1 ← k
  2. k ← Link[k]
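
The procedure above threads the y-sorted linked list and keeps only three candidate points per side. As a compact illustration of the same divide-and-conquer idea, here is a hedged Python sketch; instead of the Link array and the b_i/c_i registers, it returns y-sorted lists from the recursive calls (the "free ride" merge) and checks each strip point against a constant number of its successors. All names are my own.

import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def merge_by_y(a, b):
    # Merge two lists already sorted by y-coordinate.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i][1] <= b[j][1]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def closest_pair(points):
    # Returns ((p, q), distance) for the closest pair; O(n log n) overall.
    best = [float("inf"), None]                # current delta and the best pair found

    def rec(px):                               # px: points sorted by x
        n = len(px)
        if n <= 3:                             # base case: brute force
            for i in range(n):
                for j in range(i + 1, n):
                    d = dist(px[i], px[j])
                    if d < best[0]:
                        best[0], best[1] = d, (px[i], px[j])
            return sorted(px, key=lambda p: p[1])
        m = n // 2
        x_mid = px[m - 1][0]                   # the vertical dividing line L0
        left = rec(px[:m])                     # B, returned sorted by y
        right = rec(px[m:])                    # C, returned sorted by y
        merged = merge_by_y(left, right)       # free ride: sort by y while combining
        strip = [p for p in merged if abs(p[0] - x_mid) < best[0]]
        for i, p in enumerate(strip):          # points within delta of L0, by y
            for q in strip[i + 1 : i + 8]:     # a constant number of successors suffices
                if q[1] - p[1] >= best[0]:
                    break
                d = dist(p, q)
                if d < best[0]:
                    best[0], best[1] = d, (p, q)
        return merged

    rec(sorted(points))                        # sort by x once (step 1)
    return best[1], best[0]

pts = [(2, 3), (12, 30), (40, 50), (5, 1), (12, 10), (3, 4)]
print(closest_pair(pts))                       # (((2, 3), (3, 4)), 1.414...)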

Convex Hull. Problem Statement: Given a set of n points in the plane, say A = { p1, p2, p3, ..., pn }, we want to find the convex hull of A. The convex hull of A, denoted by CH(A), is the smallest convex polygon that encloses all points of A. Observation: segment p_i p_j is an edge of CH(A) iff all other points of A are on the same side of the line p_i p_j (or on p_i p_j). Straightforward method: Ω(n^2). Divide and conquer: O(n log n).

Divide-and-Conquer for Convex Hull.
0. Assume all x-coordinates are different and no three points are collinear. (These assumptions will be removed later.)
1. Let A be sorted by x-coordinate.
2. If |A| ≤ 3, solve the problem directly. Otherwise, apply divide-and-conquer as follows.
3. Break up A into A = B ∪ C.
4. Find the convex hull of B.
5. Find the convex hull of C.
6. Combine the two convex hulls by finding the upper and lower bridges that connect them.

Upper and Lower Bridges. The upper bridge between CH(B) and CH(C) is the edge vw, where v ∈ CH(B) and w ∈ CH(C), such that all other vertices in CH(B) and CH(C) are below vw; equivalently, the two neighbors of v in CH(B) and the two neighbors of w in CH(C) are below vw; equivalently, the counterclockwise-neighbor of v in CH(B) and the clockwise-neighbor of w in CH(C) are below vw, if v and w are chosen as in the next slide. Lower bridge: similar.

Finding the upper bridge (see the sketch below):

v ← the rightmost point in CH(B); w ← the leftmost point in CH(C)
Loop:
  if counterclockwise-neighbor(v) lies above line vw then
    v ← counterclockwise-neighbor(v)
  else if clockwise-neighbor(w) lies above line vw then
    w ← clockwise-neighbor(w)
  else exit from the loop
vw is the upper bridge.
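
A hedged Python sketch of this loop, assuming each hull is given as a list of its vertices in counterclockwise order (so the counterclockwise-neighbor and clockwise-neighbor are just index moves), and using the sign of the determinant from the next slide to test "lies above":

def cross(o, a, b):
    # Twice the signed area of triangle (o, a, b); positive iff counterclockwise.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def above(v, w, p):
    # True iff p lies strictly above the line vw (here v is always to the left of w).
    return cross(v, w, p) > 0

def upper_bridge(hull_b, hull_c):
    # hull_b, hull_c: vertices of CH(B) and CH(C) in counterclockwise order.
    # Returns indices (i, j) such that hull_b[i] hull_c[j] is the upper bridge.
    i = max(range(len(hull_b)), key=lambda k: hull_b[k][0])   # rightmost point of CH(B)
    j = min(range(len(hull_c)), key=lambda k: hull_c[k][0])   # leftmost point of CH(C)
    while True:
        if above(hull_b[i], hull_c[j], hull_b[(i + 1) % len(hull_b)]):
            i = (i + 1) % len(hull_b)          # advance v to its counterclockwise-neighbor
        elif above(hull_b[i], hull_c[j], hull_c[(j - 1) % len(hull_c)]):
            j = (j - 1) % len(hull_c)          # advance w to its clockwise-neighbor
        else:
            return i, j                        # neither neighbor is above: vw is the bridge

B = [(0, 0), (2, 0), (2, 2), (0, 2)]           # CH(B), counterclockwise
C = [(4, 0), (6, 0), (6, 3), (4, 3)]           # CH(C), counterclockwise
i, j = upper_bridge(B, C)
print(B[i], C[j])                              # (0, 2) (4, 3)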

Data Structure and Time Complexity. What data structure will you use to represent a convex hull? Using your data structure, how much time will it take to find the upper and lower bridges? What is the overall running time of the algorithm? We assumed: (1) no two points in A share the same x-coordinate; (2) no three points in A are collinear. Now let's remove these assumptions.

Orientation of three points. Three points: p1(x1, y1), p2(x2, y2), p3(x3, y3). (p1, p2, p3) in that order is counterclockwise if

| x1  y1  1 |
| x2  y2  1 |  >  0
| x3  y3  1 |

Clockwise if the determinant is negative. Collinear if the determinant is zero.
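
In code the test is the familiar cross product, which expands the same determinant (a small sketch):

def orientation(p1, p2, p3):
    # Sign of | x1 y1 1 ; x2 y2 1 ; x3 y3 1 |:
    # +1 -> (p1, p2, p3) is counterclockwise, -1 -> clockwise, 0 -> collinear.
    det = (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0])
    return (det > 0) - (det < 0)

print(orientation((0, 0), (1, 0), (1, 1)))     #  1, counterclockwise
print(orientation((0, 0), (1, 1), (2, 2)))     #  0, collinear
print(orientation((0, 0), (0, 1), (1, 1)))     # -1, clockwise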