Activity selection. Goal: Select the largest possible set of nonoverlapping (mutually compatible) activities.


Greedy Algorithm

Introduction Similar to dynamic programming. Used for optimization problems. Does not always yield an optimal solution. Makes the choice that looks best right now, i.e., a locally optimal choice, in the hope of reaching a globally optimal solution.

Activity selection n activities require exclusive use of a common resource. For example, scheduling the use of a classroom. Set of activities S = {a_1, ..., a_n}. Activity a_i needs the resource during the period [s_i, f_i), a half-open interval, where s_i = start time and f_i = finish time. Goal: Select the largest possible set of nonoverlapping (mutually compatible) activities. Note: Could have many other objectives: Schedule the room for the longest total time. Maximize income from rental fees.

Activity selection Example: S sorted by finish time

i     1  2  3  4  5  6  7   8   9   10  11
s_i   1  3  0  5  3  5  6   8   8   2   12
f_i   4  5  6  7  8  9  10  11  12  13  14

Maximum-size mutually compatible set: {a_1, a_4, a_8, a_11}. Not unique: also {a_2, a_4, a_9, a_11}.

Activity selection Optimal substructure of activity selection S_ij = {a_k ∈ S : f_i ≤ s_k < f_k ≤ s_j} = activities that start after a_i finishes and finish before a_j starts. [Figure: a_k lies between a_i and a_j on the time line, with f_i ≤ s_k < f_k ≤ s_j.]

Activity selection Activities in S_ij are compatible with all activities that finish by f_i, and all activities that start no earlier than s_j. To represent the entire problem, add fictitious activities: a_0 = [−∞, 0) and a_{n+1} = [∞, ∞ + 1); we don't care about the values −∞ and ∞ + 1. Then S = S_{0,n+1}. Range for S_ij is 0 ≤ i, j ≤ n + 1.

Activity selection Assume that activities are sorted by monotonically increasing finish time: f_0 ≤ f_1 ≤ f_2 ≤ ... ≤ f_n < f_{n+1}. Then i ≥ j implies S_ij = Ø: if there existed a_k ∈ S_ij, we would have f_i ≤ s_k < f_k ≤ s_j < f_j, so f_i < f_j. But i ≥ j implies f_i ≥ f_j. Contradiction. So we only need to worry about S_ij with 0 ≤ i < j ≤ n + 1. All other S_ij are Ø.

Activity selection Suppose that a solution to S_ij includes a_k. Then we have 2 subproblems: S_ik (start after a_i finishes, finish before a_k starts) and S_kj (start after a_k finishes, finish before a_j starts). The solution to S_ij is (solution to S_ik) ∪ {a_k} ∪ (solution to S_kj). Since a_k is in neither subproblem, and the subproblems are disjoint, |solution to S_ij| = |solution to S_ik| + 1 + |solution to S_kj|.

Activity selection If an optimal solution to S_ij includes a_k, then the solutions to S_ik and S_kj used within this solution must be optimal as well. Use the usual cut-and-paste argument. Let A_ij = optimal solution to S_ij. So A_ij = A_ik ∪ {a_k} ∪ A_kj, assuming: S_ij is nonempty, and we know a_k.

Activity selection Recursive solution to activity selection c[i, j] = size of a maximum-size subset of mutually compatible activities in S_ij. If S_ij = Ø, then c[i, j] = 0. If S_ij ≠ Ø, suppose we know that a_k is in the subset. Then c[i, j] = c[i, k] + 1 + c[k, j]. But of course we don't know which k to use, and so

c[i, j] = 0                                               if S_ij = Ø,
c[i, j] = max { c[i, k] + c[k, j] + 1 : a_k ∈ S_ij, i < k < j }   if S_ij ≠ Ø.
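As a sanity check on this recurrence, here is a minimal memoized sketch in Python (the function name max_compatible and the ±∞ sentinels standing in for a_0 and a_{n+1} are my choices, not from the slides):

from functools import lru_cache

def max_compatible(s, f):
    """Size of a maximum set of mutually compatible activities,
    computed directly from the recurrence for c[i, j]."""
    n = len(s)
    # Fictitious activities a_0 = [-inf, 0) and a_{n+1} = [inf, inf).
    start = [float("-inf")] + list(s) + [float("inf")]
    finish = [0.0] + list(f) + [float("inf")]

    @lru_cache(maxsize=None)
    def c(i, j):
        best = 0
        for k in range(i + 1, j):
            # a_k is in S_ij iff it starts no earlier than f_i
            # and finishes no later than s_j.
            if start[k] >= finish[i] and finish[k] <= start[j]:
                best = max(best, c(i, k) + 1 + c(k, j))
        return best

    return c(0, n + 1)

# Example from the slides: expect 4 = |{a_1, a_4, a_8, a_11}|.
print(max_compatible([1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12],
                     [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]))   # 4

Note the cost: O(n) choices of k for each of O(n^2) subproblems. The greedy choice developed below eliminates both the quadratic table and the search over k.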

Activity selection Theorem Let S_ij ≠ Ø, and let a_m be the activity in S_ij with the earliest finish time: f_m = min {f_k : a_k ∈ S_ij}. Then: 1. a_m is used in some maximum-size subset of mutually compatible activities of S_ij. 2. S_im = Ø, so that choosing a_m leaves S_mj as the only nonempty subproblem.

Activity selection Proof 2. Suppose there is some a_k ∈ S_im. Then f_i ≤ s_k < f_k ≤ s_m < f_m, so f_k < f_m. Then a_k ∈ S_ij and it has an earlier finish time than f_m, which contradicts our choice of a_m. Therefore, there is no a_k ∈ S_im, i.e., S_im = Ø. 1. Let A_ij be a maximum-size subset of mutually compatible activities in S_ij. Order the activities in A_ij in monotonically increasing order of finish time. Let a_k be the first activity in A_ij. If a_k = a_m, done (a_m is used in a maximum-size subset). Otherwise, construct A'_ij = A_ij − {a_k} ∪ {a_m} (replace a_k by a_m).

Activity selection Claim Activities in A'_ij are disjoint. Proof Activities in A_ij are disjoint, a_k is the first activity in A_ij to finish, and f_m ≤ f_k (so a_m doesn't overlap with anything else in A'_ij). (claim) Since |A'_ij| = |A_ij| and A_ij is a maximum-size subset, so is A'_ij. (theorem)

Activity selection This is great:

                                        before theorem   after theorem
# of subproblems in optimal solution    2                1
# of choices to consider                j - i - 1        1

Now we can solve top down: To solve a problem S_ij, choose a_m ∈ S_ij with the earliest finish time: the greedy choice. Then solve S_mj.

Activity selection What are the subproblems? The original problem is S_{0,n+1}. Suppose our first choice is a_{m1}. Then the next subproblem is S_{m1,n+1}. Suppose the next choice is a_{m2}. The next subproblem is S_{m2,n+1}. And so on. Each subproblem has the form S_{mi,n+1}, i.e., the last activities to finish. And the subproblems chosen have finish times that increase. Therefore, we can consider each activity just once, in monotonically increasing order of finish time.

Activity selection Easy recursive algorithm: Assumes activities are already sorted by monotonically increasing finish time. (If not, then sort in O(n lg n) time.) Return an optimal solution for S_{i,n+1}:

REC-ACTIVITY-SELECTOR(s, f, i, n)
  m ← i + 1
  while m ≤ n and s_m < f_i      ▷ find the first activity in S_{i,n+1}
      do m ← m + 1
  if m ≤ n
      then return {a_m} ∪ REC-ACTIVITY-SELECTOR(s, f, m, n)
      else return Ø
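A direct Python transliteration of this pseudocode, as a sketch (1-based indexing is simulated with dummy entries at position 0, so f[0] = 0 plays the role of the fictitious activity a_0; the function name is mine):

def rec_activity_selector(s, f, i, n):
    # Return the indices of an optimal solution for S_{i,n+1};
    # s and f are 1-indexed via dummies at position 0, with f[0] = 0.
    m = i + 1
    while m <= n and s[m] < f[i]:   # find the first activity in S_{i,n+1}
        m += 1
    if m <= n:
        return [m] + rec_activity_selector(s, f, m, n)
    return []                        # no compatible activity remains

# Start and finish times from the example (index 0 is the dummy a_0):
s = [None, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [0,    4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
print(rec_activity_selector(s, f, 0, 11))   # [1, 4, 8, 11]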

Activity selection Initial call: REC-ACTIVITY-SELECTOR(s, f, 0, n). Idea: The while loop checks a_{i+1}, a_{i+2}, ..., a_n until it finds an activity a_m that is compatible with a_i (needs s_m ≥ f_i). If the loop terminates because such an a_m is found (m ≤ n), then recursively solve S_{m,n+1} and return this solution along with a_m. If the loop never finds a compatible a_m (m > n), then just return the empty set. Going through the earlier example gives {a_1, a_4, a_8, a_11}. Time: Θ(n), since each activity is examined exactly once.

[Figure: trace of REC-ACTIVITY-SELECTOR on the example, drawn against the time line 0 to 14. The initial call REC-ACTIVITY-SELECTOR(s, f, 0, 11) selects m = 1; the successive recursive calls (s, f, 1, 11), (s, f, 4, 11), (s, f, 8, 11) select m = 4, 8, 11; the final call (s, f, 11, 11) returns Ø, leaving the solution {a_1, a_4, a_8, a_11}.]

Activity selection Can make this iterative.

GREEDY-ACTIVITY-SELECTOR(s, f, n)
  A ← {a_1}
  i ← 1                          ▷ a_i is the most recent addition to A
  for m ← 2 to n
      do if s_m ≥ f_i
             then A ← A ∪ {a_m}
                  i ← m
  return A

Going through the earlier example again gives {a_1, a_4, a_8, a_11}. Time: Θ(n).
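The same translation for the iterative version, reusing s and f from the sketch above (again a sketch with my naming, not the slides' code):

def greedy_activity_selector(s, f, n):
    A = [1]                  # a_1 has the earliest finish time overall
    i = 1                    # a_i is the most recent addition to A
    for m in range(2, n + 1):
        if s[m] >= f[i]:     # a_m is compatible with a_i
            A.append(m)
            i = m
    return A

print(greedy_activity_selector(s, f, 11))   # [1, 4, 8, 11]

One linear pass replaces the recursion; the loop index m plays exactly the role of the while loop in REC-ACTIVITY-SELECTOR.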

Greedy strategy The choice that seems best at the moment is the one we go with. What did we do for activity selection? 1. Determine the optimal substructure. 2. Develop a recursive solution. 3. Prove that at any stage of the recursion, one of the optimal choices is the greedy choice; therefore, it's always safe to make the greedy choice. 4. Show that all but one of the subproblems resulting from the greedy choice are empty. 5. Develop a recursive greedy algorithm. 6. Convert it to an iterative algorithm.

Greedy strategy Typical streamlined steps: 1. Cast the optimization problem as one in which we make a choice and are left with one subproblem to solve. 2. Prove that there's always an optimal solution that makes the greedy choice, so that the greedy choice is always safe. 3. Show that (greedy choice) + (optimal solution to the subproblem) implies an optimal solution to the problem. There is no general way to tell whether a greedy algorithm is optimal, but two key ingredients are 1. the greedy-choice property and 2. optimal substructure.

Greedy strategy Dynamic programming: Make a choice at each step. The choice depends on knowing optimal solutions to subproblems, so solve the subproblems first. Solve bottom-up. Greedy: Make a choice at each step. Make the choice before solving the subproblems. Solve top-down.

Greedy strategy Greedy vs. dynamic programming: The knapsack problem is a good example of the difference. 0-1 knapsack problem: n items. Item i is worth $v_i and weighs w_i pounds. Find a most valuable subset of items with total weight ≤ W. Have to either take an item or not take it; can't take part of it.

Greedy strategy Fractional knapsack problem: Like the 0-1 knapsack problem, but we can take a fraction of an item. Both have optimal substructure. But the fractional knapsack problem has the greedy-choice property, and the 0-1 knapsack problem does not: no greedy choice is guaranteed to return an optimal solution. To solve the fractional problem, rank items by value per weight: v_i/w_i. Assume v_i/w_i ≥ v_{i+1}/w_{i+1} for all i.

Greedy strategy

FRACTIONAL-KNAPSACK(v, w, W)
  load ← 0
  i ← 1
  while load < W and i ≤ n
      do if w_i ≤ W − load
             then take all of item i
             else take (W − load)/w_i of item i
         add what was taken to load
         i ← i + 1               ▷ move on to the next most valuable item

Taking the items in order of greatest value per pound yields an optimal solution.
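A minimal runnable sketch of this loop (items assumed pre-sorted by v_i/w_i in decreasing order; it returns the total value collected rather than the literal "load"; names are mine):

def fractional_knapsack(v, w, W):
    # v, w are parallel lists sorted so v[i]/w[i] is non-increasing.
    load = 0.0    # weight taken so far
    value = 0.0   # value taken so far
    for vi, wi in zip(v, w):
        if load >= W:
            break
        if wi <= W - load:
            load += wi                        # take all of item i
            value += vi
        else:
            value += vi * (W - load) / wi     # take a fraction of item i
            load = W
    return value

# Example from the next slide: 60 + 100 + (2/3)*120 = 240.
print(fractional_knapsack([60, 100, 120], [10, 20, 30], 50))   # 240.0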

Greedy strategy Time: O(n lg n) to sort, O(n) thereafter. Greedy doesn't work for the 0-1 knapsack problem: it might leave empty space, which lowers the average value per pound of the items taken.

i   v_i   w_i   v_i/w_i
1   60    10    6
2   100   20    5
3   120   30    4

W = 50

Greedy strategy Greedy solution: Take items 1 and 2. Value = 160, weight = 30. Have 20 pounds of capacity left over. So the 0-1 knapsack problem can't be solved with the greedy strategy (i.e., greedy doesn't give the optimal solution). The fractional knapsack problem can be solved with the greedy strategy: add 20 more pounds (2/3 of item 3) at $4/pound for $80 more value, $240 total. Optimal 0-1 solution (not found by greedy): Take items 2 and 3. Value = 220, weight = 50. No leftover capacity.
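To check the 0-1 numbers concretely, here is a brute-force enumeration over all subsets of the 3-item instance (purely illustrative; exponential in general):

from itertools import combinations

v, w, W = [60, 100, 120], [10, 20, 30], 50
best = max(
    (sum(v[i] for i in sub), sub)
    for r in range(len(v) + 1)
    for sub in combinations(range(len(v)), r)
    if sum(w[i] for i in sub) <= W
)
print(best)   # (220, (1, 2)): items 2 and 3, beating greedy's 160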