Knapsack and Scheduling Problems. The Greedy Method


The Greedy Method: Knapsack and Scheduling Problems

Outline and Reading
- Task Scheduling
- The Fractional Knapsack Problem

Elements of the Greedy Strategy
A greedy algorithm makes a sequence of choices; at each step it commits to the choice that seems best at the moment. This does NOT always produce an optimal solution. Most problems that lend themselves to a greedy strategy exhibit two ingredients:
- the greedy-choice property
- optimal substructure

Greedy-Choice Property
A globally optimal solution can be arrived at by making a locally optimal (greedy) choice: make whatever choice seems best at the moment, and then solve the sub-problem arising after the choice is made. The choice made by a greedy algorithm may depend on the choices made so far, but it cannot depend on any future choices or on the solutions to sub-problems. Of course, we must prove that making a greedy choice at each step yields a globally optimal solution.

Task Scheduling
Given: a set T of n tasks, each having a start time s_i and a finish time f_i (where s_i < f_i).
Goal: Perform all the tasks using a minimum number of machines.
[Figure: tasks laid out on Machines 1-3 along a time axis from 1 to 9.]

Task Scheduling Algorithm
Greedy choice: consider tasks by their start time, and use as few machines as possible with this order. Run time: O(n log n).
Correctness: Suppose a better schedule could use only k - 1 machines while the algorithm uses k. Let i be the first task scheduled on machine k. Task i must conflict with k - 1 other tasks, one on each of the other machines, so there are k mutually conflicting tasks. But then no non-conflicting schedule can use only k - 1 machines, a contradiction.

Algorithm taskSchedule(T)
  Input: set T of tasks with start time s_i and finish time f_i
  Output: a non-conflicting schedule using a minimum number of machines
  m ← 0   {no. of machines}
  while T is not empty do
    remove the task i with smallest s_i
    if there is a machine j free for i then
      schedule i on machine j
    else
      m ← m + 1
      schedule i on machine m
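The pseudocode above maps directly onto a min-heap keyed on each machine's latest finish time, so finding a free machine costs O(log n) and the whole algorithm costs O(n log n). A minimal Python sketch (the name min_machines is mine, not from the slides; tasks touching only at an endpoint are treated as non-conflicting, the convention used in the example that follows):

    import heapq

    def min_machines(tasks):
        """tasks: list of (start, finish) pairs; returns machines needed."""
        finish_heap = []  # latest finish time on each machine in use
        for s, f in sorted(tasks):                  # consider tasks by start time
            if finish_heap and finish_heap[0] <= s:
                heapq.heapreplace(finish_heap, f)   # reuse the freed machine
            else:
                heapq.heappush(finish_heap, f)      # open a new machine
        return len(finish_heap)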

Example
Given: a set T of n tasks, each with a start time s_i and a finish time f_i (s_i < f_i):
[1,4], [1,3], [2,5], [3,7], [4,7], [6,9], [7,8] (ordered by start)
Goal: Perform all tasks on a minimum number of machines.
[Figure: the seven tasks packed onto Machines 1-3 along a time axis from 1 to 9.]
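Running the min_machines sketch above on this instance reproduces the figure's answer of three machines:

    tasks = [(1, 4), (1, 3), (2, 5), (3, 7), (4, 7), (6, 9), (7, 8)]
    print(min_machines(tasks))   # 3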

Event Scheduling Example
A banquet hall manager has received a list of reservation requests for the exclusive use of her hall during specified time intervals. She wishes to grant the maximum number of reservation requests that have no time-overlap conflicts. Help her select the maximum number of conflict-free time intervals.

Scheduling Problem Statement
INPUT: A set S = {I_1, I_2, ..., I_n} of n event time-intervals I_k = [s_k, f_k], k = 1..n, where s_k is the start time and f_k the finishing time of I_k (s_k < f_k).
OUTPUT: A maximum-cardinality subset C ⊆ S of mutually compatible intervals (i.e., no overlapping pairs).
Example: S = the intervals shown below, C = the blue intervals, |C| = 4. C is not the unique optimum. Can you spot another optimum solution?
[Figure: the intervals of S drawn along a time axis, with the four intervals of C highlighted in blue.]

Some Greedy Heuristics
Greedy iteration step: from among the undecided intervals, select the interval I that looks BEST. Commit to I if it is conflict-free (i.e., it does not overlap with the intervals committed so far); reject I otherwise.
- Greedy 1: BEST = earliest start time (min s_k).
- Greedy 2: BEST = latest finishing time (max f_k).
- Greedy 3: BEST = shortest interval (min f_k - s_k).
- Greedy 4: BEST = overlaps with the fewest undecided intervals.
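The rule adopted on the following slides sorts activities by finish time and repeatedly commits to the earliest-finishing compatible interval. Small counterexamples (instances of my own, not from the slides) show why Greedy 1 and Greedy 3 can fail:

    # Greedy 1 (earliest start) commits to (1, 10), which blocks both others;
    # the optimum selects the two short intervals.
    bad_for_greedy1 = [(1, 10), (2, 3), (4, 5)]

    # Greedy 3 (shortest) commits to (9, 12), which blocks both others;
    # the optimum selects (1, 10) and (11, 20).
    bad_for_greedy3 = [(1, 10), (9, 12), (11, 20)]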

Activities sorted by finish time
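The algorithm suggested by this ordering scans the activities by increasing finish time and commits to each one that is compatible with everything chosen so far. A minimal Python sketch (the name select_activities is mine):

    def select_activities(intervals):
        """intervals: list of (start, finish); returns a max-size compatible subset."""
        chosen, last_finish = [], float("-inf")
        for s, f in sorted(intervals, key=lambda iv: iv[1]):  # earliest finish first
            if s >= last_finish:       # compatible with everything chosen so far
                chosen.append((s, f))
                last_finish = f
        return chosen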

Why is it Greedy?
It is greedy in the sense that it leaves as much opportunity as possible for the remaining activities to be scheduled: the greedy choice is the one that maximizes the amount of unscheduled time remaining.

Why this Algorithm is Optimal
The algorithm has the following properties:
- the optimal substructure property
- it satisfies the greedy-choice property
Thus, it is optimal.

Greedy-Choice Property
Show that there is an optimal solution that begins with a greedy choice (with activity 1, which has the earliest finish time). Suppose A ⊆ S is an optimal solution, and order the activities in A by finish time; let the first activity in A be k.
- If k = 1, the schedule A begins with a greedy choice.
- If k ≠ 1, we show that there is an optimal solution B to S that begins with the greedy choice, activity 1. Let B = (A - {k}) ∪ {1}. Since f_1 ≤ f_k, the activities in B are disjoint (compatible), and B has the same number of activities as A. Thus, B is optimal.

Optimal Substructures
Once the greedy choice of activity 1 is made, the problem reduces to finding an optimal solution for the activity-selection problem over those activities in S that are compatible with activity 1.
Optimal substructure: if A is optimal for S, then A' = A - {1} is optimal for S' = {i ∈ S : s_i ≥ f_1}.
Why? If we could find a solution B' to S' with more activities than A', adding activity 1 to B' would yield a solution B to S with more activities than A, contradicting the optimality of A.
After each greedy choice is made, we are left with an optimization problem of the same form as the original problem. By induction on the number of choices made, making the greedy choice at every step produces an optimal solution.

Optimal Substructures
A problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to sub-problems. Here: if an optimal solution A to S begins with activity 1, then A' = A - {1} is optimal for S' = {i ∈ S : s_i ≥ f_1}.

Knapsack Problem
One wants to pack n items into a piece of luggage. The i-th item is worth v_i dollars and weighs w_i pounds. Maximize the value carried without exceeding W pounds; v_i, w_i, and W are integers.
- 0-1 knapsack: each item is either taken or not taken.
- Fractional knapsack: fractions of items can be taken.
Both exhibit the optimal-substructure property:
- 0-1: if item j is removed from an optimal packing, the remaining packing is an optimal packing of weight at most W - w_j chosen from the other n - 1 items.
- Fractional: if w pounds of item j are removed from an optimal packing, the remaining packing is an optimal packing of weight at most W - w that can be taken from the other n - 1 items plus w_j - w pounds of item j.

The Fractional Knapsack Problem
Given: a set S of n items, each item i having a positive benefit b_i and a positive weight w_i.
Goal: choose items with maximum total benefit but with total weight at most W. If we are allowed to take fractional amounts, this is the fractional knapsack problem. In this case, let x_i denote the amount we take of item i (0 ≤ x_i ≤ w_i).
Objective: maximize Σ_{i ∈ S} b_i (x_i / w_i)
Constraint: Σ_{i ∈ S} x_i ≤ W

Example
Given: a set S of n items, each item i having a positive benefit b_i and a positive weight (here, volume) w_i.
Goal: choose items with maximum total benefit but with total volume at most W = 10 ml.

  Item            1      2      3      4      5
  Volume (ml)     4      8      2      6      1
  Benefit       $12    $32    $40    $30    $50
  Value ($/ml)    3      4     20      5     50

Solution: 1 ml of item 5, 2 ml of item 3, 6 ml of item 4, and 1 ml of item 2, for a total benefit of $50 + $40 + $30 + $4 = $124.

The Fractional Knapsack Algorithm
Greedy choice: keep taking the item with the highest value (benefit-to-weight ratio). Using a heap-based priority queue to store the items, the time complexity is O(n log n).
Correctness: Suppose there is a better solution. Then it takes more of some item i and less of some chosen item j with a lower value (i.e., v_j < v_i); but replacing some amount of j with i would yield an even better solution, a contradiction. Thus, there is no better solution than the greedy one.

Algorithm fractionalKnapsack(S, W)
  Input: set S of items with benefit b_i and weight w_i; maximum weight W
  Output: amount x_i of each item i maximizing the total benefit with weight at most W
  for each item i in S do
    x_i ← 0
    v_i ← b_i / w_i   {value}
  w ← 0   {current total weight}
  while w < W do
    remove the item i with highest v_i
    x_i ← min{w_i, W - w}
    w ← w + min{w_i, W - w}
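A Python sketch of fractionalKnapsack (the name fractional_knapsack is mine; plain sorting stands in for the heap-based priority queue, and both give O(n log n)). On the 10 ml example above it returns a total benefit of 124.0:

    def fractional_knapsack(items, W):
        """items: list of (benefit, weight); returns (total benefit, amounts)."""
        order = sorted(range(len(items)),
                       key=lambda i: items[i][0] / items[i][1], reverse=True)
        x = [0.0] * len(items)         # amount x_i taken of each item
        total, room = 0.0, float(W)
        for i in order:                # highest benefit-to-weight ratio first
            if room <= 0:
                break
            b, w = items[i]
            x[i] = min(w, room)        # take as much of item i as fits
            total += b * (x[i] / w)
            room -= x[i]
        return total, x

    items = [(12, 4), (32, 8), (40, 2), (30, 6), (50, 1)]   # (benefit, volume)
    print(fractional_knapsack(items, 10)[0])                # 124.0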

Greedy Algorithm for the Fractional Knapsack Problem
The fractional knapsack problem can be solved by the greedy strategy:
- Compute the value per pound v_i / w_i for each item.
- Take as much as possible of the item with the greatest value per pound.
- If the supply of that item is exhausted and there is still room, take as much as possible of the item with the next-greatest value per pound, and so forth, until there is no more room.
This takes O(n lg n) time (we need to sort the items by value per pound). Greedy algorithm? Correctness?

0-1 Knapsack is Harder!
The 0-1 knapsack problem cannot be solved by the greedy strategy: greedy may be unable to fill the knapsack to capacity, and the empty space lowers the effective value per pound of the packing. We must compare the solution to the sub-problem in which the item is included with the solution to the sub-problem in which it is excluded before we can make the choice.

Why is 0-1 Knapsack Sub-optimal for Greedy? Can you define a counter-example to the greedy-choice property?
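A standard counterexample (it appears in CLRS, Section 16.2): with W = 50 and items of (weight, value) = (10, $60), (20, $100), (30, $120), greedy by value per pound takes items 1 and 2 for $160, while the optimum takes items 2 and 3 for $220. A brute-force check of the optimum for such tiny instances (a sketch of my own):

    from itertools import combinations

    def best_01(items, W):
        """Optimal 0-1 knapsack value by exhaustive search; items: (weight, value)."""
        return max(sum(v for _, v in combo)
                   for r in range(len(items) + 1)
                   for combo in combinations(items, r)
                   if sum(w for w, _ in combo) <= W)

    items = [(10, 60), (20, 100), (30, 120)]
    print(best_01(items, 50))   # 220 (greedy by value/weight gets only 160)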