Dynamic Programming (Weighted Interval Scheduling)

17 November, 2016

Dynamic Programming

1. Dynamic programming algorithms are used for optimization (for example, finding the shortest path between two points, or the fastest way to multiply many matrices).
2. We use the basic idea of divide and conquer: divide the problem into a number of subproblems.
3. There is a polynomial number of subproblems (if the input is of size n, then there are at most n, n^2, ..., n^5 subproblems, say, and not 2^n subproblems).
4. The solution to the original problem can be easily computed from the solutions to the subproblems (e.g., sum the solutions to the subproblems to get the solution to the original problem).
5. The idea is to solve each subproblem only once. Once the solution to a given subproblem has been computed, it is stored, or memoized; the next time the same solution is needed, it is simply looked up. This way we reduce the number of computations.
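
Point 5 is the heart of the technique. As a minimal illustration (my own, not from the slides), here is a memoized Fibonacci in Python: the naive recursion takes roughly 2^n steps, while the cached version computes each subproblem only once.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n):
        # The naive recurrence; the cache stores each answer so that
        # every subproblem is computed once and then just looked up.
        if n <= 1:
            return n
        return fib(n - 1) + fib(n - 2)

    print(fib(40))  # 102334155, reached in O(n) calls instead of ~2^n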

Problem Statement:

1. We have a resource, and many people request to use the resource for periods of time (intervals of time).
2. Each interval (request) i has a start time s_i and a finish time f_i.
3. Each interval (request) i has a value v_i.

Conditions: the resource can be used by at most one person at a time, so we can accept only compatible (overlap-free) intervals (requests).

Goal: find a set of compatible intervals (requests) with maximum total value.

What if all the values are the same? We start with the interval that has the smallest finish time, add it to our solution, remove the intervals that intersect it, and repeat!
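
A short Python sketch of this greedy rule (my illustration, not from the slides); intervals are (start, finish) pairs, and two intervals that merely touch at a single point are treated as compatible:

    def max_compatible(intervals):
        # Repeatedly take the earliest-finishing interval that starts
        # no earlier than the finish of the last accepted interval.
        chosen = []
        last_finish = float('-inf')
        for s, f in sorted(intervals, key=lambda iv: iv[1]):
            if s >= last_finish:
                chosen.append((s, f))
                last_finish = f
        return chosen

    print(max_compatible([(1, 3), (2, 5), (4, 7), (6, 8), (8, 10)]))
    # -> [(1, 3), (4, 7), (8, 10)]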

[Figure: six intervals on a timeline, numbered 1 to 6, with values v_1 = 2, v_2 = 4, v_3 = 4, v_4 = 7, v_5 = 2, v_6 = 1.]

We order the intervals (requests) by finish time (in nondecreasing order): f_1 ≤ f_2 ≤ ... ≤ f_n.

For every interval j, let p(j) be the largest index i < j such that intervals i and j do not overlap (do not have more than one point in common); p(j) = 0 if there is no interval before j that is disjoint from j.

[Figure: the same six intervals, now annotated with p(1) = 0, p(2) = 0, p(3) = 0, p(4) = 0, p(5) = 3, p(6) = 4.]
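
With the intervals sorted by finish time, all p(j) values can be computed in O(n log n) by binary search. A sketch (my own helper, not from the slides), using the same convention that intervals touching at a single point are compatible:

    from bisect import bisect_right

    def compute_p(intervals):
        # intervals: (start, finish, value) triples sorted by finish time.
        # Returns p[1..n] with p[0] as a dummy, matching the 1-based slides.
        finishes = [f for _, f, _ in intervals]
        p = [0] * (len(intervals) + 1)
        for j, (s, _, _) in enumerate(intervals, start=1):
            # Count intervals among the first j-1 whose finish is <= s;
            # since finishes are sorted, that count is the largest valid i.
            p[j] = bisect_right(finishes, s, 0, j - 1)
        return p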

Let OPT(j) be the value of an optimal solution that considers only intervals 1 to j (according to their order).

We can write a recursive formula to compute OPT(j): either interval j is in the optimal solution, or j is not in the solution. Therefore:

    OPT(j) = max{ OPT(j-1), v_j + OPT(p(j)) }

Try not to implement this with a plain recursive call, because the running time would be exponential: the same subproblems get recomputed over and over. A recursive function is easy to implement but time-consuming!
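
Memoization keeps the recursive formulation but avoids the blow-up; a sketch (my own, assuming v and p are precomputed 1-indexed lists):

    def make_opt(v, p):
        # v[j] is the value of interval j; p[j] is as defined above.
        memo = {0: 0}
        def opt(j):
            if j not in memo:
                memo[j] = max(opt(j - 1), v[j] + opt(p[j]))
            return memo[j]
        return opt
    # Usage: opt = make_opt(v, p); opt(n) returns the optimal value.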

Dynamic Programming

Iterative-Compute-Opt():
    M[0] = 0
    for j = 1 to n:
        M[j] = max{ v_j + M[p(j)], M[j-1] }

Find-Solution(j):
    if j = 0 then return
    else if v_j + M[p(j)] ≥ M[j-1] then
        print j
        Find-Solution(p(j))
    else
        Find-Solution(j-1)
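
Putting the pieces together, here is a runnable Python version of the whole algorithm (a sketch under the conventions above; all names are mine):

    from bisect import bisect_right

    def weighted_interval_scheduling(intervals):
        # intervals: (start, finish, value) triples.
        ivs = sorted(intervals, key=lambda t: t[1])   # f_1 <= ... <= f_n
        n = len(ivs)
        finishes = [f for _, f, _ in ivs]
        p = [0] * (n + 1)
        for j in range(1, n + 1):
            p[j] = bisect_right(finishes, ivs[j - 1][0], 0, j - 1)

        M = [0] * (n + 1)                             # M[j] = OPT(j)
        for j in range(1, n + 1):
            M[j] = max(ivs[j - 1][2] + M[p[j]], M[j - 1])

        # Trace back through M to recover one optimal set of intervals.
        chosen, j = [], n
        while j > 0:
            if ivs[j - 1][2] + M[p[j]] >= M[j - 1]:
                chosen.append(ivs[j - 1])
                j = p[j]
            else:
                j -= 1
        return M[n], chosen[::-1]

    print(weighted_interval_scheduling(
        [(0, 3, 2), (1, 5, 4), (4, 6, 4), (2, 9, 7), (6, 8, 2), (7, 10, 1)]))
    # -> (8, [(0, 3, 2), (4, 6, 4), (6, 8, 2)])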

Subset Sums and Knapsack Problem

Subset Sums (Problem Statement):

1. We are given n items {1, 2, ..., n}, and each item i has a weight w_i.
2. We are also given a bound W.

Goal: select a subset S of the items so that:

1. Σ_{i∈S} w_i ≤ W, and
2. Σ_{i∈S} w_i is maximized.

A greedy approach won't work if it is based on picking the biggest item first. Suppose we have the set {W/2 + 1, W/2, W/2} of items. If we choose W/2 + 1, then we cannot choose anything else; however, the optimum is W/2 + W/2.

A greedy approach won't work if it is based on picking the smallest item first, either. Suppose we have the set {1, W/2, W/2} of items. If we choose 1, then we can only add one W/2 and nothing more; however, the optimum is W/2 + W/2.

Exhaustive search: produce all the subsets and check which one satisfies the constraint (total weight ≤ W) and has maximum total weight. Running time: O(2^n).
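
The exhaustive search is easy to write down (illustration only; hopeless beyond small n):

    from itertools import combinations

    def subset_sum_brute(weights, W):
        # Try all 2^n subsets; keep the best total that fits within W.
        best = 0
        for r in range(len(weights) + 1):
            for combo in combinations(weights, r):
                if sum(combo) <= W:
                    best = max(best, sum(combo))
        return best

    print(subset_sum_brute([1, 3, 3], 6))  # -> 6 (take both 3s)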

Let OPT[i, w] be the maximum total weight of a set S ⊆ {1, 2, ..., i} whose total weight is at most w.

If w_i > w, then OPT[i, w] = OPT[i-1, w]; otherwise

    OPT[i, w] = max{ OPT[i-1, w], w_i + OPT[i-1, w - w_i] }

Subset-Sum(n, W):
    Array M[0..n, 0..W]
    for w = 0 to W: M[0, w] = 0
    for i = 1 to n:
        M[i, 0] = 0
        for w = 1 to W:
            if w < w_i then M[i, w] = M[i-1, w]
            else if w_i + M[i-1, w - w_i] > M[i-1, w] then
                M[i, w] = w_i + M[i-1, w - w_i]
            else M[i, w] = M[i-1, w]

(The initialization must cover the whole first row and first column, since entries M[i-1, w - w_i] with w = w_i read column 0.)

Time complexity: O(nW).
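
The same table in Python (a sketch; running it on the instance of the exercise below prints the filled matrix M row by row):

    def subset_sum(weights, W):
        # weights[i-1] is w_i; M[i][w] = OPT[i, w] as defined above.
        n = len(weights)
        M = [[0] * (W + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            wi = weights[i - 1]
            for w in range(1, W + 1):
                if wi > w:
                    M[i][w] = M[i - 1][w]
                else:
                    M[i][w] = max(M[i - 1][w], wi + M[i - 1][w - wi])
        return M

    for row in subset_sum([2, 2, 3], 6):
        print(row)
    # The last row ends with M[3][6] = 5, the optimal total weight.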

Exercise: Suppose W = 6 and n = 3, and the items have weights w_1 = w_2 = 2 and w_3 = 3. Fill out the matrix M.

Problem: We are given n jobs {J_1, J_2, ..., J_n}, where each J_i has an integer processing time p_i > 0. We have two identical machines M_1 and M_2, and we want to execute all the jobs. Schedule the jobs so that the finish time (the makespan) is minimized.
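
The slides leave this one open, but it can be attacked with the subset-sum machinery just developed: load one machine with a subset of jobs whose total time is as close as possible to half the overall total. A hedged sketch under that assumption:

    def min_makespan_two_machines(times):
        # reach[w] is True if some subset of the jobs sums to exactly w.
        total = sum(times)
        half = total // 2
        reach = [True] + [False] * half
        for p in times:
            for w in range(half, p - 1, -1):   # downward: each job used once
                reach[w] = reach[w] or reach[w - p]
        best = max(w for w in range(half + 1) if reach[w])
        return total - best                    # load of the fuller machine

    print(min_makespan_two_machines([3, 5, 7, 4]))  # -> 10, e.g. {3, 7} vs {5, 4}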

Knapsack Problem

Knapsack Problem (Statement):

1. We are given n items {1, 2, ..., n}, and each item i has a weight w_i.
2. Each item i also has a value v_i.
3. We are also given a bound W.

Goal: select a subset S of the items so that:

1. Σ_{i∈S} w_i ≤ W, and
2. Σ_{i∈S} v_i is maximized.

Let OPT[i, w] be the maximum total value of a set S ⊆ {1, 2, ..., i} whose total weight is at most w.

If w_i > w, then OPT[i, w] = OPT[i-1, w]; otherwise

    OPT[i, w] = max{ OPT[i-1, w], v_i + OPT[i-1, w - w_i] }
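
This is the subset-sum recurrence with w_i replaced by v_i in the objective; a matching Python sketch (my illustration):

    def knapsack(items, W):
        # items: (weight, value) pairs; M[i][w] = OPT[i, w] as defined above.
        n = len(items)
        M = [[0] * (W + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            wi, vi = items[i - 1]
            for w in range(1, W + 1):
                if wi > w:
                    M[i][w] = M[i - 1][w]
                else:
                    M[i][w] = max(M[i - 1][w], vi + M[i - 1][w - wi])
        return M[n][W]

    print(knapsack([(2, 3), (3, 4), (4, 5), (5, 8)], 5))  # -> 8 (take the (5, 8) item)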