Scheduling Lecture 1: Scheduling on One Machine


Loris Marchal
October 16, 2012

1 Generalities

1.1 Definition of scheduling

Scheduling is the allocation of limited resources to activities over time.
- activities (tasks, jobs): tasks in a computing environment, steps of a construction project, operations in a production process, lectures at the University, etc.
- resources (processors, machines): processors, workers, machines, lecturers, rooms, etc.
- objective: minimize total time, energy consumption, average service time, etc.

There are many variations on the model, on the resource/activity interaction, and on the objective.

Typical example: organize the production of a workshop by making a schedule for its machines. Making different objects may require different series of machines, with different processing times. The number of objects of each type (or their ratio) may be fixed, and we want to minimize the overall processing time (or maximize the throughput).

Another example: in an airport, allocate gates to airplanes arriving at different times; not all gates can accommodate all planes, and some gates are further away than others: how to minimize the time spent by passengers (taxiing and walking in the terminal)?

Other examples: schedule the activity of workers in a construction process, allocate rooms for lectures at the University, etc.

Our context: in a computing system, we have a number of machines at hand. These machines may be close to each other, or linked by a limited communication network. We want to use this platform to execute an application composed of several tasks. Tasks may have precedence constraints, or other types of constraints. Several users may share the platform, which might be widely distributed, or hierarchical.

1.2 Graham notation

Classes of scheduling problems can be specified in terms of the three-field classification α | β | γ, where:
- α specifies the machine environment,
- β specifies the job characteristics,
- γ describes the objective function(s).
We will illustrate this notation on all the following scheduling problems.

2 Scheduling with a single machine

Motivations:
- understand the complexity of the problem for various objectives
- some actual systems offer flexibility and may well be modeled by a single machine (e.g. virtual machines)
- practice usual scheduling techniques

Objectives based on the completion times C_i:
- makespan: C_max = max_i C_i (most common objective)
- total flow time: sum_{j=1..n} C_j
- weighted (total) flow time: sum_{j=1..n} w_j C_j

2.1 First example, with a new objective: 1 | | sum w_i C_i, polynomial (Smith ratio)

Let us consider the problem 1 | | sum w_i C_i:
- 1 machine
- no constraints on the tasks (length p_i)
- objective: weighted sum of completion times

Example:

  task     A    B    C     D
  w_i      2    1    3     1
  p_i      2    2    4     4
  w_i/p_i  1    0.5  0.75  0.25

Intuitions:
- put high weights first (C, A, B, D, cost: 12 + 12 + 8 + 12 = 44)
- put longer tasks last (B, A, C, D, cost: 2 + 8 + 24 + 12 = 46)

Smith's rule: order the tasks by non-increasing Smith ratio, w_1/p_1 >= w_2/p_2 >= ... >= w_n/p_n (on the example: A, C, B, D, cost: 4 + 18 + 8 + 12 = 42).

Proof: Consider an optimal schedule S that does not follow this order. Let i and j be two consecutive tasks in S, with i starting at some time t just before j, such that w_i/p_i < w_j/p_j.
- contribution of these two tasks to the objective in S: S_i = (w_i + w_j)(t + p_i) + w_j p_j
- contribution of these two tasks if they are switched: S_j = (w_i + w_j)(t + p_j) + w_i p_i
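As a sanity check, Smith's rule and the two intuitive orders can be compared on the example instance above (a minimal sketch; the function name and the dictionary encoding are mine, not from the lecture):

```python
def weighted_flow_time(order, w, p):
    """Total weighted completion time sum w_j C_j of a processing order."""
    t, cost = 0, 0
    for j in order:
        t += p[j]        # completion time C_j of job j
        cost += w[j] * t
    return cost

# The example instance: weights and lengths of tasks A, B, C, D.
w = {"A": 2, "B": 1, "C": 3, "D": 1}
p = {"A": 2, "B": 2, "C": 4, "D": 4}

# Smith's rule: sort by non-increasing ratio w_j / p_j.
wspt = sorted(w, key=lambda j: w[j] / p[j], reverse=True)

print(wspt)                                            # ['A', 'C', 'B', 'D']
print(weighted_flow_time(["C", "A", "B", "D"], w, p))  # 44 (high weights first)
print(weighted_flow_time(["B", "A", "C", "D"], w, p))  # 46 (long tasks last)
print(weighted_flow_time(wspt, w, p))                  # 42 (optimal)
```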

We have S_i - S_j = w_j p_i - w_i p_j > 0, since w_i/p_i < w_j/p_j. Thus we decrease the objective by switching these two tasks, which contradicts the optimality of S.

2.2 Adding due dates

Other objectives in the Graham notation use due dates d_j (which (sometimes) appear in the job characteristics):
- lateness: L_j = C_j - d_j
- tardiness: T_j = max{0, C_j - d_j}
- unit penalty: U_j = 0 if C_j <= d_j, 1 otherwise

which gives the following objectives:
- total lateness sum L_j (equivalent to the total flow time, since sum d_j is a constant)
- maximum lateness: L_max = max_j L_j
- total tardiness: sum T_j
- total weighted tardiness: sum w_j T_j
- number of late activities: sum U_j
- weighted number of late activities: sum w_j U_j

2.2.1 EDF for 1 | | L_max

Let us study the problem 1 | | L_max: minimize the maximum lateness on one machine. The strategy that schedules first the jobs with the earliest due dates (Earliest Deadline First) is optimal.

Theorem 1. EDF is optimal for 1 | | L_max.

Proof. Assume there exists an optimal schedule S which does not follow the EDF rule. Consider the first pair of consecutive tasks k, l, processed in this order, such that d_k > d_l. Let S' be the schedule obtained by exchanging k and l in S, and let t be the starting time of k in S. Let L^S_i denote the lateness of task i in schedule S.

In schedule S:
  L^S_k = t + p_k - d_k
  L^S_l = t + p_k + p_l - d_l

In schedule S':
  L^S'_l = t + p_l - d_l < L^S_l
  L^S'_k = t + p_k + p_l - d_k = L^S_l + d_l - d_k < L^S_l   (since d_k > d_l)

Thus, by exchanging k and l, the larger of their two latenesses does not increase, and the lateness of the other tasks is unchanged. We can continue with the next pair of tasks which does not follow the EDF order; after iterating this process, we end up with an EDF schedule whose L_max is optimal.

2.2.2 Moore-Hodgson algorithm for 1 | | sum U_i

Example:

  job  1  2  3  4  5
  d_j  6  7  8  9  11
  p_j  4  3  2  5  6

Tasks are sorted by non-decreasing d_i: d_1 <= ... <= d_n. Let p(A) denote the total length of the tasks in A.
1. A := ∅
2. For i = 1 ... n:
   (a) If p(A) + p_i <= d_i, then A := A ∪ {i}
   (b) Otherwise:
       i. Let j be the longest task in A ∪ {i}
       ii. A := A ∪ {i} \ {j}

Optimal solution on the example: A = {2, 3, 5}.

Proof. Feasibility: we first prove that the algorithm produces a feasible schedule, that is, all tasks of A can be scheduled in this (sorted) order without missing their due dates. By induction:
- if no task is ever rejected, the test in step (a) guarantees feasibility;
- assume that A is feasible, and prove that A ∪ {i} \ {j} is feasible too:
  - tasks of A before j: no change;
  - tasks of A after j: their completion times can only decrease;
  - task i: let k be the last task in A; we have p(A) <= d_k and, since task j is the longest, p_i <= p_j, thus p(A ∪ {i} \ {j}) <= p(A) <= d_k <= d_i (because tasks are sorted). That is, the new task i terminates no later than k did before j was rejected; since d_i >= d_k, this is enough.

Optimality:
- Assume there exists an optimal set O different from the set A_f output by the Moore-Hodgson algorithm.
- Let j be the first task rejected by the algorithm which is included in O (such a task exists, since O is different from A_f and the cardinality of O is larger than or equal to that of A_f).
- Consider the set A at the moment when task j is rejected, and let i be the task being added at this moment.
- A ∪ {i} is not feasible, thus O does not contain all of A ∪ {i}.
- Let k be a task of A ∪ {i} which is not in O.
- Since the algorithm rejects the longest task, p_k <= p_j, hence p(O ∪ {k} \ {j}) <= p(O).
- In the solution O, we can therefore replace j by k without increasing any completion time.
We can repeat this process until we obtain the set of tasks scheduled by the algorithm, which is therefore also optimal.
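The two due-date algorithms above can be sketched as follows, on the example instance of 2.2.2 (a sketch; the function names, the (p_j, d_j) tuple encoding, and the 1-based job indices are my conventions, not from the lecture):

```python
def edf_lmax(jobs):
    """jobs: list of (p_j, d_j). Return L_max of the EDF order."""
    t, lmax = 0, float("-inf")
    for p, d in sorted(jobs, key=lambda job: job[1]):  # earliest due date first
        t += p                  # completion time of this job
        lmax = max(lmax, t - d)
    return lmax

def moore_hodgson(jobs):
    """jobs: list of (p_j, d_j). Return the 1-based indices (in due-date
    order) of the on-time jobs; the rejected jobs are scheduled last."""
    order = sorted(range(len(jobs)), key=lambda i: jobs[i][1])
    A, total = [], 0            # accepted jobs as (p_j, index); total = p(A)
    for rank, i in enumerate(order, start=1):
        p, d = jobs[i]
        A.append((p, rank))
        total += p
        if total > d:           # due date missed: reject the longest job
            longest = max(A)
            A.remove(longest)
            total -= longest[0]
    return sorted(rank for _, rank in A)

jobs = [(4, 6), (3, 7), (2, 8), (5, 9), (6, 11)]  # (p_j, d_j), example above
print(moore_hodgson(jobs))  # [2, 3, 5]: three jobs on time, sum U_j = 2
print(edf_lmax(jobs))       # 9
```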

2.3 Adding release dates: 1 | r_i | sum C_i

All jobs are not released at the same time: job j becomes available at time r_j.
- 1 | r_i | sum C_i is NP-complete (admitted).
- When adding preemption, we can derive an optimal algorithm for 1 | r_i, pmtn | sum C_i, which serves as a basis for a 2-approximation algorithm for the initial problem.

Theorem 2. Shortest Remaining Processing Time first (SRPT) is optimal for 1 | r_i, pmtn | sum C_i.

Proof. Let us consider an optimal schedule S which does not follow the SRPT rule. This means that at a given time t, a task k is being processed whereas there exists another available task l with x_l < x_k (x_i denotes the remaining work of task i at time t). Starting at time t, a total time x_k + x_l must be devoted to the pair of tasks k, l. We modify the schedule so that the first x_l units of this time are devoted to l and the remaining x_k units to k, and we denote by S' the modified schedule, with completion times C'_i. Since x_l < x_k, l finishes in S' strictly earlier than k did in S: C'_l < C_k. Moreover, C'_l < C_l, since at time t, S processes task k while S' processes l. Thus C'_l < min{C_k, C_l}. Finally, C'_k = max{C_k, C_l} (the pair occupies the same time slots as before), and nothing changes for the other tasks. Thus the objective strictly decreases in S', a contradiction.

The algorithm A for the original problem (without preemption) is the following:
1. Solve the relaxed version with preemption using SRPT.
2. Sort the tasks by increasing completion time in this preemptive solution: C^P_1 < C^P_2 < ... < C^P_n.
3. Process the tasks in this order, without preemption; if the next task is not yet available, wait for its release.

Definition 3. Consider a minimization problem, and let OPT(x) denote the optimal value of the objective on instance x. An algorithm A is a ρ-approximation algorithm if and only if, for any instance x, A(x) <= ρ OPT(x).

Theorem 4. The algorithm A proposed above is a 2-approximation algorithm for 1 | r_i | sum C_i.

Proof. Instead of comparing directly to the optimal solution (which we do not know), we compare to a lower bound on this optimal solution: sum C*_i >= sum C^P_i, since any solution of the problem without preemption is also a solution of the problem with preemption. We will now prove that, for each task j, C_j <= 2 C^P_j. Let t_j denote the time when j is started by A. We say the machine is idle at time t if it is not processing any task at time t; in our algorithm, it can only be idle when the next job to process has not yet been released. Let idle(t) denote the total idle time of the machine before time t. Note that C_j = sum_{i <= j} p_i + idle(t_j).

First, sum_{i <= j} p_i <= C^P_j, since tasks 1, ..., j are all completed before C^P_j in the preemptive schedule (the tasks complete in the same order). Second, we prove that there is no idle time from C^P_j to t_j, so that idle(t_j) <= C^P_j. By contradiction, assume that the machine is idle at some time t in [C^P_j, t_j]. This is possible only if the machine is waiting for some task k, to be processed before j, which has not been released yet (r_k > t >= C^P_j). However, the reason for task k being processed before j is that C^P_k < C^P_j. Thus C^P_k < r_k, which is impossible: a task cannot complete before its release date. Finally, C_j = sum_{i <= j} p_i + idle(t_j) <= 2 C^P_j, and A is a 2-approximation algorithm.
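The SRPT rule and the 2-approximation built on it can be sketched with a unit-step simulation (valid for integer release dates and lengths); the 3-job instance and all names here are illustrative, not from the lecture:

```python
def srpt_completion_times(jobs):
    """jobs: list of (r_j, p_j). Simulate preemptive SRPT in unit time
    steps and return the completion times C^P_j."""
    n = len(jobs)
    remaining = [p for _, p in jobs]
    C = [0] * n
    t, done = 0, 0
    while done < n:
        avail = [i for i in range(n) if jobs[i][0] <= t and remaining[i] > 0]
        if not avail:  # machine idle: jump to the next release date
            t = min(jobs[i][0] for i in range(n) if remaining[i] > 0)
            continue
        i = min(avail, key=lambda i: remaining[i])  # shortest remaining work
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            C[i] = t
            done += 1
    return C

def two_approx_completion_times(jobs):
    """Algorithm A: process the jobs non-preemptively in the order of their
    SRPT completion times, waiting for a release date when necessary."""
    CP = srpt_completion_times(jobs)
    order = sorted(range(len(jobs)), key=lambda i: CP[i])
    t, C = 0, [0] * len(jobs)
    for i in order:
        t = max(t, jobs[i][0]) + jobs[i][1]
        C[i] = t
    return C

jobs = [(0, 5), (1, 1), (3, 1)]   # (release date r_j, length p_j)
CP = srpt_completion_times(jobs)
C = two_approx_completion_times(jobs)
print(CP, sum(CP))  # [7, 2, 4] 13 -- preemptive lower bound
print(C, sum(C))    # [9, 2, 4] 15 -- within a factor 2 of the bound
```

On this instance, the short jobs preempt the long one in the SRPT schedule, and the non-preemptive schedule keeps their completion order, as in the proof of Theorem 4.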