Schedulability of Periodic and Sporadic Task Sets on Uniprocessor Systems

Schedulability of Periodic and Sporadic Task Sets on Uniprocessor Systems
Jan Reineke, Saarland University
July 4, 2013
With thanks to Jian-Jia Chen!

Task Models and Scheduling
[Course roadmap: Scheduling Theory in Real-Time Systems — Uniprocessor Systems (Task Models and Scheduling, Schedulability Analysis, Resource Sharing and Servers) and Multiprocessor Systems (Partitioned Scheduling, Semi-Partitioned Scheduling, Global Scheduling, Resource Sharing)]

Recurrent Task Models (revisited)
When jobs (usually with the same computation requirement) are released recurrently, these jobs can be modeled by a recurrent task.
Periodic task τ_i:
- A job is released exactly and periodically with period T_i.
- A phase φ_i indicates when the first job is released.
- A relative deadline D_i applies to each job of task τ_i.
- (φ_i, C_i, T_i, D_i) is the specification of periodic task τ_i, where C_i is the worst-case execution time. When φ_i is omitted, we assume φ_i = 0.
Sporadic task τ_i:
- T_i is the minimal time between any two consecutive job releases.
- A relative deadline D_i applies to each job of task τ_i.
- (C_i, T_i, D_i) is the specification of sporadic task τ_i, where C_i is the worst-case execution time.
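
To make the notation concrete, here is a minimal sketch (in Python, not from the slides) of the two task specifications; the class and field names are illustrative choices.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PeriodicTask:
    """Periodic task (phi, C, T, D): phase, WCET, period, relative deadline."""
    C: float           # worst-case execution time C_i
    T: float           # period T_i
    D: float           # relative deadline D_i
    phi: float = 0.0   # phase; 0 when omitted, as on the slides

@dataclass(frozen=True)
class SporadicTask:
    """Sporadic task (C, T, D): T is the minimal inter-release time."""
    C: float
    T: float
    D: float

# Example: the three implicit-deadline tasks used in the RM example later on.
tasks = [PeriodicTask(1, 6, 6), PeriodicTask(2, 8, 8), PeriodicTask(4, 12, 12)]
```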

Relative Deadline vs Period (revisited)
For a task set, we say that the task set has
- implicit deadlines when the relative deadline D_i is equal to the period T_i, i.e., D_i = T_i, for every task τ_i,
- constrained deadlines when the relative deadline D_i is no more than the period T_i, i.e., D_i ≤ T_i, for every task τ_i, or
- arbitrary deadlines when the relative deadline D_i may be larger than the period T_i for some task τ_i.

Some Definitions for Sporadic/Periodic Tasks
The jobs of task τ_i are denoted J_{i,1}, J_{i,2}, ...
Periodic tasks:
- Synchronous system: each task has a phase of 0.
- Asynchronous system: phases are arbitrary.
Hyperperiod: least common multiple (LCM) of the periods T_i.
Task utilization of task τ_i: u_i := C_i / T_i.
System (total) utilization: U(T) := Σ_{τ_i ∈ T} u_i.
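
The hyperperiod and utilization definitions translate directly into code; a small self-contained sketch (task parameters are just integer (C, T) pairs here):

```python
from math import lcm            # Python 3.9+
from fractions import Fraction

def hyperperiod(periods):
    """Least common multiple of the (integer) periods T_i."""
    return lcm(*periods)

def utilization(tasks):
    """Total utilization U(T) = sum over tasks of C_i / T_i."""
    return sum(Fraction(c, t) for c, t in tasks)

tasks = [(1, 6), (2, 8), (4, 12)]          # (C_i, T_i) pairs from the RM example
print(hyperperiod([t for _, t in tasks]))  # 24
print(float(utilization(tasks)))           # 1/6 + 1/4 + 1/3 = 0.75
```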

Outline
1. Schedulability for Static-Priority Scheduling
   - Utilization-Based Analysis (Relative Deadline = Period)
   - Demand-Based Analysis
2. Schedulability for Dynamic-Priority Scheduling

Static-Priority Scheduling
Different jobs of a task are assigned the same priority.
- π_i is the priority of task τ_i.
- HP_i is the subset of tasks with higher priority than τ_i.
- Note: we will assume that no two tasks have the same priority.
We will implicitly index tasks in decreasing priority order, i.e., τ_i has higher priority than τ_k if i < k.
Which strategy is better or the best?
- Largest execution time first?
- Shortest job first?
- Least utilization first?
- Most important first?
- Least period first?

Rate-Monotonic (RM) Scheduling (Liu and Layland, 1973)
Priority definition: a task with a smaller period has higher priority; ties are broken arbitrarily.
Example schedule [(C_i, T_i, D_i)]: τ_1 = (1, 6, 6), τ_2 = (2, 8, 8), τ_3 = (4, 12, 12).
[Gantt chart: the RM schedule of τ_1, τ_2, τ_3 over the interval [0, 24]]
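
In code, RM priority assignment is simply a sort by period; a minimal illustrative sketch applied to the example set:

```python
def rm_priority_order(tasks):
    """Sort tasks by period (smaller period = higher priority).
    Ties are broken arbitrarily, here by the original list order."""
    return sorted(tasks, key=lambda t: t[1])   # t = (C_i, T_i, D_i)

tasks = [(1, 6, 6), (2, 8, 8), (4, 12, 12)]
print(rm_priority_order(tasks))  # already in RM order: periods 6 < 8 < 12
```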

Liu and Layland (Journal of the ACM, 1973)

Deadline-Monotonic (DM) Scheduling (Leung and Whitehead)
Priority definition: a task with a smaller relative deadline has higher priority; ties are broken arbitrarily.
Example schedule [(C_i, T_i, D_i)]: τ_1 = (2, 8, 4), τ_2 = (1, 6, 6), τ_3 = (4, 12, 12).
[Gantt chart: the DM schedule of τ_1, τ_2, τ_3 over the interval [0, 24]]
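
DM differs only in the sort key (relative deadline instead of period); the same kind of sketch for the DM example set:

```python
def dm_priority_order(tasks):
    """Sort tasks by relative deadline (smaller D_i = higher priority)."""
    return sorted(tasks, key=lambda t: t[2])   # t = (C_i, T_i, D_i)

tasks = [(2, 8, 4), (1, 6, 6), (4, 12, 12)]
print(dm_priority_order(tasks))  # deadlines 4 < 6 < 12, so tau_1, tau_2, tau_3
```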

Optimality (or not) of RM and DM
Example schedule: τ_1 = (2, 4, 4), τ_2 = (5, 10, 10).
[Gantt chart: the jobs of τ_1 and τ_2 over the interval [0, 20] under a schedule that meets all deadlines]
The above system is schedulable (the chart shows a schedule in which all deadlines are met). However, a deadline will be missed regardless of how we choose to (statically) prioritize τ_1 and τ_2. Hence, no static-priority scheme is optimal for scheduling periodic tasks.
Corollary: Neither RM nor DM is optimal.
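
The deadline misses claimed above can be replayed with a tiny discrete-time simulator. The sketch below (illustrative, not from the slides; unit-size time steps, synchronous release, implicit deadlines) tries both static priority orders for τ_1 = (2, 4, 4) and τ_2 = (5, 10, 10):

```python
def simulate_fixed_priority(tasks, horizon):
    """Preemptive fixed-priority simulation with unit time steps.
    tasks: list of (C, T) with implicit deadlines, listed in decreasing priority.
    Returns the first (task_index, deadline) that is missed, or None."""
    remaining = [0] * len(tasks)               # unfinished work per task
    for t in range(horizon):
        for i, (c, p) in enumerate(tasks):
            if t % p == 0:                     # new job released at time t
                if remaining[i] > 0:           # previous job still unfinished
                    return (i, t)
                remaining[i] = c
        for i in range(len(tasks)):            # run highest-priority pending task
            if remaining[i] > 0:
                remaining[i] -= 1
                break
        for i, (c, p) in enumerate(tasks):     # check deadlines at period boundaries
            if (t + 1) % p == 0 and remaining[i] > 0:
                return (i, t + 1)
    return None

t1, t2 = (2, 4), (5, 10)
print(simulate_fixed_priority([t1, t2], 20))   # tau_1 high: tau_2 misses at time 10
print(simulate_fixed_priority([t2, t1], 20))   # tau_2 high: tau_1 misses at time 4
```

Either ordering reports a miss, matching the corollary on the slide.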

Aside: Critical Instants
Definition: A critical instant of a task τ_i is a time instant such that:
1. the job of τ_i released at this instant has the maximum response time of all jobs in τ_i, if the response time of every job of τ_i is at most D_i, the relative deadline of τ_i, and
2. the response time of the job released at this instant is greater than D_i if the response time of some job in τ_i exceeds D_i.
Informally, a critical instant of τ_i represents a worst-case scenario from τ_i's standpoint.

Critical Instants in Static-Priority Systems
Theorem [Liu and Layland, JACM 1973]: For a set of independent, preemptable periodic tasks with relative deadlines equal to their respective periods, a critical instant of task τ_i occurs when the first job of τ_i is released at the same time as the first jobs of all the higher-priority tasks.
We are not saying that τ_1, ..., τ_i will all necessarily release their first jobs at the same time, but if this does happen, we are claiming that the time of release will be a critical instant for task τ_i.
Task utilization: u_i = C_i / T_i. System (total) utilization: U(T) = Σ_{τ_i ∈ T} C_i / T_i.
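
The demand-based analysis mentioned in the outline builds on this theorem: it suffices to bound the response time of the job released at a critical instant, via the standard recurrence R_i = C_i + Σ_{j<i} ⌈R_i / T_j⌉ · C_j. A minimal sketch (not from these slides; tasks given as (C, T) pairs in decreasing-priority order):

```python
from math import ceil

def response_time(tasks, i, limit=10**6):
    """Worst-case response time of task i (0-indexed) released at a critical
    instant, for independent preemptable tasks listed in decreasing priority.
    tasks: list of (C, T). Returns None if the recurrence exceeds D_i = T_i."""
    c_i, t_i = tasks[i]
    r = c_i
    while r <= t_i and r < limit:
        nxt = c_i + sum(ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
        if nxt == r:
            return r
        r = nxt
    return None   # deadline (= period) exceeded

tasks = [(1, 6), (2, 8), (4, 12)]                     # RM-ordered example from earlier
print([response_time(tasks, i) for i in range(3)])    # [1, 3, 8]
```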

Critical Instants: Informal Proof
[Gantt chart: jobs of τ_1, ..., τ_4 around the instants t_1, t_0, and t_R]
We will show that shifting the release times of the tasks so that they coincide does not decrease the response time of task τ_i.
- Consider a job J of τ_i, released at time t_0, with completion time t_R. Let t_1 be the latest idle instant for τ_1, ..., τ_{i−1} at or before t_R.
- Moving J from t_0 to t_1 does not decrease the completion time of J.
- Releasing τ_1 at t_1 does not decrease the completion time of J.
- Releasing τ_2 at t_1 does not decrease the completion time of J.
- Repeating this movement for the remaining higher-priority tasks proves the criticality of the critical instant.

Harmonic Real-Time Systems
Definition: A system of periodic tasks is harmonic (also: simply periodic) if for every pair of tasks τ_i and τ_k in the system with T_i < T_k, T_k is an integer multiple of T_i. For example: the periods 2, 6, 12, 24.
Theorem [Kuo and Mok]: A system T of harmonic, independent, preemptable, and implicit-deadline tasks is schedulable on one processor according to the RM algorithm if and only if its total utilization U = Σ_{τ_j ∈ T} C_j / T_j is less than or equal to one.
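
The Kuo & Mok test is exact for harmonic sets and trivial to implement; a minimal sketch assuming integer task parameters given as (C, T) pairs:

```python
from fractions import Fraction

def is_harmonic(periods):
    """True iff for every pair T_i < T_k, T_k is an integer multiple of T_i."""
    ps = sorted(periods)
    return all(ps[k] % ps[i] == 0
               for i in range(len(ps)) for k in range(i + 1, len(ps)))

def rm_schedulable_harmonic(tasks):
    """Kuo & Mok: for harmonic implicit-deadline tasks, RM-schedulable iff U <= 1."""
    assert is_harmonic([t for _, t in tasks])
    return sum(Fraction(c, t) for c, t in tasks) <= 1

print(rm_schedulable_harmonic([(1, 2), (1, 6), (2, 12), (4, 24)]))  # U = 1 -> True
```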

Proof for Harmonic Systems
We prove the "if" part by contradiction; the "only if" part is similar and left as an exercise.
1. Suppose that T is not schedulable and τ_i misses its deadline.
2. Then the response time of τ_i is larger than D_i.
3. By the critical-instant theorem, releasing all the tasks τ_1, τ_2, ..., τ_i at time 0 will lead to a response time of τ_i larger than D_i.

Proof for Harmonic Systems (cont.)
As the schedule is work-conserving, we know that from time 0 to time D_i the whole system is executing jobs. Therefore,
  D_i < workload released in the time interval [0, D_i)
      = Σ_{j=1}^{i} C_j · (number of job releases of τ_j in [0, D_i))
      = Σ_{j=1}^{i} C_j · ⌈D_i / T_j⌉
      = Σ_{j=1}^{i} C_j · D_i / T_j,
where the last equality holds because D_i = T_i is an integer multiple of T_j for j ≤ i. By canceling D_i, we reach a contradiction:
  1 < Σ_{j=1}^{i} C_j / T_j ≤ Σ_{τ_j ∈ T} C_j / T_j ≤ 1.

Optimality Among Static-Priority Algorithms
Theorem: A system T of independent, preemptable, synchronous periodic tasks that have relative deadlines equal to their respective periods can be feasibly scheduled on one processor according to the RM algorithm whenever it can be feasibly scheduled according to any static-priority algorithm.
We will only discuss systems with 2 tasks; the generalization is left as an exercise.
Suppose that T_1 = D_1 < D_2 = T_2 and τ_2 has higher priority. We would like to swap the priorities of τ_1 and τ_2.
- After the priority swap, the response time of τ_1 is always equal to (or no more than) C_1.
- By the critical-instant theorem, we only need to check the response time of the first job of τ_2 released at a critical instant.
- Assuming that the non-RM priority ordering is schedulable, the critical-instant theorem also implies that C_1 + C_2 ≤ T_1.

Optimality Among Static-Priority Algorithms (cont.)
After swapping (τ_1 has higher priority), there are two cases:
Case 1: There is sufficient time to complete all jobs of τ_1 released within [0, T_2) before the second job of τ_2 arrives, where F = ⌊T_2 / T_1⌋. In other words, C_1 + F·T_1 < T_2.
[Timeline: releases of τ_1 up to F·T_1 and the first job of τ_2]
To be schedulable, (F + 1)·C_1 + C_2 ≤ T_2 must hold. By C_1 + C_2 ≤ T_1 and F ≥ 1, we have
  F·(C_1 + C_2) ≤ F·T_1  ⟹  F·C_1 + C_2 ≤ F·T_1
  ⟹  (F + 1)·C_1 + C_2 ≤ F·T_1 + C_1  ⟹  (F + 1)·C_1 + C_2 < T_2.

Optimality Among Static-Priority Algorithms (cont.)
After swapping (τ_1 has higher priority), there are two cases:
Case 2: The last job of τ_1 released within [0, T_2) does not complete before the arrival of the second job of τ_2, where F = ⌊T_2 / T_1⌋. In other words, C_1 + F·T_1 ≥ T_2.
[Timeline: releases of τ_1 up to F·T_1 and the first job of τ_2]
To be schedulable, F·C_1 + C_2 ≤ F·T_1 must hold. By C_1 + C_2 ≤ T_1 and F ≥ 1, we have
  F·(C_1 + C_2) ≤ F·T_1  ⟹  F·C_1 + C_2 ≤ F·T_1.
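
The case analysis can be checked mechanically. The sketch below (illustrative, not from the slides) takes a two-task implicit-deadline set with T_1 < T_2 satisfying the precondition C_1 + C_2 ≤ T_1 (the non-RM ordering is schedulable) and verifies the RM-order condition of whichever case applies; a random sampling confirms the implication.

```python
import math
import random

def rm_condition_after_swap(c1, t1, c2, t2):
    """Case split from the slides for two implicit-deadline tasks with T1 < T2,
    assuming the non-RM ordering was schedulable, i.e., C1 + C2 <= T1."""
    assert t1 < t2 and c1 + c2 <= t1, "precondition of the swap argument"
    f = math.floor(t2 / t1)              # F = floor(T2 / T1)
    if c1 + f * t1 < t2:                 # Case 1: last job of tau_1 completes before T2
        return (f + 1) * c1 + c2 <= t2
    else:                                # Case 2: it does not
        return f * c1 + c2 <= f * t1

# Random sampling (hypothetical parameters) of the precondition region:
for _ in range(10000):
    t1 = random.uniform(1.0, 10.0)
    t2 = random.uniform(1.01 * t1, 4.0 * t1)
    c1 = random.uniform(0.0, t1)
    c2 = random.uniform(0.0, t1 - c1)
    assert rm_condition_after_swap(c1, t1, c2, t2)
print("RM ordering satisfied the required condition in all sampled cases")
```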

Remarks on the Optimality
We have shown that if any two-task system with implicit deadlines (D_i = T_i) is schedulable according to an arbitrary fixed-priority assignment, then it is also schedulable according to RM.
Exercise: complete the proof by extending the argument to n periodic tasks.
Note: when D_i ≤ T_i for all tasks, DM (Deadline Monotonic) can be shown to be an optimal static-priority algorithm using a similar argument. The proof is left as an exercise.

Utilization-Based Schedulability Test
Task utilization: u_i := C_i / T_i. System (total) utilization: U(T) := Σ_{τ_i ∈ T} C_i / T_i.
A task system T fully utilizes the processor under scheduling algorithm A if any increase in execution time (of any task) causes A to miss a deadline.
UB(A) is the utilization bound for algorithm A:
  UB(A) := sup { U ∈ ℝ | ∀T. U(T) ≤ U ⟹ T is schedulable under A }
         = inf { U(T) | T fully utilizes the processor under A }

What is UB(A) useful for?
[Figure: task sets T_1, T_2, ..., placed by their utilizations U(T_k) on the interval from 0 to beyond 1; utilizations up to UB(A) are feasible, utilizations above 1 are infeasible, and the region in between is unsure.]

Liu and Layland Bound
Theorem [Liu and Layland]: A set of n independent, preemptable periodic tasks with relative deadlines equal to their respective periods can be scheduled on a processor according to the RM algorithm if its total utilization U is at most n·(2^(1/n) − 1). In other words, UB(RM, n) = n·(2^(1/n) − 1) ≥ 0.693.
   n   UB(RM, n)
   2   0.828
   3   0.779
   4   0.756
   5   0.743
   6   0.734
   7   0.728
   8   0.724
   9   0.720
  10   0.717
   ∞   ln 2 ≈ 0.693
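
The bound and the table are easy to reproduce; a small sketch of the resulting sufficient (but not necessary) utilization test, with illustrative names:

```python
from math import log

def ub_rm(n: int) -> float:
    """Liu & Layland utilization bound for RM with n implicit-deadline tasks."""
    return n * (2 ** (1 / n) - 1)

def rm_utilization_test(tasks) -> bool:
    """Sufficient schedulability test: U <= n(2^(1/n) - 1).
    tasks: list of (C, T) pairs with D_i = T_i."""
    return sum(c / t for c, t in tasks) <= ub_rm(len(tasks))

for n in range(2, 11):
    print(n, round(ub_rm(n), 3))      # reproduces the table: 0.828, 0.779, ...
print("limit:", round(log(2), 3))     # ln 2 ~= 0.693
```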

Utilization Bound in Terms of n
[Plot: the least upper bound UB(RM, n) = n·(2^(1/n) − 1) as a function of n, for n up to 100, decreasing toward ln 2 ≈ 0.693]

Proof Sketch for UB(RM, n)
Note: The original proof of this theorem by Liu and Layland is not correct. For a corrected proof, see R. Devillers & J. Goossens at http://www.ulb.ac.be/di/ssd/goossens/lub.ps. Note that the proof we present is a bit different from the one presented in Buttazzo's textbook.
Without loss of generality (why?), we will only consider task sets with distinct periods, i.e., T_1 < T_2 < ... < T_n.
We will present our proof sketch in two parts:
1. First, we consider the special case where T_n ≤ 2·T_1.
2. Second, we show how to relax this constraint.

Proof Sketch: Difficult-To-Schedule
Definition: A task set T_n is called difficult-to-schedule under scheduling algorithm A if T_n fully utilizes the processor when scheduled under A.
Our strategy:
- We seek the most difficult-to-schedule task set T_n* for n tasks: a difficult-to-schedule task set with minimal utilization.
- We derive a tight lower bound on the utilization of the most difficult-to-schedule task set T_n* in terms of n; this is UB(RM, n).

Proof Sketch: Three Steps
1. For a task set with given periods, define the execution times of the most difficult-to-schedule task set.
2. Show that any difficult-to-schedule task set whose execution times differ from those in step 1 has a greater utilization than the most difficult-to-schedule task set provided in step 1.
3. Compute a closed-form expression for UB(RM, n).

Step 1: Execution Times
Suppose T_n* is a task set with T_n ≤ 2·T_1 and
  C_k = T_{k+1} − T_k   for k = 1, 2, ..., n−1,
  C_n = T_n − 2·Σ_{k=1}^{n−1} C_k = 2·T_1 − T_n.   (Because Σ_{k=1}^{n−1} C_k = T_n − T_1.)
[Timeline: releases and executions of τ_1, τ_2, τ_3, ..., τ_{n−1}, τ_n over [0, T_n]]
Theorem: Such a task set T_n* is the most difficult-to-schedule task set.
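
The construction in step 1 is easy to reproduce; a sketch that builds the execution times for given periods and checks the utilization at the bound-attaining period ratios (chosen here as x_i = 2^(1/n)):

```python
def most_difficult_to_schedule(periods):
    """Execution times of the extremal task set from the slides, for sorted
    periods T_1 < ... < T_n with T_n <= 2*T_1:
    C_k = T_{k+1} - T_k for k < n, and C_n = 2*T_1 - T_n."""
    T = sorted(periods)
    assert T[-1] <= 2 * T[0]
    C = [T[k + 1] - T[k] for k in range(len(T) - 1)] + [2 * T[0] - T[-1]]
    return list(zip(C, T))

tasks = most_difficult_to_schedule([1.0, 2 ** (1/3), 2 ** (2/3)])  # ratios 2^(1/n), n = 3
U = sum(c / t for c, t in tasks)
print(U, 3 * (2 ** (1/3) - 1))   # both ~= 0.7798: utilization meets the bound
```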

Proof of the Most Difficult-to-Schedule Statement
Proof strategy: We show that the difficult-to-schedule task set given on the previous slide can be transformed into any other difficult-to-schedule task set without decreasing the task set's utilization:
- Starting with the highest-priority task and working our way down to the lowest-priority task, we incrementally modify the execution times of T_n* to match any other difficult-to-schedule task set, and for each modification the utilization does not decrease.
- If we need to increase (decrease) the execution time of task τ_i, then we compensate for this by decreasing (increasing) the execution time of some task τ_k, where k > i.

Proof of the Most Difficult-to-Schedule: Increase by Δ
Let's increase the execution time of some task τ_i (i < n) by Δ > 0:
  C_i' = T_{i+1} − T_i + Δ = C_i + Δ.
To keep the processor busy up to T_n, we can decrease the execution time of a task τ_k (k > i) by Δ:
  C_k' = C_k − Δ.
Since T_i < T_k, the utilization of the new task set T_n' is no less than that of the original task set T_n*:
  U(T_n') − U(T_n*) = Δ/T_i − Δ/T_k ≥ 0.

Proof of the Most Difficult-to-Schedule: Decrease by Δ
Let's decrease the execution time of some task τ_i (i < n) by Δ > 0:
  C_i' = T_{i+1} − T_i − Δ = C_i − Δ.
To keep the processor busy up to T_n, we can increase the execution time of a task τ_k (k > i) by 2Δ. (Why 2Δ?)
  C_k' = C_k + 2Δ.
Since T_k ≤ 2·T_i, the utilization of the new task set T_n' is no less than that of the original task set T_n*:
  U(T_n') − U(T_n*) = −Δ/T_i + 2Δ/T_k ≥ 0.

Calculating UB(RM, n) = The Utilization of the Most Difficult-to-Schedule Task Set
Let x_i := T_{i+1} / T_i for i = 1, ..., n−1. Utilization of T_n* for given task periods:
  U(T_n*) = (2·T_1 − T_n)/T_n + Σ_{i=1}^{n−1} (T_{i+1} − T_i)/T_i
          = 2·T_1/T_n + (Σ_{i=1}^{n−1} x_i) − n
          = 2/(∏_{i=1}^{n−1} x_i) + (Σ_{i=1}^{n−1} x_i) − n.
Which task periods minimize the utilization? By taking partial derivatives of U(T_n*) with respect to the variables, we know that U(T_n*) is minimized when
  ∂U(T_n*)/∂x_k = 1 − [2·(∏_{i=1}^{n−1} x_i)/x_k] / (∏_{i=1}^{n−1} x_i)² = 1 − 2/(x_k · ∏_{i=1}^{n−1} x_i) = 0,   for all k = 1, 2, ..., n−1.
Therefore, all x_i need to be equal: x_1 = x_2 = ... = x_{n−1}, with
  x_k · ∏_{i=1}^{n−1} x_i = 2  ⟹  x_k^n = 2  ⟹  x_k = 2^(1/n).

Calculating UB(RM, n) (cont.)
By substituting x_i = 2^(1/n) into our utilization formula, we obtain U(T_n*) = n·(2^(1/n) − 1), the minimal utilization of a difficult-to-schedule task set with T_n ≤ 2·T_1.
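
The minimization step can be sanity-checked numerically: evaluate U(T_n*) = 2/∏x_i + Σx_i − n on a grid of period ratios and confirm that nothing beats the equal ratios x_i = 2^(1/n). A small sketch (n = 4 chosen arbitrarily):

```python
import itertools
import math

def u_extremal(x):
    """U(T_n*) = 2 / prod(x_i) + sum(x_i) - n for ratios x_i = T_{i+1}/T_i."""
    n = len(x) + 1
    return 2 / math.prod(x) + sum(x) - n

n = 4
x_opt = [2 ** (1 / n)] * (n - 1)
u_min = u_extremal(x_opt)
print(u_min, n * (2 ** (1 / n) - 1))        # both ~= 0.7568

# Grid of ratio vectors in [1, 2]^(n-1): none falls below the claimed minimum.
grid = [1.0 + 0.05 * k for k in range(21)]
assert all(u_extremal(list(x)) >= u_min - 1e-12
           for x in itertools.product(grid, repeat=n - 1))
print("no ratio vector in the grid beats x_i = 2^(1/n)")
```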

Removing the Constraint T_n ≤ 2·T_1
Strategy: Suppose that T_n is a difficult-to-schedule task set with n tasks. We are going to transform any difficult-to-schedule task set T_n with n tasks and T_n > 2·T_i for some i into T_n' such that
1. the period T_n in T_n' is no more than twice the period T_1 in T_n', and
2. U(T_n) ≥ U(T_n').
Transform T_n step by step to get T_n':
- Find a task τ_k in task set T_n with ℓ·T_k < T_n ≤ (ℓ + 1)·T_k, where ℓ is an integer that is at least 2.
- Create a task τ_n' whose period is T_n and whose execution time C_n' is C_n + (ℓ − 1)·C_k.
- Create a task τ_k' whose period is ℓ·T_k and whose execution time C_k' is C_k.
- Let T_n' be T_n \ {τ_k, τ_n} ∪ {τ_k', τ_n'}.

Removing the Constraint T_n ≤ 2·T_1 (cont.)
[Timeline: jobs of τ_k and τ_n versus jobs of τ_k' and τ_n', with ℓ·T_k and T_n marked]
Conditions:
1. ℓ·T_k < T_n ≤ (ℓ + 1)·T_k with ℓ ≥ 2.
2. T_n' = T_n, C_n' = C_n + (ℓ − 1)·C_k, T_k' = ℓ·T_k, and C_k' = C_k.
Result: since ℓ·T_k < T_n, we know
  U(T_n) − U(T_n') = (C_n/T_n + C_k/T_k) − ((C_n + (ℓ − 1)·C_k)/T_n + C_k/(ℓ·T_k))
                   = (ℓ − 1)·C_k · (1/(ℓ·T_k) − 1/T_n) > 0.

Concluding the RM Analysis
Moreover, T_n' above is also a difficult-to-schedule task set. By repeating the above procedure, we can transform any such task set into a task set T_n' with T_n' ≤ 2·T_1' without increasing its utilization. This concludes the proof of the following theorem:
Theorem [Liu and Layland, JACM 1973]: A set of n independent, preemptable periodic tasks with relative deadlines equal to their respective periods can be scheduled on a processor according to the RM algorithm if its total utilization U is at most n·(2^(1/n) − 1). In other words, UB(RM, n) = n·(2^(1/n) − 1) ≥ 0.693.
  lim_{n→∞} UB(RM, n) = lim_{n→∞} n·(2^(1/n) − 1) = ln 2 ≈ 0.693.

Remarks on Harmonic Task Sets
If the total utilization is larger than 0.693 but less than or equal to 1, the utilization-bound schedulability test can guarantee neither schedulability nor unschedulability. Sometimes we can manipulate the periods so that the new task set is harmonic, and the exact utilization test for harmonic task sets can then be applied, as sketched below.
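
One simple way to "harmonize" periods (a folklore period-transformation heuristic, not from the slides) is to round every period down to T_min times a power of two. Shrinking periods only adds load, so if the rounded set passes the harmonic test U ≤ 1, the original implicit-deadline set is also RM-schedulable; a hedged sketch with illustrative parameters:

```python
def harmonize_down(periods):
    """Round each period down to T_min * 2^k; the result is harmonic by
    construction. Shrinking periods never makes the workload easier, so
    passing the U <= 1 harmonic test with the rounded periods is a
    sufficient condition for RM-schedulability of the original set."""
    base = min(periods)
    out = []
    for t in periods:
        k = 0
        while base * 2 ** (k + 1) <= t:
            k += 1
        out.append(base * 2 ** k)
    return out

tasks = [(2, 4), (2, 8.5), (4, 17.5)]               # U ~= 0.964 > 0.78: bound test inconclusive
new_periods = harmonize_down([t for _, t in tasks]) # [4, 8, 16]
U_harm = sum(c / t for (c, _), t in zip(tasks, new_periods))
print(new_periods, U_harm, U_harm <= 1)             # 2/4 + 2/8 + 4/16 = 1.0 -> RM-schedulable
```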