Analysis Techniques for Supporting Hard Real-Time Sporadic Gang Task Systems


2017 IEEE Real-Time Systems Symposium

Analysis Techniques for Supporting Hard Real-Time Sporadic Gang Task Systems

Zheng Dong and Cong Liu
Department of Computer Science, University of Texas at Dallas

Abstract. This paper studies the problem of scheduling hard real-time sporadic gang task systems under global earliest-deadline-first, where a gang application's threads need to be concurrently scheduled on distinct processors. A novel approach combining new lag-based reasoning with an executing/non-executing gang interval analysis technique is introduced, which is able to characterize parallelism-induced idleness, a key challenge in analyzing gang task schedules. To the best of our knowledge, this approach yields the first utilization-based test for hard real-time gang task systems.

Figure 1: Example gang task schedule.

1 Introduction

The term gang scheduling refers to all of an application's threads of execution being grouped into a gang and concurrently scheduled on distinct processors. The problem of gang scheduling in high-performance computing systems, where throughput is a key optimization objective, has received much attention [1], [2], [3], [4]. Due to the current trend of applying parallel programming models (e.g., OpenMP [5] and MPI [6]) as well as accelerators with extremely parallel architectures (e.g., many-core multiprocessors and GPUs) in real-time and embedded systems, an emerging set of real-time workloads implemented under the parallel programming discipline appears in many application domains such as autonomous driving and computer vision [7]. Compared to related parallel task models such as the multithread or fork/join task models, where the parallel threads of a task can be scheduled independently, the gang scheduling model of parallel tasks has been proved to have significantly more performance benefits in many cases [8], [9], [10], [11], [12]. The performance of parallel processing highly depends on the scheduling of the parallel tasks [13].
For gang task scheduling, a released job can be scheduled only if the number of idle processors is at least the number of processors required by the corresponding gang task. This simple constraint has a critical negative impact on real-time schedulability, namely parallelism-induced idleness, which characterizes scenarios in which idle processing capacity is wasted while released jobs wait in the system. Consider an intuitive example, where three gang tasks (τ_1, τ_2 and τ_3) are scheduled on four processors under global earliest-deadline-first (GEDF). τ_1 has a parallelism of three processors, while both τ_2 and τ_3 have a parallelism of two. As seen in Fig. 1, at time 0, although there is an idle processor, the scheduler can schedule neither τ_2 nor τ_3 onto it because both tasks have a parallelism of two processors. Such parallelism-induced idleness quite negatively impacts schedulability, because idleness is the fundamental reason why tasks may miss their deadlines in a system that is not over-utilized. (Work supported by NSF grant CNS.)

Parallelism-induced idleness causes challenges in deriving real-time schedulability analysis techniques. For instance, for ordinary sporadic task systems, it is safe to assume that at most M − 1 (where M denotes the number of processors) tasks have pending jobs at any non-busy instant, which is critical to upper-bounding the amount of workload at that instant so that a schedulability condition can be derived. Unfortunately, with parallelism-induced idleness, it is only safe to assume that all tasks have pending jobs at any non-busy instant, which makes the workload upper bound unnecessarily pessimistic and the final test derivation either impossible or unacceptably pessimistic.
To resolve this challenge, this work aims at developing techniques that can characterize parallelism-induced idleness for any sporadic gang task system and at deriving a corresponding utilization-based schedulability test that is nontrivial. In particular, we study gang task systems scheduled on a homogeneous multiprocessor under the classical GEDF scheduling policy. To the best of our knowledge, this is the first utilization-based schedulability test derived for hard real-time (HRT) sporadic gang task systems.

Overview of related work. The problem of scheduling real-time parallel tasks has received much recent attention. [14], [15], [16], [17], [18], [19] study parallel task models that are related to the gang task model, including the periodic multithread task model [16], [17], [20], the fork-join task model [21], and the DAG task model [18], [22]. A fundamental difference is that under these models, parallel threads

of a task can be considered and scheduled independently, while under the gang task model, parallel threads of a task must execute simultaneously on a set of processors. A recent set of works studies the problem of fixed task-priority scheduling of hard real-time periodic gang tasks, where exact/sufficient schedulability tests are given [13], [23], [24]. A notable recent work [13] presents two algorithms based on DP-Fair (deadline partitioning) for scheduling periodic gang task systems on a multiprocessor. The fundamental idea is to define optimal static patterns that are stretched at run-time in a DP-Fair manner. However, computing such a pattern (particularly an optimal one) may become a hard combinatorial problem for systems with a large number of processors or tasks, thus exhibiting rather high runtime complexity. Also, this technique may not be easily extended to the sporadic gang task model (as discussed in [13]), as the sporadic job arrival pattern is non-deterministic. The work that is most related to this paper is [], where the authors present sufficient schedulability tests for sporadic gang tasks scheduled under GEDF. Unfortunately, the techniques presented in [] may exhibit high runtime complexity (no utilization-based tests are yielded) and have a fundamental flaw in the analysis (discussed in detail in Sec. 3).

Our contribution. In this paper, we derive a schedulability test showing that any given HRT gang task system τ is schedulable under GEDF if

U_sum ≤ (M + 1)·(1 − u_i/m_i) + 2u_i − m_i   (1)

holds for all τ_i ∈ τ, where U_sum is the total system utilization, M is the number of processing units, u_i is τ_i's utilization, and m_i is τ_i's degree of parallelism (i.e., the number of concurrent processors needed to execute any job released by τ_i). To derive this test, we invent a novel approach for analyzing sporadic gang task schedules: a new lag-based reasoning combined with a new executing/non-executing gang interval analysis technique (Sec. 4.2).
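To make the test concrete, the following is a minimal sketch of how a condition of this shape would be checked in practice. It assumes the reconstructed form U_sum ≤ (M + 1)·(1 − u_i/m_i) + 2u_i − m_i and a gang utilization of u_i = m_i·e_i/p_i; all function names are hypothetical.

```python
def gang_task(e, m, p):
    """A sporadic gang task: WCET e, degree of parallelism m, period p.
    Implicit deadline (d = p); gang utilization is u = m*e/p."""
    return {"e": e, "m": m, "p": p, "u": m * e / p}

def schedulable_thm1(tasks, M):
    """Sufficient GEDF test in the shape of Eq. (1): the system is deemed
    schedulable if U_sum <= (M + 1)*(1 - u_i/m_i) + 2*u_i - m_i for every
    task, and the system is not over-utilized (U_sum <= M)."""
    U = sum(t["u"] for t in tasks)
    return U <= M and all(
        U <= (M + 1) * (1 - t["u"] / t["m"]) + 2 * t["u"] - t["m"]
        for t in tasks
    )

# With m_i = 1 the per-task bound collapses to the density test
# M - (M - 1)*u_i; e.g., a single task with u = 0.5 on M = 2 passes.
print(schedulable_thm1([gang_task(1, 1, 2)], 2))   # True
```

Note that the test is only sufficient: a task set it rejects may still meet all deadlines in a particular schedule.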
This approach resolves a critical issue in deriving such a schedulability test: efficiently quantifying the parallelism-induced idleness incurred by GEDF-scheduled gang tasks on a multiprocessor. The above test can be intuitively viewed as a gang version of the classical density test [25] designed for ordinary sporadic task systems.¹ When m_i = 1 holds for all τ_i in the system, the schedulability condition of Eq. (1) becomes identical to the density test. An observation used to derive the above schedulability test is that if a gang job τ_i,j is released before but does not execute during an interval, then the number of idle processors at any time during this interval is at most m_i − 1. For given task systems with known task parameters, we can further improve schedulability by examining this observation more precisely. Specifically, the number of idle processors in this case may be much smaller than m_i − 1, which implies that more workload may actually be executed during such an interval. We propose an optimization technique (Sec. 5) that precisely characterizes this number of idle processors according to the tasks' specific parameters. With the improved schedulability test, any given HRT parallel task system τ is schedulable under GEDF if

U_sum ≤ (M − Δ_i)·(1 − u_i/m_i) + u_i

holds for all τ_i ∈ τ, where Δ_i denotes the maximum idle parallelism (defined in Def. 9) at any time during intervals in which τ_i has released jobs that cannot execute. We prove that our proposed optimization algorithm, which is used to calculate Δ_i, is optimal while running in only polynomial time with complexity O(M²·n) (n denotes the number of tasks).

¹Under the density test, an implicit-deadline sporadic task system is schedulable under GEDF if U_sum ≤ M − (M − 1)·u_max holds, where u_max denotes the maximum task utilization in the system.

2 System Model

We consider the problem of scheduling a set τ = {τ_1, ..., τ_n} of n independent sporadic gang tasks on M identical processors.
The essential difference between a sporadic gang task and a traditional sporadic task is that the number of processors simultaneously used by a sporadic gang task (also called the task's degree of parallelism) can be greater than 1. Each sporadic gang task τ_i generates an infinite sequence of jobs, with the arrival times of successive jobs separated by at least p_i time units. The j-th job of τ_i, denoted τ_i,j, is released at time r_i,j and has a deadline at time d_i,j. Associated with each task τ_i is a relative deadline d_i, which applies to each of its released jobs, i.e., d_i,j = r_i,j + d_i. Each job of τ_i reserves m_i processors for at most e_i time units. Thus, the execution demand of a job of τ_i can be represented as an m_i × e_i rectangle in processor-time space. Each gang task is specified by four parameters: τ_i(e_i, m_i, d_i, p_i). The utilization of each sporadic gang task τ_i in τ, denoted u_i, is m_i·e_i/p_i. Note that the utilization of a gang task may be larger than one, which differs from the traditional sporadic task model. The utilization of the task system is U_sum = Σ_{i=1}^{n} u_i. Note that a gang task system becomes identical to an ordinary sporadic task system if m_i = 1 holds for all τ_i ∈ τ. Successive jobs of the same task are required to execute in sequence. We require e_i ≤ p_i and U_sum ≤ M; otherwise, deadlines will be missed. A gang task system τ is said to be an implicit-deadline system if d_i = p_i holds for each τ_i. We focus on implicit-deadline task systems in this paper. We study preemptive GEDF scheduling: jobs with earlier deadlines are assigned higher priorities. For sporadic gang tasks, any task τ_i can be scheduled at time t only if at least m_i processors are available at t. We assume that ties are broken by task ID (lower IDs are favored). Throughout the paper, we assume that time is integral. Thus, a job that executes at time instant t executes during the entire time interval [t, t + 1).

Example 1. Fig.
2 shows a sporadic gang task system containing three gang tasks τ_1(2, 3, 4, 4), τ_2(2, 2, 8, 8), and τ_3(4, 2, 10, 10) scheduled on four processors. For this task system, the total utilization is u_1 + u_2 + u_3 =

Figure 2: Example gang task schedule.

Figure 3: A job of task τ_k arrives at t_a and misses its deadline at t_d. The latest non-busy time instant prior to t_a at which at least one processor is idle is denoted t_o.

1.5 + 0.5 + 0.8 = 14/5. As seen in this schedule, τ_2,1 and τ_3,1 cannot execute at time 0 even though there is an idle processing unit, which results in parallelism-induced idleness.

3 Challenges

In this section, we first provide a brief summary of an existing GEDF schedulability analysis technique [26] designed for ordinary sporadic task systems. Then, we explain the attempt made by [], which applies this analysis technique [26] to derive new schedulability tests for GEDF-scheduled gang task systems. We will show the challenges of analyzing gang-scheduled parallel task systems by pointing out a key flaw found in []. Let [BAR] and [KAT] denote the schedulability tests derived in [26] and [], respectively.

Overview of the [BAR] test. Consider any legal sequence of job requests of an ordinary sporadic task system τ on which deadlines are missed. Suppose that a job of task τ_k is the first to miss a deadline, and that this deadline miss occurs at time t_d, as shown in Fig. 3. Let t_a denote this job's arrival time: t_a = t_d − d_k. Discard from the legal sequence of job requests all jobs with deadline > d_k, and consider the EDF schedule of the remaining (legal) sequence of job requests. Since jobs with later deadlines have no impact on the scheduling of jobs with earlier deadlines under GEDF, it follows that a deadline miss of τ_k occurs at time t_d (and this is the earliest deadline miss) in the new GEDF schedule. Let t_o denote the latest time instant ≤ t_a at which at least one processor is idle. Let A_k = t_a − t_o. The goal of the [BAR] test is to identify conditions necessary for a deadline miss to occur; i.e., for τ_k's job to execute for strictly less than e_k time units over [t_a, t_d).
A necessary condition is that all M processors must be executing jobs other than τ_k's job for strictly more than (d_k − e_k) time units over [t_a, t_d). Let Γ_k denote a collection of intervals, not necessarily contiguous, of cumulative length (d_k − e_k) over [t_a, t_d), during which all M processors are executing jobs other than τ_k's job in the schedule. For each τ_i ∈ τ, let I(τ_i) denote the contribution of τ_i to the work done in this schedule during [t_o, t_a) ∪ Γ_k. In order for the deadline miss to occur, it is necessary that the total amount of work that executes over [t_o, t_a) ∪ Γ_k satisfies the following condition:²

Σ_{τ_i ∈ τ} I(τ_i) ≥ M·(A_k + d_k − e_k),   (2)

which follows from the fact that all M processors are completely busy executing tasks for A_k time units over the interval [t_o, t_a), as well as over the intervals in Γ_k of total length (d_k − e_k). The fundamental idea of the derived schedulability test is based on Eq. (2): if we can upper bound Σ_{τ_i ∈ τ} I(τ_i) by I_up and ensure I_up < M·(A_k + d_k − e_k), the necessary condition breaks and thus no deadline miss can occur. The resulting schedulability test is given in Theorem 2 of [26]. The key observation used by the [BAR] test for calculating I_up is: at most M − 1 tasks have jobs pending (i.e., jobs that are released but not completed) at time t_o, because at least one processor is idle at t_o. This observation significantly reduces the pessimism when upper bounding Σ_{τ_i ∈ τ} I(τ_i), because at most M − 1 tasks may carry workload into the analysis interval [t_o, t_d).

²Note that the [BAR] test given in [26] incorrectly presents inequality (2), where ≥ should have been used instead of >. We thus use the corrected inequality herein, following [27] and [28].

Overview of the [KAT] test and a critical flaw. The [KAT] test attempts to apply the above idea to derive a schedulability test for gang task systems.
[KAT] seeks to identify necessary conditions for a deadline miss to occur, using the same observations as [BAR] on the same analysis interval [t_o, t_d). In this case, t_o is defined to be the latest time instant earlier than or equal to t_a at which at least m_k processors are idle, where m_k is the degree of parallelism of the task τ_k that incurs the first deadline miss at t_d. In order for the problem job of τ_k to miss its deadline at t_d, it is necessary that the job be blocked for strictly more than d_k − e_k time units over [t_a, t_d). Since τ_k requires m_k processors for parallel execution, it is blocked while M − m_k + 1 or more processors are busy. Given the definition of t_o, if the deadline of the problem job τ_k,j is missed, the total length of the intervals over [t_o, t_d) during which at least M − m_k + 1 processors are executing jobs other than the problem job must be greater than A_k + d_k − e_k. Following the same approach as [BAR], [KAT] upper bounds the contribution of each gang task in the system during [t_o, t_d), and then sets this upper bound to be less than (A_k + d_k − e_k)·(M − m_k + 1) to break the necessary condition.

The critical flaw of [KAT] occurs when it attempts to use an observation similar to [BAR]'s to reduce the pessimism induced by carry-in workloads at t_o. [KAT] makes the following flawed observation (see Sec. 4.3, paragraph 3, Eq. (24) on page 7 of []):

Observation. By the definition of t_o, the number of busy processors at time t_o is at most M − m_k. Thus, the total number of tasks that can have jobs pending at t_o is at most the number of tasks in any subset σ of tasks in τ which satisfies Σ_{τ_i ∈ σ} m_i ≤ M − m_k.

Unfortunately, this observation does not hold for gang task systems. For instance, if there is a task τ_j with m_j = M, then it may have jobs pending at t_o but be unable to execute due to its low priority. It will carry workload into the analysis interval, and it obviously does not belong to any subset σ defined above. In the worst case, if all other tasks in the system have a degree of parallelism greater than m_k, then all tasks in the system may have jobs pending at t_o yet be unable to execute at t_o. Note that this critical flaw has also been reported recently in [29].

Insight. For ordinary sporadic task systems, the definition of t_o accurately reflects an upper bound on the number of tasks having pending jobs at t_o, because the idleness at t_o is indeed due to a lack of released jobs. However, for gang task systems, t_o cannot be used to bound this critical quantity, because the idleness at t_o may be parallelism-induced idleness: jobs that are pending but cannot execute at t_o have a degree of parallelism greater than the number of idle processors at t_o. From this, our insight is that the fundamental busy/non-busy interval analysis (as used by both [BAR] and [KAT], as well as many other works [30], [31], [32], [33]), which first categorizes an analysis interval into busy and non-busy sub-intervals and then upper bounds the workload within each category, cannot be applied to analyze gang task systems. The key reason is that for gang tasks, a non-busy time instant does not help upper bound the number of tasks that have pending jobs at that instant, due to parallelism-induced idleness. Motivated by this, we develop a new executing/non-executing interval analysis technique, combined with other approaches, for analyzing gang task systems, which allows us to accurately characterize the workload we need to upper- or lower-bound within the analysis interval.
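Parallelism-induced idleness can be made concrete with a toy discrete-time simulator. The sketch below is illustrative only (all names are hypothetical); it assumes synchronous periodic releases and a greedy allocation that scans pending jobs in EDF order, skipping any job whose gang width does not fit in the processors left free.

```python
def gedf_gang_idle(tasks, M, horizon):
    """Toy discrete-time GEDF gang scheduler. tasks: (e, m, p) triples,
    released periodically from time 0. At each step, pending jobs are
    scanned in EDF order (ties broken by task ID) and a job runs only if
    at least m processors are still free. Returns, per step, the number
    of processors left idle while some released job is still waiting --
    the parallelism-induced idleness of that step."""
    jobs, idleness = [], []
    for t in range(horizon):
        for i, (e, m, p) in enumerate(tasks):
            if t % p == 0:
                jobs.append([t + p, i, e, m])   # [deadline, id, remaining, m]
        jobs.sort()                             # EDF priority order
        free, waiting, running = M, 0, []
        for job in jobs:
            if job[3] <= free:                  # gang fits: schedule it
                free -= job[3]
                running.append(job)
            else:                               # gang does not fit: waits
                waiting += 1
        for job in running:
            job[2] -= 1                         # execute one time unit
        jobs = [j for j in jobs if j[2] > 0]
        idleness.append(free if waiting else 0)
    return idleness

# tau1(2,3,4,4), tau2(2,2,8,8), tau3(4,2,10,10) on M = 4: at times 0 and 1,
# tau1 occupies 3 processors and the leftover processor cannot host tau2 or
# tau3 (each needs 2), so one processor idles while jobs wait.
print(gedf_gang_idle([(2, 3, 4), (2, 2, 8), (4, 2, 10)], 4, 4))  # [1, 1, 0, 0]
```

This matches the behavior described for the schedule of Fig. 2: the idle capacity at times 0 and 1 is wasted even though τ_2,1 and τ_3,1 are pending.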
4 A Utilization-based Schedulability Test

In this section, we derive a sufficient multiprocessor schedulability test for sporadic gang task systems scheduled under GEDF. Our approach is fundamentally based on the classical lag-based reasoning, which has been used extensively to analyze soft real-time sporadic task systems that allow deadlines to be missed but require tardiness to be quantitatively bounded [30]. We develop a new lag-based reasoning for the HRT case, where we focus on analyzing what happens when a deadline is missed in any given parallel task system τ. Let t_d denote the first time instant in any such schedule S at which a deadline is missed. Let job τ_p,q be the job that misses its deadline d_p,q at t_d. We focus on analyzing τ_p,q and the time interval [0, t_d). We first describe the proof setup and then derive a utilization-based schedulability test.

4.1 Lag-based Reasoning

Definition 1. A task τ_i is active at time t if there exists a job τ_i,j such that r_i,j ≤ t < d_i,j.

Definition 2. Let f_i,j denote the completion time of job τ_i,j. Job τ_i,j misses its deadline if it completes after its deadline.

Definition 3. Job τ_i,j is pending at time t if r_i,j ≤ t < f_i,j. If job τ_i,j is pending and does not execute at t, then it is preempted at t.

Definition 4. Job set d contains the jobs having the following relationship with τ_p,q: d = {τ_i,j : (d_i,j < t_d) ∨ (d_i,j = t_d ∧ i ≤ p)}.

Definition 5. For any given gang task system τ, a processor share (PS) schedule is an ideal schedule in which each task τ_i executes with a rate equal to u_i whenever it is active (which ensures that each of its jobs completes exactly at its deadline). Note that for a sporadic gang task system, u_i can be larger than 1. A valid PS schedule exists for τ if U_sum ≤ M holds. Fig. 4 shows an example PS schedule.

Figure 4: The task system given in Example 1 contains three tasks: τ_1 with utilization 1.5, τ_2 with utilization 0.5, and τ_3 with utilization 0.8.
τ_1, τ_2, and τ_3 have periods of 4, 8, and 10 time units, respectively. This figure shows the PS schedule for this system, where each task executes according to its utilization rate when it is active.

By Def. 4, d is the set of jobs with deadlines at most t_d and with priorities at least that of τ_p,q. These jobs do not execute beyond t_d in the PS schedule. Note that τ_p,q is in d. Also note that jobs not in d have lower priorities than those in d and thus do not affect the scheduling of jobs in d. Thus, we remove every job with a deadline later than t_d from τ. Note that jobs originally in τ but not in d do not execute in either the GEDF schedule S or the corresponding PS schedule. Our schedulability test is obtained by comparing the allocations to τ in the GEDF schedule S and the corresponding PS schedule, both on M processors, and quantifying the difference between the two. We analyze task allocations on a per-task basis. Let A(τ_i,j, t_1, t_2, S) denote the total allocation to job τ_i,j in S in [t_1, t_2). Then, the total allocation to all jobs of τ_i in [t_1, t_2) in S is given by

A(τ_i, t_1, t_2, S) = Σ_j A(τ_i,j, t_1, t_2, S).   (3)

Let PS denote the PS schedule that corresponds to the GEDF schedule S (i.e., the total allocation to any job of any task in PS is identical to the total allocation of the job in S). The difference between the allocations to a job τ_i,j up to time t in PS and in S, called the lag of job τ_i,j at time t in schedule S, is defined as

lag(τ_i,j, t, S) = A(τ_i,j, 0, t, PS) − A(τ_i,j, 0, t, S).   (4)
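Eq. (4) can be illustrated numerically. The sketch below (hypothetical names) assumes the task is continuously active over [0, t), so its PS allocation is simply u·t, and that allocations are measured in processor-time, i.e., an executing gang job accrues m units per time unit.

```python
def task_lag(u, t, executed):
    """lag(tau_i, t) = A(tau_i, 0, t, PS) - A(tau_i, 0, t, S): the PS
    schedule allocates an active task u units of work per time unit,
    while the actual schedule allocates m units per time unit whenever a
    gang job executes. `executed` lists (start, end, m) execution chunks
    of the actual schedule."""
    ps = u * t
    actual = sum(m * (min(end, t) - max(start, 0))
                 for start, end, m in executed if start < t)
    return ps - actual

# tau1 = (2, 3, 4, 4) from Example 1 executes on 3 processors over [0, 2):
print(task_lag(1.5, 2, [(0, 2, 3)]))   # 1.5*2 - 3*2 = -3.0
# tau2 = (2, 2, 8, 8) is pending but has not yet executed by time 2:
print(task_lag(0.5, 2, []))            # 0.5*2 - 0 = 1.0
```

A negative lag means the task is ahead of its PS allocation; a positive lag means it is behind, which is the situation the analysis below must bound.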

(a) Busy/non-busy intervals. (b) Executing/non-executing intervals.

Figure 5: Busy/non-busy vs. executing/non-executing intervals.

Similarly, the difference between the allocations to a task τ_i up to time t in PS and in S, called the lag of task τ_i at time t in schedule S, is defined as

lag(τ_i, t, S) = Σ_j lag(τ_i,j, t, S)   (5)
             = Σ_j (A(τ_i,j, 0, t, PS) − A(τ_i,j, 0, t, S)).   (6)

The LAG for τ at time t in schedule S is defined as

LAG(τ, t, S) = Σ_{τ_i ∈ τ} lag(τ_i, t, S).   (7)

Claim 1. LAG(τ, t_d, S) > 0.

Proof. lag(τ_i, t_d, S) = 0 holds for any task τ_i that has no job deadline miss by t_d in S, and lag(τ_j, t_d, S) > 0 holds for any task τ_j that has jobs missing deadlines at t_d (e.g., τ_p), because all jobs of any such task complete by t_d in PS. By Eq. (7), we thus have LAG(τ, t_d, S) > 0.

Definition 6. A time instant t is busy (resp. non-busy) for a job set J if all (resp. not all) M processing units execute jobs in J at t. A time interval is busy (resp. non-busy) for J if each instant within it is busy (resp. non-busy) for J. A time instant t is busy on processor M_i (resp. non-busy on processor M_i) for J if M_i executes (resp. does not execute) a job in J at t. A time interval is busy on processor M_i (resp. non-busy on processor M_i) for J if each instant within it is busy (resp. non-busy) on M_i for J. A time instant t is idle if all M processing units are idle at t. A time interval is idle if each instant within it is idle.

Claim 2. If LAG(τ, t_2, S) > LAG(τ, t_1, S), where t_2 > t_1, then [t_1, t_2) is non-busy for τ. In other words, LAG for τ can increase only throughout a non-busy interval for τ.

4.2 A Utilization-based Test

In this section, we present our new lag-based analysis for GEDF-scheduled gang task systems by proving the following theorem.
Many prior works on analyzing ordinary sporadic task systems, including those using lag-based reasoning [30], [34], apply the busy/non-busy interval analysis technique, which leverages a key observation: if a job τ_i,j is released before a non-busy interval and does not complete until after this interval, then the job executes at every time instant within the interval. This allows the workload in a non-busy interval to be safely lower bounded by the length of the interval, because at least τ_i,j is executing within it. However, this observation does not hold for gang task systems: if τ_i,j is a gang job, it may not execute during non-busy intervals even if it is pending, since the number of idle processors during such intervals may be smaller than m_i. In order to analyze such behaviors, we develop a new technique based on executing/non-executing gang intervals, defined as follows.

Definition 7. (Executing/non-executing interval): A time interval [t_1, t_2), where t_2 > t_1, is considered an executing interval for τ_i if m_i out of the M processing units are executing some job of τ_i throughout the interval; otherwise, it is considered a non-executing interval for τ_i.

Example 2. Fig. 5 shows an example schedule (for the same task set given in Example 1) to illustrate the fundamental difference between busy/non-busy intervals and executing/non-executing intervals. (Note that we show two identical GEDF schedules to illustrate the two kinds of intervals for clarity.) In this example, τ_3 releases its first job at time 0, which completes at time 8. In Fig. 5(a), during (0, 2] and (4, 8], at least one processor is idle; according to Def. 6, such intervals are non-busy for τ_3. During (2, 4], all processors are busy; according to Def. 6, this interval is a busy interval for τ_3. On the other hand, as shown in Fig. 5(b), τ_3 does not execute during (0, 2] and (4, 6]. According to Def.
7, such intervals are non-executing intervals for τ_3. During (2, 4] and (6, 8], τ_3 is executing; according to Def. 7, such intervals are executing intervals for τ_3. A fundamental difference between the busy/non-busy interval analysis and the executing/non-executing interval analysis is that during both busy and non-busy intervals, whether the analyzed gang job is executing is unknown, while during executing intervals, we know that the analyzed gang job is executing. The executing/non-executing interval analysis thus allows us to lower bound the execution of the analyzed gang job during any of its executing intervals within the entire analysis window.

The executing/non-executing gang interval analysis technique leverages the following key observation: if a gang job τ_i,j is released before but does not execute during a non-executing interval of τ_i, then the number of idle processing units at any time during this non-executing interval is at most m_i − 1. Exploiting this observation, our analysis technique first determines a lower bound on the total system lag, LAG, that is necessary for a deadline miss, then an upper bound on LAG that is possible for the task system, and finally uses these bounds to derive a utilization-based schedulability test.

Theorem 1. GEDF correctly schedules any HRT implicit-deadline sporadic gang task system τ on M processors if

U_sum ≤ (M + 1)·(1 − u_i/m_i) + 2u_i − m_i   (8)

holds for all τ_i ∈ τ.

Proof. Our proof is by contradiction. Assume that the theorem does not hold. This implies that there exists a concrete instantiation τ for which U_sum ≤ (M + 1)·(1 − u_i/m_i) + 2u_i − m_i holds for all τ_i ∈ τ, and one or more jobs of τ miss their deadlines under GEDF. Let t_d denote the first time instant in the GEDF schedule at which a deadline is missed, and let job τ_p,q be the job that misses its deadline d_p,q at t_d. We focus on analyzing τ_p,q and the time interval [0, t_d).

Claim 3. LAG(τ, t_d, GEDF) > 0.

Proof. The proof is exactly the same as the proof of Claim 1, with S replaced by GEDF.

Let t_1 (0 < t_1 < t_d) denote the earliest instant in [0, t_d) at which

LAG(τ, t_1, GEDF) > 0   (9)

holds. By Claim 3 and because LAG(τ, 0, GEDF) = 0, t_1 is well defined. We next derive an upper bound on LAG(τ, t_1, GEDF). By definition, the LAG of τ at t_1 equals the sum of the lags of all its tasks at t_1. Therefore, Eq. (9) implies that there exists at least one task in τ whose lag at t_1 is greater than zero. Let τ_l denote such a task; that is, lag(τ_l, t_1, GEDF) > 0. Because τ_l has a positive lag at t_1, at least one job of τ_l released before t_1 is pending at t_1. Also, t_d is the earliest time at which any job misses its deadline under GEDF, and t_1 ≤ t_d holds.
Therefore, exactly one job of τ_l released before t_1 can be pending at t_1. Let τ_l,h denote the pending job of τ_l at t_1, and let r_l,h denote its release time. Then, because no job with deadline before t_d misses its deadline and r_l,h < t_1 ≤ t_d holds, all jobs of τ_l released before r_l,h complete execution by r_l,h under GEDF. All of these jobs complete execution by r_l,h in PS as well. Note that the lag of τ_l at r_l,h is therefore zero. Let E (resp., Ē) denote the cumulative time in [r_l,h, t_1) during which τ_l,h is executing (resp., not executing) under GEDF. That is, t_1 − r_l,h = E + Ē. Since τ_l,h is released at r_l,h and does not complete by t_1, based on the definitions of executing/non-executing intervals, we know that τ_l,h executes on m_l processors at every time instant counted in E, and at least M − m_l + 1 processing units are busy at every time instant counted in Ē. Thus, we have

A(τ, r_l,h, t_1, GEDF) ≥ m_l·E + (M − m_l + 1)·Ē.   (10)

The lag of τ_l at t_1 can be computed as follows:

lag(τ_l, t_1, GEDF)
= lag(τ_l, r_l,h, GEDF) + A(τ_l, r_l,h, t_1, PS) − A(τ_l, r_l,h, t_1, GEDF)
{because lag(τ_l, r_l,h, GEDF) = 0}
= A(τ_l, r_l,h, t_1, PS) − A(τ_l, r_l,h, t_1, GEDF)
≤ (E + Ē)·u_l − m_l·E
= E·(u_l − m_l) + Ē·u_l.

By lag(τ_l, t_1, GEDF) > 0, the above inequality, and u_l ≤ m_l, we have

E·(u_l − m_l) + Ē·u_l > 0  ⟹  E < Ē·u_l/(m_l − u_l).   (11)

Note that if m_l = u_l, then (M + 1)·(1 − u_l/m_l) + 2u_l − m_l = u_l. U_sum ≤ u_l is not possible since n > 1 and e_k > 0 for all k, which leads to a contradiction. We thus consider the case where m_l > u_l. We have

LAG(τ, t_1, GEDF) − LAG(τ, r_l,h, GEDF)
= LAG(τ, r_l,h, GEDF) + A(τ, r_l,h, t_1, PS) − A(τ, r_l,h, t_1, GEDF) − LAG(τ, r_l,h, GEDF)
= A(τ, r_l,h, t_1, PS) − A(τ, r_l,h, t_1, GEDF)
≤ (t_1 − r_l,h)·U_sum − A(τ, r_l,h, t_1, GEDF)
{by Eq. (10)}
≤ (E + Ē)·U_sum − m_l·E − (M − m_l + 1)·Ē
= E·(U_sum − m_l) + Ē·(U_sum − M + m_l − 1)
{by Eq. (11)}
< Ē·u_l/(m_l − u_l) · (U_sum − m_l) + Ē·(U_sum − M + m_l − 1).

According to Eq. (9), LAG(τ, t_1, GEDF) > 0 and LAG(τ, r_l,h, GEDF) ≤ 0, thus LAG(τ, t_1, GEDF) > LAG(τ, r_l,h, GEDF). We then have

Ē·u_l/(m_l − u_l) · (U_sum − m_l) + Ē·(U_sum − M + m_l − 1) > 0.   (12)

Since Ē > 0 must hold in order for Eq.
(12) to hold, after dividing both sides of Eq. (12) by Ē, we obtain

u_l/(m_l − u_l) · (U_sum − m_l) + (U_sum − M + m_l − 1) > 0  ⟹  U_sum > (M + 1)·(1 − u_l/m_l) + 2u_l − m_l.

This contradicts Eq. (8), thus proving the theorem.

5 Optimization through Exploring Tasks' Parallelism Characteristics

Based on the above schedulability test (Theorem 1), we now present an effective optimization technique that further improves schedulability. It is motivated by inspecting the key observation used to derive the schedulability test: if a parallel job τ_i,j is released before but does not execute during a non-executing interval, then the number of idle processors at any time during the non-executing interval is at most m_i − 1. For a given task system with known task parameters, it is possible to tighten this observation. Specifically, if the minimum number of processors used at any time by any task subset of τ during a non-executing interval is m*, and M − m* < m_i − 1 holds, then the above observation is too pessimistic: the amount of workload executed within the interval is actually more than the bound derived by assuming that up to m_i − 1 processors are idle (at most M − m* < m_i − 1 processors can possibly be idle). Consider the following example to illustrate this insight.

Example 3. Consider a task set containing three tasks, τ_1(5, 5, 8, 8), τ_2(3, 4, 8, 8), and τ_3(3, 4, 10, 10), gang-scheduled on ten processors. For this task system, the total utilization is U_sum = 233/40 ≈ 5.83. According to Theorem 1, for τ_1 we have (M + 1)·(1 − u_1/m_1) + 2u_1 − m_1 = 5.375 < U_sum. This parallel task system is thus deemed unschedulable by Theorem 1.

In Example 3, although the total utilization of this task system is rather small, it is not schedulable according to Theorem 1. In this schedulability test, we use m_1 − 1 = 5 − 1 = 4 to upper bound the number of idle processors during the non-executing intervals of τ_1. This is safe but pessimistic. According to the tasks' parameters, the maximum number of idle processors during the non-executing intervals of τ_1 is 2.
This is because in order for τ_1 to be preempted, and thus not executing during an interval, both τ_2 and τ_3 must have a job with a shorter deadline executing during this interval, and both τ_2 and τ_3 have a degree of parallelism of 4. Motivated by this example, we develop a technique that improves the accuracy of upper-bounding the number of idle processors during the non-executing intervals of any task τ_i according to the tasks' parameters, thus improving schedulability.

Definition 8. Let I_t denote the number of idle processors at time instant t. Thus, at time instant t, M − I_t processors are busy executing jobs.

Definition 9. Let Δ_i denote the maximum possible number of idle processors at any time during τ_i's non-executing intervals in which τ_i has pending jobs.

Finding Δ_i. According to the above discussion, setting Δ_i to m_i − 1 is too pessimistic and thus results in a less efficient schedulability test. We now present a polynomial-time algorithm based on dynamic programming, which finds Δ_i by exploring the specific tasks' parallelism characteristics. In order to calculate Δ_i, we need to find a subset of tasks in τ/τ_i satisfying the following two properties: (i) the total degree of parallelism of the tasks in this subset is at least M − m_i + 1, and (ii) the total degree of parallelism of the tasks in this subset is the smallest among all subsets satisfying property (i). The first property ensures that the total degree of parallelism of the tasks in the subset is large enough to prevent τ_i from executing on the M processors; the second property enables us to identify the minimum total degree of parallelism, and thus the maximum possible number of idle processors over all scenarios, during τ_i's non-executing intervals.

Algorithm description. We develop a polynomial-time algorithm that applies a dynamic programming approach to reduce the complexity of finding, among all possible subsets, the subset of tasks that exhibits the smallest total degree of parallelism satisfying property (i).
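Before presenting the algorithm, note that for a small task set, Δ_i can be obtained directly by enumerating all subsets against the two properties above. The sketch below (illustrative Python; the function and variable names are ours, not the paper's) reproduces the Δ values for the task set of Example 3:

```python
from itertools import combinations

def delta_bruteforce(M, par, i):
    """Maximum possible idleness while task i is pending but not executing:
    among all subsets of the other tasks whose total degree of parallelism
    is at least M - par[i] + 1 (property (i)), pick the smallest such total
    (property (ii)); the idle-processor count is then M minus that total."""
    others = [p for j, p in enumerate(par) if j != i]
    blocking = [sum(sub)
                for r in range(1, len(others) + 1)
                for sub in combinations(others, r)
                if sum(sub) >= M - par[i] + 1]
    return M - min(blocking) if blocking else None

M = 10
par = [5, 4, 4]   # degrees of parallelism m_1, m_2, m_3 in Example 3
deltas = [delta_bruteforce(M, par, i) for i in range(len(par))]
print(deltas)     # [2, 1, 1]
```

Enumeration is exponential in the number of tasks, which is exactly what the dynamic program of Algorithm 1 (see the appendix) avoids.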
The basic idea behind this algorithm can be explained as follows. First, we use dynamic programming to check whether there exists a subset of tasks in τ/τ_i such that the total degree of parallelism of the tasks in this subset is M − m_i + 1. If yes, Δ_i = m_i − 1; otherwise, we check whether there exists a subset of tasks in τ/τ_i whose total degree of parallelism is M − m_i + 2. We continue this iterative process until we find Δ_i. Since the total degree of parallelism of all tasks in τ exceeds M (otherwise τ is an ordinary sporadic task system), Δ_i can always be found. We put the pseudocode and its detailed description in an appendix. In the following Theorem 2, we show how to improve the schedulability test by using the Δ_i identified by Algorithm 1.

Theorem 2. GEDF correctly schedules any HRT implicit-deadline sporadic gang task system τ on M processors if

U_sum ≤ (M − Δ_i)·(1 − u_i/m_i) + u_i (13)

holds for all τ_i ∈ τ.

Proof. Our proof is by contradiction, following reasoning similar to that of the proof of Theorem 1. Assume that the theorem does not hold. This implies that there exists a concrete instantiation τ for which U_sum ≤ (M − Δ_i)·(1 − u_i/m_i) + u_i holds for all τ_i ∈ τ, and one or more jobs of τ miss their deadlines in GEDF. Again, let t_d denote the first time instant in GEDF at which a deadline is missed, and let t_0 (0 < t_0 < t_d) denote the earliest instant in [0, t_d) at which LAG(τ, t_0, GEDF) > 0

holds. Let τ_l denote such a task, with lag(τ_l, t_0, GEDF) > 0, let τ_{l,h} denote the pending job of τ_l at t_0, and let r_{l,h} denote the release time of τ_{l,h}. Then, because no job with deadline before t_d misses its deadline and r_{l,h} < t_0 < t_d holds, all jobs of τ_l released before r_{l,h} complete execution by r_{l,h} in GEDF. All of these jobs complete execution by r_{l,h} in PS as well. The lag of τ_l at r_{l,h} is zero. Let E (resp., Ē) denote the cumulative time in [r_{l,h}, t_0) in which τ_{l,h} is executing (resp., not executing) in GEDF. That is, t_0 − r_{l,h} = E + Ē. Since τ_{l,h} is released at r_{l,h} and does not complete by t_0, based on the definitions of executing/non-executing intervals, we have

lag(τ_l, t_0, GEDF) = lag(τ_l, r_{l,h}, GEDF) + A(τ_l, r_{l,h}, t_0, PS) − A(τ_l, r_{l,h}, t_0, GEDF)
{because lag(τ_l, r_{l,h}, GEDF) = 0}
= A(τ_l, r_{l,h}, t_0, PS) − A(τ_l, r_{l,h}, t_0, GEDF)
≤ (E + Ē)·u_l − m_l·E
= E·(u_l − m_l) + Ē·u_l.

By lag(τ_l, t_0, GEDF) > 0 and the inequality above, we have

E·(u_l − m_l) + Ē·u_l > 0 ⟹ E < Ē·u_l/(m_l − u_l). (14)

Then,

LAG(τ, t_0, GEDF) − LAG(τ, r_{l,h}, GEDF)
= LAG(τ, r_{l,h}, GEDF) + A(τ, r_{l,h}, t_0, PS) − A(τ, r_{l,h}, t_0, GEDF) − LAG(τ, r_{l,h}, GEDF)
= A(τ, r_{l,h}, t_0, PS) − A(τ, r_{l,h}, t_0, GEDF)
≤ (t_0 − r_{l,h})·U_sum − A(τ, r_{l,h}, t_0, GEDF)
≤ (E + Ē)·U_sum − m_l·E − (M − Δ_l)·Ē
= E·(U_sum − m_l) + Ē·(U_sum − M + Δ_l)
{by Eq. (14)}
< (Ē·u_l/(m_l − u_l))·(U_sum − m_l) + Ē·(U_sum − M + Δ_l).

Since LAG(τ, t_0, GEDF) > LAG(τ, r_{l,h}, GEDF), we have

(Ē·u_l/(m_l − u_l))·(U_sum − m_l) + Ē·(U_sum − M + Δ_l) > 0. (15)

Since Ē > 0 must hold in order for Eq. (15) to hold, dividing both sides of Eq. (15) by Ē, we obtain

u_l/(m_l − u_l)·(U_sum − m_l) + (U_sum − M + Δ_l) > 0
⟹ U_sum > M·(1 − u_l/m_l) − (1 − u_l/m_l)·Δ_l + u_l = (M − Δ_l)·(1 − u_l/m_l) + u_l.

This contradicts Eq. (13), thus proving the theorem.

Example 4. Reconsider Example 3 using the optimized schedulability test given in Theorem 2. The total system utilization is U_sum = 5.825. According to Algorithm 1, Δ_1 = 2, Δ_2 = 1, and Δ_3 = 1. According to Theorem 2, (M − Δ_1)·(1 − u_1/m_1) + u_1 = 6.125 > 5.825, (M − Δ_2)·(1 − u_2/m_2) + u_2 = 7.125 > 5.825, and (M − Δ_3)·(1 − u_3/m_3) + u_3 = 7.5 > 5.825. This parallel task system thus becomes schedulable.

Relationship to the classical density test [25].
An interesting observation is that if m_i = 1 holds for all τ_i ∈ τ, then both tests given in Theorems 1 and 2 become identical to the classical density test [25] designed for ordinary sporadic task systems. Specifically, if m_i = 1 for all τ_i ∈ τ, then the schedulability condition Eq. (8) in Theorem 1 becomes: U_sum ≤ (M + 1)·(1 − u_i) + 2u_i − 1 = M − (M − 1)·u_i holds for any τ_i ∈ τ, while the optimized schedulability condition Eq. (13) in Theorem 2 becomes: U_sum ≤ (M − Δ_i)·(1 − u_i) + u_i = M − (M − 1)·u_i holds for any τ_i ∈ τ (when m_i = 1 holds for all τ_i ∈ τ, Δ_i = 0).

6 Experiments

In this section, we describe experiments conducted to evaluate the applicability of the schedulability tests proposed in this work. Our goal is to examine how restrictive the derived schedulability tests' utilization caps are. Specifically, we evaluated our derived utilization-based tests given by Theorem 1 and Theorem 2, denoted U-Gang and U-opt-Gang, respectively.

Experimental setup. In our experiments, task periods were uniformly distributed over [2ms, 200ms]. The per-task ratio e_i/p_i was distributed differently for each experiment using three uniform distributions, corresponding to light, medium, and heavy per-task e_i/p_i. Tasks' degrees of parallelism were also distributed using three uniform distributions: [1, M/4] (parallelism is small), [M/4, 5M/8] (parallelism is moderate), and [5M/8, 7M/8] (parallelism is high), where M denotes the number of processors. Task execution costs were calculated from periods, utilizations, and parallelism. We varied the total system utilization U_sum from near zero up to M. For each combination of e_i/p_i, parallelism, and U_sum, 1,000 task sets were generated for systems with M = 8 and M = 16 processors. Each such task set was generated by creating tasks until the total utilization exceeded the corresponding utilization cap, and by then reducing the last task's utilization so that the total utilization equalled the utilization cap.
For each generated system, hard real-time schedulability was checked using the two tests described above.
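The generation-and-check loop described above can be sketched as follows (an illustrative harness rather than the authors' evaluation code; the per-thread utilization range, set counts, and utilization caps are assumptions):

```python
import random

def u_gang_schedulable(M, tasks):
    """U-Gang (Theorem 1): schedulable if, for every task,
    U_sum <= (M + 1)*(1 - u_i/m_i) + 2*u_i - m_i."""
    usum = sum(u for u, m in tasks)
    return all(usum <= (M + 1) * (1 - u / m) + 2 * u - m for u, m in tasks)

def gen_taskset(M, ucap, rng):
    """Add tasks until the utilization cap is exceeded, then trim the last
    task so that the total utilization equals the cap."""
    tasks, usum = [], 0.0
    while usum < ucap:
        m = rng.randint(1, M // 4)                   # "small" parallelism scenario
        u = min(rng.uniform(0.05, 0.10) * m,         # assumed "light" per-thread range
                ucap - usum)
        tasks.append((u, m))
        usum += u
    return tasks

rng = random.Random(1)
M = 8
fracs = []
for ucap in (4.0, 6.0, 7.5):
    results = [u_gang_schedulable(M, gen_taskset(M, ucap, rng)) for _ in range(500)]
    frac = sum(results) / len(results)
    fracs.append(frac)
    print(f"U_sum = {ucap}: fraction schedulable = {frac:.2f}")
```

The U-opt-Gang check is analogous, with the Theorem 2 bound (M − Δ_i)·(1 − u_i/m_i) + u_i substituted once the Δ_i values have been computed.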

Figure 6: Schedulability results. Panels: (a) m = 8, light e_i/p_i; (b) m = 16, light e_i/p_i; (c) m = 8, medium e_i/p_i; (d) m = 16, medium e_i/p_i; (e) m = 8, heavy e_i/p_i; (f) m = 16, heavy e_i/p_i. In all graphs, the x-axis denotes the task set utilization cap and the y-axis denotes the fraction of generated task sets that were schedulable. In the first (respectively, second) column of graphs, m = 8 (respectively, m = 16) is assumed. In the first (respectively, second and third) row of graphs, light (respectively, medium and heavy) per-task utilizations are assumed. Each graph gives three curves per tested approach for the cases of small, moderate, and large parallelism, respectively. As seen at the top of the figure, the label U-Gang-s (m/l) indicates our utilization-based test assuming small (moderate/large) parallelism. Similarly, the U-opt-Gang labels denote the optimized test given in Theorem 2 under the same three scenarios.

Schedulability results. The obtained schedulability results are shown in Fig. 6 (the organization of which is explained in the figure's caption). Each curve plots the fraction of the generated task sets successfully scheduled by the corresponding approach, as a function of the tasks' total utilization. As seen in Fig. 6, U-Gang yields reasonably good performance in almost all cases, particularly when the parallelism is small. Moreover, in all tested scenarios, U-opt-Gang improves upon U-Gang by a notable margin. For example, as seen in Fig. 6(c), when tasks' parallelism is moderate and the per-task e_i/p_i is medium, U-opt-Gang can achieve 100% schedulability when U_sum equals 5.25, while the U-Gang test fails to do so once U_sum merely exceeds 4.5. We also observe that both tests perform better under lighter per-task e_i/p_i. This is because when e_i/p_i is lighter, u_i may become smaller, which clearly helps both tests achieve higher schedulability. On average, U-opt-Gang yields an over 10% improvement w.r.t. schedulability compared to the corresponding U-Gang test.
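The margin between the two tests can also be verified directly on the task set of Examples 3 and 4 (an illustrative sketch; variable names are ours, with u_i = e_i·m_i/p_i):

```python
# Example 3's tasks, written as (e_i, m_i, d_i, p_i), on M = 10 processors.
tasks = [(5, 5, 8, 8), (3, 4, 8, 8), (3, 4, 10, 10)]
deltas = [2, 1, 1]                 # Delta values reported in Example 4
M = 10

utils = [e * m / p for e, m, d, p in tasks]
usum = sum(utils)                  # 233/40 = 5.825

# U-Gang (Theorem 1, Eq. (8)) versus U-opt-Gang (Theorem 2, Eq. (13)).
thm1 = all(usum <= (M + 1) * (1 - u / m) + 2 * u - m
           for u, (e, m, d, p) in zip(utils, tasks))
thm2 = all(usum <= (M - dl) * (1 - u / m) + u
           for u, (e, m, d, p), dl in zip(utils, tasks, deltas))
print(thm1, thm2)                  # False True
```

Theorem 1 fails on τ_1 (5.375 < 5.825), while every Theorem 2 bound exceeds U_sum, matching Examples 3 and 4.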
Discussion on a counter-intuitive observation. Intuitively, when parallelism is larger, gang task systems should be harder to schedule due to a potential increase of the parallelism-induced utilization loss. This trend indeed occurs in Figs. 6(a)-6(d): schedulability decreases as task parallelism increases in all cases for both of our utilization-based tests. However, the results shown in Fig. 6(e) and

Fig. 6(f), where the per-task e_i/p_i is heavy, do not follow this trend. For ordinary sporadic task systems, it is known that heavy per-task utilization quite negatively impacts HRT schedulability. Nonetheless, as seen in Fig. 6(e) and Fig. 6(f), for gang task systems, the heavy per-task e_i/p_i setting actually improves schedulability under our tests. The reason can be found by examining our derived schedulability test. As seen in Eq. (8), when the per-task e_i/p_i is heavy, the term (2u_i − m_i) on the right-hand side becomes a dominant factor, as the other term (M + 1)·(1 − u_i/m_i) becomes rather small. Thus, increasing e_i/p_i may actually help a task set pass this schedulability condition. The same reasoning applies to the optimized schedulability condition given in Eq. (13).

7 Conclusion

In this paper, we studied the HRT gang task scheduling problem. We presented a novel approach combining new lag-based reasoning with an executing/non-executing gang interval analysis technique, which efficiently quantifies the parallelism-induced idleness of GEDF-scheduled gang tasks on a multiprocessor and yields the first utilization-based test for HRT sporadic gang task systems. As demonstrated by experiments, our proposed tests are often able to guarantee schedulability with reasonably small utilization loss while providing hard real-time correctness.

References

[1] S. Kato and Y. Ishikawa, Gang EDF scheduling of parallel task systems, in Proceedings of the 30th IEEE Real-Time Systems Symposium (RTSS), 2009.
[2] D. G. Feitelson, Packing schemes for gang scheduling, in Workshop on Job Scheduling Strategies for Parallel Processing, Springer, 1996.
[3] J. Goossens and P. Richard, Optimal scheduling of periodic gang tasks, Leibniz Transactions on Embedded Systems, vol. 3, no. 1, 2016.
[4] Z. C. Papazachos and H. D. Karatza, Gang scheduling in multi-core clusters implementing migrations, Future Generation Computer Systems, vol. 27, no. 8, 2011.
[5] L. Dagum and R. Menon, OpenMP: an industry standard API for shared-memory programming, IEEE Computational Science and Engineering, vol. 5, no. 1, 1998.
[6] P. S. Pacheco, Parallel Programming with MPI, Morgan Kaufmann, 1997.
[7] R. Szeliski, Computer Vision: Algorithms and Applications, Springer Science & Business Media, 2010.
[8] S. Singh, Performance optimization in gang scheduling in cloud computing, IOSR Journal of Computer Engineering, vol. 2, no. 4, 2012.
[9] D. G. Feitelson and L. Rudolph, Gang scheduling performance benefits for fine-grain synchronization, Journal of Parallel and Distributed Computing, vol. 16, no. 4, 1992.
[10] K. D. Ryu, N. Pachapurkar, and L. L. Fong, Adaptive memory paging for efficient gang scheduling of parallel applications, in Proceedings of the 18th International Parallel and Distributed Processing Symposium (IPDPS), 2004.
[11] B. B. Zhou and R. P. Brent, Gang scheduling with a queue for large jobs, in Proceedings of the 15th International Parallel and Distributed Processing Symposium (IPDPS), 2001.
[12] M. A. Jette, Performance characteristics of gang scheduling in multiprogrammed environments, in Proceedings of the ACM/IEEE Conference on Supercomputing (SC), San Jose, CA, USA, 1997.
[13] L. Adhianto, S. Banerjee, M. Fagan, M. Krentel, G. Marin, J. Mellor-Crummey, and N. R. Tallent, HPCToolkit: tools for performance analysis of optimized parallel programs, Concurrency and Computation: Practice and Experience, vol. 22, no. 6, 2010.
[14] S. Collette, L. Cucu, and J. Goossens, Integrating job parallelism in real-time scheduling theory, Information Processing Letters, vol. 106, no. 5, 2008.
[15] G. Manimaran, C. S. R. Murthy, and K. Ramamritham, A new approach for scheduling of parallelizable tasks in real-time multiprocessor systems, Real-Time Systems, vol. 15, no. 1, 1998.
[16] G. Nelissen, V. Berten, J. Goossens, and D. Milojevic, Techniques optimizing the number of processors to schedule multi-threaded tasks, in Proceedings of the 24th Euromicro Conference on Real-Time Systems (ECRTS), 2012.
[17] A. Saifullah, J. Li, K. Agrawal, C. Lu, and C. Gill, Multi-core real-time scheduling for generalized parallel task models, Real-Time Systems, vol. 49, no. 4, 2013.
[18] V. Bonifaci, A. Marchetti-Spaccamela, S. Stiller, and A. Wiese, Feasibility analysis in the sporadic DAG task model, in Proceedings of the 25th Euromicro Conference on Real-Time Systems (ECRTS), 2013.
[19] J. K. Ousterhout et al., Scheduling techniques for concurrent systems, in Proceedings of the International Conference on Distributed Computing Systems (ICDCS), 1982.
[20] P. Courbin, I. Lupu, and J. Goossens, Scheduling of hard real-time multi-phase multi-thread (MPMT) periodic tasks, Real-Time Systems, vol. 49, no. 2, 2013.
[21] K. Lakshmanan, S. Kato, and R. Rajkumar, Scheduling parallel real-time tasks on multi-core processors, in Proceedings of the 31st IEEE Real-Time Systems Symposium (RTSS), 2010.
[22] J. Li, K. Agrawal, C. Lu, and C. Gill, Analysis of global EDF for parallel tasks, in Proceedings of the 25th Euromicro Conference on Real-Time Systems (ECRTS), 2013.
[23] V. Berten, P. Courbin, and J. Goossens, Gang fixed priority scheduling of periodic moldable real-time tasks, in Proceedings of the 5th Junior Researcher Workshop on Real-Time Computing, 2011.
[24] J. Goossens and V. Berten, Gang FTP scheduling of periodic and parallel rigid real-time tasks, arXiv preprint arXiv:1006.2617, 2010.
[25] J. Goossens, S. Funk, and S. Baruah, Priority-driven scheduling of periodic task systems on multiprocessors, Real-Time Systems, vol. 25, no. 2-3, 2003.
[26] S. Baruah, Techniques for multiprocessor global schedulability analysis, in Proceedings of the 28th IEEE Real-Time Systems Symposium (RTSS), 2007.
[27] M. Bertogna, Evaluation of existing schedulability tests for global EDF, in Proceedings of the International Conference on Parallel Processing Workshops (ICPPW), 2009.
[28] M. Bertogna and S. Baruah, Tests for global EDF schedulability analysis, Journal of Systems Architecture, vol. 57, no. 5, 2011.
[29] P. Richard, J. Goossens, and S. Kato, Comments on "Gang EDF schedulability analysis", arXiv preprint, 2017.
[30] U. C. Devi and J. H. Anderson, Tardiness bounds under global EDF scheduling on a multiprocessor, in Proceedings of the 26th IEEE Real-Time Systems Symposium (RTSS), 2005.
[31] H. Leontyev, Compositional analysis techniques for multiprocessor soft real-time scheduling, Ph.D. dissertation, University of North Carolina at Chapel Hill, 2010.
[32] C. Liu, Efficient design, analysis, and implementation of complex multiprocessor real-time systems, Ph.D. dissertation, University of North Carolina at Chapel Hill, 2013.
[33] C. Liu and J. H. Anderson, An O(m) analysis technique for supporting real-time self-suspending task systems, in Proceedings of the 33rd IEEE Real-Time Systems Symposium (RTSS), 2012.
[34] Z. Dong, C. Liu, A. Gatherer, L. McFearin, P. Yan, and J. H. Anderson, Optimal dataflow scheduling on a heterogeneous multiprocessor with reduced response time bounds, in Proceedings of the 29th Euromicro Conference on Real-Time Systems (ECRTS), 2017.

Appendix

Algorithm 1 Δ_i identification algorithm
Require: M, m_1, m_2, ..., m_n
Ensure: Δ_i
1: N = m_i − 1, Δ_i = n.
2: if Σ_{p=1}^{n} m_p ≤ M then
3: τ is an ordinary sporadic task system, and Δ_i does not exist.
4: else
5: for p = 1 to n − 1 do
6: if p < i then
7: z_p = m_p
8: end if
9: if p ≥ i then
10: z_p = m_{p+1}
11: end if
12: end for
13: for y = 0 to M do
14: Δ[0][y] = 0
15: end for
16: while N ≠ Δ_i do
17: for x = 1 to n − 1 do
18: for y = 1 to M − N do
19: Δ[x][y] = Δ[x−1][y]
20: if z_x ≤ y then
21: if Δ[x−1][y − z_x] + z_x ≥ Δ[x−1][y] then
22: Δ[x][y] = Δ[x−1][y − z_x] + z_x
23: else
24: Δ[x][y] = Δ[x−1][y]
25: end if
26: end if
27: end for
28: end for
29: Δ_i = M − Δ[n−1][M − N]
30: if N ≠ Δ_i then
31: N = N − 1
32: end if
33: end while
34: end if

Detailed pseudocode description of Algorithm 1. In line 1, we initially assume that the maximum idle number of processors is N = m_i − 1; in later iterations, we gradually decrease N to find Δ_i. The algorithm initializes Δ_i = n as a sentinel value. In lines 2 and 3, we check whether the total degree of parallelism of all tasks is no greater than the number of processors. If so, this gang task system is an ordinary sporadic task system, and Δ_i does not exist. In lines 5 to 12, we compute z_p, which denotes the degrees of parallelism of the tasks in τ/τ_i. In lines 13 to 15, we initialize Δ[0][y] = 0. In lines 17 to 28, we apply a dynamic programming algorithm to find a subset of tasks in τ/τ_i satisfying the following two properties: (i) the total degree of parallelism of the tasks in this subset is no larger than M − N; and (ii) the total degree of parallelism of the tasks in this subset is the largest among all subsets satisfying property (i). We store the total degree of parallelism of the tasks in this subset in Δ[n−1][M−N]. In lines 29 and 30, we check whether the total degree of parallelism of the tasks in this subset is exactly M − N. In other words, in lines 17 to 30, we check whether there exists a subset of tasks in τ/τ_i such that the total degree of parallelism of the tasks in this subset is exactly M − N. If yes, Δ_i = M − (M − N) = N; if not, we let N = N − 1 and continue this iterative process until the equality is reached. Since Σ_{p=1}^{n} m_p > M, which is already checked in line 2, the termination condition can always be reached. This algorithm runs in polynomial time with complexity O(M²·n).
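Algorithm 1's inner loops form a standard 0/1 subset-sum dynamic program; the whole procedure can be condensed as follows (an illustrative sketch, not the paper's exact pseudocode; it scans blocking sums upward rather than decrementing N):

```python
def delta_dp(M, par, i):
    """Delta_i via dynamic programming. reach[s] is True iff some subset of
    the other tasks' parallelism degrees sums exactly to s (s <= M).
    Delta_i = M minus the smallest reachable sum >= M - par[i] + 1."""
    if sum(par) <= M:
        return None                       # ordinary sporadic system (line 2)
    others = [p for j, p in enumerate(par) if j != i]
    reach = [False] * (M + 1)
    reach[0] = True
    for z in others:                      # classic subset-sum fill, O(n*M)
        for s in range(M, z - 1, -1):
            reach[s] = reach[s] or reach[s - z]
    for s in range(M - par[i] + 1, M + 1):
        if reach[s]:                      # smallest blocking parallelism
            return M - s
    return None

print([delta_dp(10, [5, 4, 4], i) for i in range(3)])   # [2, 1, 1]
```

On the Example 3 task set this reproduces Δ_1 = 2 and Δ_2 = Δ_3 = 1, matching Example 4.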


More information

Scheduling mixed-criticality systems to guarantee some service under all non-erroneous behaviors

Scheduling mixed-criticality systems to guarantee some service under all non-erroneous behaviors Consistent * Complete * Well Documented * Easy to Reuse * Scheduling mixed-criticality systems to guarantee some service under all non-erroneous behaviors Artifact * AE * Evaluated * ECRTS * Sanjoy Baruah

More information

Non-Work-Conserving Non-Preemptive Scheduling: Motivations, Challenges, and Potential Solutions

Non-Work-Conserving Non-Preemptive Scheduling: Motivations, Challenges, and Potential Solutions Non-Work-Conserving Non-Preemptive Scheduling: Motivations, Challenges, and Potential Solutions Mitra Nasri Chair of Real-time Systems, Technische Universität Kaiserslautern, Germany nasri@eit.uni-kl.de

More information

TDDB68 Concurrent programming and operating systems. Lecture: CPU Scheduling II

TDDB68 Concurrent programming and operating systems. Lecture: CPU Scheduling II TDDB68 Concurrent programming and operating systems Lecture: CPU Scheduling II Mikael Asplund, Senior Lecturer Real-time Systems Laboratory Department of Computer and Information Science Copyright Notice:

More information

Optimal Semi-Partitioned Scheduling in Soft Real-Time Systems

Optimal Semi-Partitioned Scheduling in Soft Real-Time Systems Optimal Semi-Partitioned Scheduling in Soft Real-Time Systems James H. Anderson 1, Benjamin N. Casses 1, UmaMaheswari C. Devi 2, and Jeremy P. Erickson 1 1 Dept. of Computer Science, University of North

More information

Controlling Preemption for Better Schedulability in Multi-Core Systems

Controlling Preemption for Better Schedulability in Multi-Core Systems 2012 IEEE 33rd Real-Time Systems Symposium Controlling Preemption for Better Schedulability in Multi-Core Systems Jinkyu Lee and Kang G. Shin Dept. of Electrical Engineering and Computer Science, The University

More information

Lecture 6. Real-Time Systems. Dynamic Priority Scheduling

Lecture 6. Real-Time Systems. Dynamic Priority Scheduling Real-Time Systems Lecture 6 Dynamic Priority Scheduling Online scheduling with dynamic priorities: Earliest Deadline First scheduling CPU utilization bound Optimality and comparison with RM: Schedulability

More information

Optimal Semi-Partitioned Scheduling in Soft Real-Time Systems

Optimal Semi-Partitioned Scheduling in Soft Real-Time Systems Optimal Semi-Partitioned Scheduling in Soft Real-Time Systems James H. Anderson 1, Jeremy P. Erickson 1, UmaMaheswari C. Devi 2, and Benjamin N. Casses 1 1 Dept. of Computer Science, University of North

More information

Semi-Partitioned Fixed-Priority Scheduling on Multiprocessors

Semi-Partitioned Fixed-Priority Scheduling on Multiprocessors Semi-Partitioned Fixed-Priority Scheduling on Multiprocessors Shinpei Kato and Nobuyuki Yamasaki Department of Information and Computer Science Keio University, Yokohama, Japan {shinpei,yamasaki}@ny.ics.keio.ac.jp

More information

Multiprocessor Scheduling I: Partitioned Scheduling. LS 12, TU Dortmund

Multiprocessor Scheduling I: Partitioned Scheduling. LS 12, TU Dortmund Multiprocessor Scheduling I: Partitioned Scheduling Prof. Dr. Jian-Jia Chen LS 12, TU Dortmund 22/23, June, 2015 Prof. Dr. Jian-Jia Chen (LS 12, TU Dortmund) 1 / 47 Outline Introduction to Multiprocessor

More information

The Partitioned Dynamic-priority Scheduling of Sporadic Task Systems

The Partitioned Dynamic-priority Scheduling of Sporadic Task Systems The Partitioned Dynamic-priority Scheduling of Sporadic Task Systems Abstract A polynomial-time algorithm is presented for partitioning a collection of sporadic tasks among the processors of an identical

More information

RUN-TIME EFFICIENT FEASIBILITY ANALYSIS OF UNI-PROCESSOR SYSTEMS WITH STATIC PRIORITIES

RUN-TIME EFFICIENT FEASIBILITY ANALYSIS OF UNI-PROCESSOR SYSTEMS WITH STATIC PRIORITIES RUN-TIME EFFICIENT FEASIBILITY ANALYSIS OF UNI-PROCESSOR SYSTEMS WITH STATIC PRIORITIES Department for Embedded Systems/Real-Time Systems, University of Ulm {name.surname}@informatik.uni-ulm.de Abstract:

More information

Che-Wei Chang Department of Computer Science and Information Engineering, Chang Gung University

Che-Wei Chang Department of Computer Science and Information Engineering, Chang Gung University Che-Wei Chang chewei@mail.cgu.edu.tw Department of Computer Science and Information Engineering, Chang Gung University } 2017/11/15 Midterm } 2017/11/22 Final Project Announcement 2 1. Introduction 2.

More information

Lightweight Real-Time Synchronization under P-EDF on Symmetric and Asymmetric Multiprocessors

Lightweight Real-Time Synchronization under P-EDF on Symmetric and Asymmetric Multiprocessors Consistent * Complete * Well Documented * Easy to Reuse * Technical Report MPI-SWS-216-3 May 216 Lightweight Real-Time Synchronization under P-EDF on Symmetric and Asymmetric Multiprocessors (extended

More information

An Optimal Real-Time Scheduling Algorithm for Multiprocessors

An Optimal Real-Time Scheduling Algorithm for Multiprocessors An Optimal Real-Time Scheduling Algorithm for Multiprocessors Hyeonjoong Cho, Binoy Ravindran, and E. Douglas Jensen ECE Dept., Virginia Tech Blacksburg, VA 24061, USA {hjcho,binoy}@vt.edu The MITRE Corporation

More information

Andrew Morton University of Waterloo Canada

Andrew Morton University of Waterloo Canada EDF Feasibility and Hardware Accelerators Andrew Morton University of Waterloo Canada Outline 1) Introduction and motivation 2) Review of EDF and feasibility analysis 3) Hardware accelerators and scheduling

More information

Optimality Results for Multiprocessor Real-Time Locking

Optimality Results for Multiprocessor Real-Time Locking Optimality Results for Multiprocessor Real-Time Locking Björn B. Brandenburg and James H. Anderson Department of Computer Science, University of North Carolina at Chapel Hill Abstract When locking protocols

More information

Schedulability Analysis and Priority Assignment for Global Job-Level Fixed-Priority Multiprocessor Scheduling

Schedulability Analysis and Priority Assignment for Global Job-Level Fixed-Priority Multiprocessor Scheduling 2012 IEEE 18th Real Time and Embedded Technology and Applications Symposium Schedulability Analysis and Priority Assignment for Global Job-Level Fixed-Priority Multiprocessor Scheduling Hyoungbu Back,

More information

Lecture 13. Real-Time Scheduling. Daniel Kästner AbsInt GmbH 2013

Lecture 13. Real-Time Scheduling. Daniel Kästner AbsInt GmbH 2013 Lecture 3 Real-Time Scheduling Daniel Kästner AbsInt GmbH 203 Model-based Software Development 2 SCADE Suite Application Model in SCADE (data flow + SSM) System Model (tasks, interrupts, buses, ) SymTA/S

More information

Schedulability of Periodic and Sporadic Task Sets on Uniprocessor Systems

Schedulability of Periodic and Sporadic Task Sets on Uniprocessor Systems Schedulability of Periodic and Sporadic Task Sets on Uniprocessor Systems Jan Reineke Saarland University July 4, 2013 With thanks to Jian-Jia Chen! Jan Reineke July 4, 2013 1 / 58 Task Models and Scheduling

More information

Cache-Aware Compositional Analysis of Real- Time Multicore Virtualization Platforms

Cache-Aware Compositional Analysis of Real- Time Multicore Virtualization Platforms University of Pennsylvania ScholarlyCommons Departmental Papers (CIS) Department of Computer & Information Science 12-2013 Cache-Aware Compositional Analysis of Real- Time Multicore Virtualization Platforms

More information

TDDI04, K. Arvidsson, IDA, Linköpings universitet CPU Scheduling. Overview: CPU Scheduling. [SGG7] Chapter 5. Basic Concepts.

TDDI04, K. Arvidsson, IDA, Linköpings universitet CPU Scheduling. Overview: CPU Scheduling. [SGG7] Chapter 5. Basic Concepts. TDDI4 Concurrent Programming, Operating Systems, and Real-time Operating Systems CPU Scheduling Overview: CPU Scheduling CPU bursts and I/O bursts Scheduling Criteria Scheduling Algorithms Multiprocessor

More information

Load Regulating Algorithm for Static-Priority Task Scheduling on Multiprocessors

Load Regulating Algorithm for Static-Priority Task Scheduling on Multiprocessors Technical Report No. 2009-7 Load Regulating Algorithm for Static-Priority Task Scheduling on Multiprocessors RISAT MAHMUD PATHAN JAN JONSSON Department of Computer Science and Engineering CHALMERS UNIVERSITY

More information

CIS 4930/6930: Principles of Cyber-Physical Systems

CIS 4930/6930: Principles of Cyber-Physical Systems CIS 4930/6930: Principles of Cyber-Physical Systems Chapter 11 Scheduling Hao Zheng Department of Computer Science and Engineering University of South Florida H. Zheng (CSE USF) CIS 4930/6930: Principles

More information

EDF Scheduling. Giuseppe Lipari CRIStAL - Université de Lille 1. October 4, 2015

EDF Scheduling. Giuseppe Lipari  CRIStAL - Université de Lille 1. October 4, 2015 EDF Scheduling Giuseppe Lipari http://www.lifl.fr/~lipari CRIStAL - Université de Lille 1 October 4, 2015 G. Lipari (CRIStAL) Earliest Deadline Scheduling October 4, 2015 1 / 61 Earliest Deadline First

More information

Clock-driven scheduling

Clock-driven scheduling Clock-driven scheduling Also known as static or off-line scheduling Michal Sojka Czech Technical University in Prague, Faculty of Electrical Engineering, Department of Control Engineering November 8, 2017

More information

Improved End-to-End Response-Time Bounds for DAG-Based Task Systems

Improved End-to-End Response-Time Bounds for DAG-Based Task Systems Improved End-to-End Response-Time Bounds for DAG-Based Task Systems Kecheng Yang, Ming Yang, and James H. Anderson Department of Computer Science, University of North Carolina at Chapel Hill Abstract This

More information

EDF Scheduling. Giuseppe Lipari May 11, Scuola Superiore Sant Anna Pisa

EDF Scheduling. Giuseppe Lipari   May 11, Scuola Superiore Sant Anna Pisa EDF Scheduling Giuseppe Lipari http://feanor.sssup.it/~lipari Scuola Superiore Sant Anna Pisa May 11, 2008 Outline 1 Dynamic priority 2 Basic analysis 3 FP vs EDF 4 Processor demand bound analysis Generalization

More information

A New Task Model and Utilization Bound for Uniform Multiprocessors

A New Task Model and Utilization Bound for Uniform Multiprocessors A New Task Model and Utilization Bound for Uniform Multiprocessors Shelby Funk Department of Computer Science, The University of Georgia Email: shelby@cs.uga.edu Abstract This paper introduces a new model

More information

Task Reweighting under Global Scheduling on Multiprocessors

Task Reweighting under Global Scheduling on Multiprocessors ask Reweighting under Global Scheduling on Multiprocessors Aaron Block, James H. Anderson, and UmaMaheswari C. Devi Department of Computer Science, University of North Carolina at Chapel Hill March 7 Abstract

More information

Contention-Free Executions for Real-Time Multiprocessor Scheduling

Contention-Free Executions for Real-Time Multiprocessor Scheduling Contention-Free Executions for Real-Time Multiprocessor Scheduling JINKYU LEE, University of Michigan ARVIND EASWARAN, Nanyang Technological University INSIK SHIN, KAIST A time slot is defined as contention-free

More information

The FMLP + : An Asymptotically Optimal Real-Time Locking Protocol for Suspension-Aware Analysis

The FMLP + : An Asymptotically Optimal Real-Time Locking Protocol for Suspension-Aware Analysis The FMLP + : An Asymptotically Optimal Real-Time Locking Protocol for Suspension-Aware Analysis Björn B. Brandenburg Max Planck Institute for Software Systems (MPI-SWS) Abstract Multiprocessor real-time

More information

Partitioned scheduling of sporadic task systems: an ILP-based approach

Partitioned scheduling of sporadic task systems: an ILP-based approach Partitioned scheduling of sporadic task systems: an ILP-based approach Sanjoy K. Baruah The University of North Carolina Chapel Hill, NC. USA Enrico Bini Scuola Superiore Santa Anna Pisa, Italy. Abstract

More information

Schedulability Analysis of the Linux Push and Pull Scheduler with Arbitrary Processor Affinities

Schedulability Analysis of the Linux Push and Pull Scheduler with Arbitrary Processor Affinities Revision 1 July 23, 215 Schedulability Analysis of the Linux Push and Pull Scheduler with Arbitrary Processor Affinities Arpan Gujarati Felipe Cerqueira Björn B. Brandenburg Max Planck Institute for Software

More information

A 2-Approximation Algorithm for Scheduling Parallel and Time-Sensitive Applications to Maximize Total Accrued Utility Value

A 2-Approximation Algorithm for Scheduling Parallel and Time-Sensitive Applications to Maximize Total Accrued Utility Value A -Approximation Algorithm for Scheduling Parallel and Time-Sensitive Applications to Maximize Total Accrued Utility Value Shuhui Li, Miao Song, Peng-Jun Wan, Shangping Ren Department of Engineering Mechanics,

More information

Scheduling Periodic Real-Time Tasks on Uniprocessor Systems. LS 12, TU Dortmund

Scheduling Periodic Real-Time Tasks on Uniprocessor Systems. LS 12, TU Dortmund Scheduling Periodic Real-Time Tasks on Uniprocessor Systems Prof. Dr. Jian-Jia Chen LS 12, TU Dortmund 08, Dec., 2015 Prof. Dr. Jian-Jia Chen (LS 12, TU Dortmund) 1 / 38 Periodic Control System Pseudo-code

More information

Cache-Aware Compositional Analysis of Real- Time Multicore Virtualization Platforms

Cache-Aware Compositional Analysis of Real- Time Multicore Virtualization Platforms University of Pennsylvania ScholarlyCommons Departmental Papers (CIS) Department of Computer & Information Science -25 Cache-Aware Compositional Analysis of Real- Time Multicore Virtualization Platforms

More information

The Feasibility Analysis of Multiprocessor Real-Time Systems

The Feasibility Analysis of Multiprocessor Real-Time Systems The Feasibility Analysis of Multiprocessor Real-Time Systems Sanjoy Baruah Nathan Fisher The University of North Carolina at Chapel Hill Abstract The multiprocessor scheduling of collections of real-time

More information

Scheduling periodic Tasks on Multiple Periodic Resources

Scheduling periodic Tasks on Multiple Periodic Resources Scheduling periodic Tasks on Multiple Periodic Resources Xiayu Hua, Zheng Li, Hao Wu, Shangping Ren* Department of Computer Science Illinois Institute of Technology Chicago, IL 60616, USA {xhua, zli80,

More information

Real-Time Systems. Event-Driven Scheduling

Real-Time Systems. Event-Driven Scheduling Real-Time Systems Event-Driven Scheduling Hermann Härtig WS 2018/19 Outline mostly following Jane Liu, Real-Time Systems Principles Scheduling EDF and LST as dynamic scheduling methods Fixed Priority schedulers

More information

Paper Presentation. Amo Guangmo Tong. University of Taxes at Dallas February 11, 2014

Paper Presentation. Amo Guangmo Tong. University of Taxes at Dallas February 11, 2014 Paper Presentation Amo Guangmo Tong University of Taxes at Dallas gxt140030@utdallas.edu February 11, 2014 Amo Guangmo Tong (UTD) February 11, 2014 1 / 26 Overview 1 Techniques for Multiprocessor Global

More information

the currently active 1 job whose deadline parameter is the smallest, is an optimal scheduling algorithm in the sense that if a system can be scheduled

the currently active 1 job whose deadline parameter is the smallest, is an optimal scheduling algorithm in the sense that if a system can be scheduled Priority-driven scheduling of periodic task systems on multiprocessors Λ Joël Goossens Shelby Funk Sanjoy Baruah Abstract The scheduling of systems of periodic tasks upon multiprocessor platforms is considered.

More information

Embedded Systems Development

Embedded Systems Development Embedded Systems Development Lecture 3 Real-Time Scheduling Dr. Daniel Kästner AbsInt Angewandte Informatik GmbH kaestner@absint.com Model-based Software Development Generator Lustre programs Esterel programs

More information

Segment-Fixed Priority Scheduling for Self-Suspending Real-Time Tasks

Segment-Fixed Priority Scheduling for Self-Suspending Real-Time Tasks Segment-Fixed Priority Scheduling for Self-Suspending Real-Time Tasks Junsung Kim, Björn Andersson, Dionisio de Niz, and Raj Rajkumar Carnegie Mellon University 2/31 Motion Planning on Self-driving Parallel

More information

Laxity dynamics and LLF schedulability analysis on multiprocessor platforms

Laxity dynamics and LLF schedulability analysis on multiprocessor platforms DOI 10.1007/s11241-012-9157-x Laxity dynamics and LLF schedulability analysis on multiprocessor platforms Jinkyu Lee Arvind Easwaran Insik Shin Springer Science+Business Media, LLC 2012 Abstract LLF Least

More information

Mixed-criticality scheduling upon varying-speed multiprocessors

Mixed-criticality scheduling upon varying-speed multiprocessors Mixed-criticality scheduling upon varying-speed multiprocessors Zhishan Guo Sanjoy Baruah The University of North Carolina at Chapel Hill Abstract An increasing trend in embedded computing is the moving

More information

Online Scheduling Switch for Maintaining Data Freshness in Flexible Real-Time Systems

Online Scheduling Switch for Maintaining Data Freshness in Flexible Real-Time Systems Online Scheduling Switch for Maintaining Data Freshness in Flexible Real-Time Systems Song Han 1 Deji Chen 2 Ming Xiong 3 Aloysius K. Mok 1 1 The University of Texas at Austin 2 Emerson Process Management

More information

Non-Preemptive and Limited Preemptive Scheduling. LS 12, TU Dortmund

Non-Preemptive and Limited Preemptive Scheduling. LS 12, TU Dortmund Non-Preemptive and Limited Preemptive Scheduling LS 12, TU Dortmund 09 May 2017 (LS 12, TU Dortmund) 1 / 31 Outline Non-Preemptive Scheduling A General View Exact Schedulability Test Pessimistic Schedulability

More information

Real-time Scheduling of Periodic Tasks (1) Advanced Operating Systems Lecture 2

Real-time Scheduling of Periodic Tasks (1) Advanced Operating Systems Lecture 2 Real-time Scheduling of Periodic Tasks (1) Advanced Operating Systems Lecture 2 Lecture Outline Scheduling periodic tasks The rate monotonic algorithm Definition Non-optimality Time-demand analysis...!2

More information

Task Models and Scheduling

Task Models and Scheduling Task Models and Scheduling Jan Reineke Saarland University June 27 th, 2013 With thanks to Jian-Jia Chen at KIT! Jan Reineke Task Models and Scheduling June 27 th, 2013 1 / 36 Task Models and Scheduling

More information

Embedded Systems 15. REVIEW: Aperiodic scheduling. C i J i 0 a i s i f i d i

Embedded Systems 15. REVIEW: Aperiodic scheduling. C i J i 0 a i s i f i d i Embedded Systems 15-1 - REVIEW: Aperiodic scheduling C i J i 0 a i s i f i d i Given: A set of non-periodic tasks {J 1,, J n } with arrival times a i, deadlines d i, computation times C i precedence constraints

More information

How to deal with uncertainties and dynamicity?

How to deal with uncertainties and dynamicity? How to deal with uncertainties and dynamicity? http://graal.ens-lyon.fr/ lmarchal/scheduling/ 19 novembre 2012 1/ 37 Outline 1 Sensitivity and Robustness 2 Analyzing the sensitivity : the case of Backfilling

More information

Lightweight Real-Time Synchronization under P-EDF on Symmetric and Asymmetric Multiprocessors

Lightweight Real-Time Synchronization under P-EDF on Symmetric and Asymmetric Multiprocessors Consistent * Complete * Well Documented * Easy to Reuse * Lightweight Real-Time Synchronization under P-EDF on Symmetric and Asymmetric Multiprocessors Artifact * AE * Evaluated * ECRTS * Alessandro Biondi

More information

Probabilistic Preemption Control using Frequency Scaling for Sporadic Real-time Tasks

Probabilistic Preemption Control using Frequency Scaling for Sporadic Real-time Tasks Probabilistic Preemption Control using Frequency Scaling for Sporadic Real-time Tasks Abhilash Thekkilakattil, Radu Dobrin and Sasikumar Punnekkat Mälardalen Real-Time Research Center, Mälardalen University,

More information

Non-preemptive multiprocessor scheduling of strict periodic systems with precedence constraints

Non-preemptive multiprocessor scheduling of strict periodic systems with precedence constraints Non-preemptive multiprocessor scheduling of strict periodic systems with precedence constraints Liliana Cucu, Yves Sorel INRIA Rocquencourt, BP 105-78153 Le Chesnay Cedex, France liliana.cucu@inria.fr,

More information

Networked Embedded Systems WS 2016/17

Networked Embedded Systems WS 2016/17 Networked Embedded Systems WS 2016/17 Lecture 2: Real-time Scheduling Marco Zimmerling Goal of Today s Lecture Introduction to scheduling of compute tasks on a single processor Tasks need to finish before

More information

An EDF-based Scheduling Algorithm for Multiprocessor Soft Real-Time Systems

An EDF-based Scheduling Algorithm for Multiprocessor Soft Real-Time Systems An EDF-based Scheduling Algorithm for Multiprocessor Soft Real-Time Systems James H. Anderson, Vasile Bud, and UmaMaheswari C. Devi Department of Computer Science The University of North Carolina at Chapel

More information

A New Sufficient Feasibility Test for Asynchronous Real-Time Periodic Task Sets

A New Sufficient Feasibility Test for Asynchronous Real-Time Periodic Task Sets A New Sufficient Feasibility Test for Asynchronous Real-Time Periodic Task Sets Abstract The problem of feasibility analysis for asynchronous periodic task sets (ie where tasks can have an initial offset

More information

Real-Time Systems. Event-Driven Scheduling

Real-Time Systems. Event-Driven Scheduling Real-Time Systems Event-Driven Scheduling Marcus Völp, Hermann Härtig WS 2013/14 Outline mostly following Jane Liu, Real-Time Systems Principles Scheduling EDF and LST as dynamic scheduling methods Fixed

More information