Improved End-to-End Response-Time Bounds for DAG-Based Task Systems


Improved End-to-End Response-Time Bounds for DAG-Based Task Systems

Kecheng Yang, Ming Yang, and James H. Anderson
Department of Computer Science, University of North Carolina at Chapel Hill

Abstract

This paper considers for the first time end-to-end response-time analysis for DAG-based real-time task systems implemented on heterogeneous multiprocessor platforms. The specific analysis problem that is considered was motivated by an ongoing industrial collaboration. The DAG-based systems considered in this work differ from those considered in prior work in that intra-task parallelism is allowed: while each invocation of a task is sequential, successive invocations of the same task may execute in parallel. This difference enables much smaller response-time bounds to be obtained. These bounds are obtained via a task-model transformation process that ultimately enables prior work to be applied. This process allows some leeway in setting task relative deadlines. It is shown that response-time bounds can be lessened by using linear-programming techniques to determine such relative deadlines. The proposed scheduling approach and analysis are evaluated via both case-study and schedulability experiments. These experiments demonstrate that significant improvements in end-to-end response-time bounds are enabled by the proposed techniques.

1 Introduction

The multicore revolution is currently undergoing a second wave of innovation in the form of heterogeneous processing elements. In the domain of real-time embedded systems, heterogeneous hardware platforms may be desirable to use for a variety of reasons. For example, ARM's big.LITTLE multicore architecture [6] enables performance and energy concerns to be balanced by providing a mix of relatively slower, low-power cores and faster, high-power ones. Also, in settings where data-parallel computations must be supported, using graphics processing units (GPUs) as auxiliary processors to CPUs can enable significant performance gains within a constrained energy budget. Unfortunately, the move towards greater heterogeneity is further complicating software design processes that were already being challenged on account of the significant parallelism that exists in conventional multicore platforms with identical processors. Such complications are in fact impeding advancements in the embedded computing industry today. The authors have first-hand knowledge of this, as their group is often sought out by industry colleagues to help address such complications.

Footnote: Work supported by NSF grants CPS 9, CNS 097, and CPS, AFOSR grant FA90---0, ARO grant W9NF, and funding from both General Motors and FutureWei Corp.

Problem considered herein. In this paper, we report on the results of one such industry interaction, which pertains to a problem that we describe here. This problem involves the scheduling of real-time dataflows on heterogeneous computational elements (CEs), such as CPUs, digital signal processors (DSPs), or one of many types of hardware accelerators. Each dataflow is represented by a DAG, the nodes (resp., edges) of which represent tasks (resp., producer/consumer relationships). A given task may be restricted to run on a specific CE type. Each DAG has a single source task that is invoked periodically. It is possible for multiple dataflows (i.e., different DAGs) to contend for the same CEs. Task preemption may be impossible for some CEs and should in any case be discouraged.
Intra-task parallelism is allowed in the sense that consecutive jobs (i.e., invocations) of the same task can execute in parallel (but each job executes sequentially). In fact, a later job can finish earlier than an earlier one due to variations in running times. The challenge is to devise a multi-resource, real-time scheduler for supporting dataflows as described here, with accompanying per-dataflow end-to-end response-time analysis.

Related work. In the real-time systems community, there has been much schedulability-oriented work pertaining to DAG-based task systems implemented on identical multiprocessor platforms [2, 3, 4, 5, 7, 9, 12, 13, 14, 15, 16]. However, we are not aware of any corresponding work directed at heterogeneous platforms. Moreover, the problem statement above has facets, such as allowing intra-task parallelism, that, to our knowledge, have not been considered before with respect to DAG-based systems. Beyond the real-time systems community, DAG-based systems implemented on heterogeneous platforms have been considered before (e.g., [1, 11, 18]). However, all such work known to us focuses on one-shot, aperiodic DAG-based jobs, rather than periodic or sporadic DAG-based task systems. Moreover, real-time issues are considered only obliquely, from the perspectives of job admission control or job makespan minimization.

Footnote: The particular application domain involves cellular base stations, but knowledge of this domain is not required to understand our results.
Footnote: To maintain order, the data produced by subsequent jobs is buffered until all prior jobs of the same task have completed.

Contributions. In this paper, we formalize the problem described above and then address it by proposing a scheduling approach and associated end-to-end response-time analysis. Our results are obtained by following a transformation process whereby successive task models are introduced such that: (i) the first task model directly formalizes the problem above; (ii) prior analysis can be applied to the last model to obtain response-time bounds under earliest-deadline-first (EDF) scheduling; and (iii) each successive model is a refinement of the prior one in the sense that all DAG-based precedence constraints ensured by the former model are also ensured by the latter. Such a transformation approach was previously used by Liu and Anderson [15] in work on DAG-based systems, but that work focused on identical multiprocessor platforms. Moreover, our work differs from theirs (and all prior work known to us on real-time DAG-based systems) in that we allow intra-task parallelism. This enables much smaller end-to-end response-time bounds to be derived than otherwise would be the case. Additionally, unlike almost all prior work on real-time scheduling approaches for heterogeneous platforms, we do not concern ourselves with the mapping between tasks and CE types, since in our particular problem, this mapping is given and fixed due to the different CE functionalities. Instead, we focus on end-to-end response-time analysis and develop techniques to optimize end-to-end response-time bounds.

As we shall see, our transformation approach allows some leeway in setting the relative deadlines of tasks. We show that it is possible to reduce response-time bounds under EDF scheduling by using a linear program to determine such relative deadlines. We evaluate our proposed techniques via case-study and schedulability experiments. These experiments show that significant improvements in end-to-end response-time bounds are enabled by our techniques. Furthermore, our analysis supports early releasing (see Sec. 6) to improve observed response times. We experimentally demonstrate the efficacy of this technique as well.

Organization. The rest of this paper is organized as follows. In Sec. 2, we define a task model that directly formalizes the problem described above. Then, in Secs. 3 and 4, we define two successive task-model refinements, where the last obtained model is amenable to response-time analysis via prior work. In Sec. 5, we show that this analysis can be improved by solving a linear program as discussed above. Next, in Sec. 6, we show that a previously proposed technique called early releasing [8], which can improve runtime performance, can be applied in our setting without invalidating our response-time bounds. Finally, we assess the efficacy of our results by considering a case-study system in Sec. 7 and by presenting the results of schedulability experiments in Sec. 8. We conclude the paper in Sec. 9.

2 System Model

In this section, we formalize the dataflow-scheduling problem described in Sec. 1 and introduce relevant terminology.

[Figure 1: Precedence constraints within a DAG G_1.]

Each dataflow is represented by a DAG, as discussed earlier. We specifically consider a system G = {G_1, G_2, ..., G_N} comprised of N DAGs. The DAG G_i consists of n_i nodes, which correspond to n_i tasks, denoted τ_i^1, τ_i^2, ..., τ_i^{n_i}. Each task τ_i^v releases a (potentially infinite) series of jobs J_{i,1}^v, J_{i,2}^v, .... The edges in G_i reflect producer/consumer relationships.
A particular task τ_i^v's producers are those tasks with outgoing edges directed to τ_i^v, and its consumers are those with incoming edges directed from τ_i^v. The jth job of task τ_i^v, J_{i,j}^v, cannot commence execution until the jth jobs of all of its producers have completed; this ensures that its necessary input data is available. Such job dependencies only exist with respect to the same invocation of a DAG, and not across different invocations. That is, while each job must execute sequentially, intra-task parallelism is allowed.

Example. Fig. 1 shows an example DAG, G_1. Task τ_1^4's producers are tasks τ_1^2 and τ_1^3; thus, for any j, J_{1,j}^4 needs input data from each of J_{1,j}^2 and J_{1,j}^3, so it must wait until those jobs complete. Because intra-task parallelism is allowed, J_{1,j}^4 and J_{1,j+1}^4 could potentially execute in parallel.

To simplify analysis, we assume that each DAG G_i has exactly one source task τ_i^1, which has only outgoing edges, and one sink task τ_i^{n_i}, which has only incoming edges. Multi-source/multi-sink DAGs can be supported with the addition of singular virtual sources and sinks that connect multiple sources and sinks, respectively. Virtual sources and sinks have a worst-case execution time (WCET) of zero.

We consider the scheduling of DAGs as just described on a heterogeneous hardware platform consisting of different types of CEs. A given CE might be a CPU, DSP, or some specialized hardware accelerator (HAC). Specifically, we assume that the platform consists of M CE pools. Different CE pools may have different types of CEs, but each CE pool π_k consists of m_k identical CEs. Each task τ_i^v has a parameter P_i^v, which denotes the particular CE pool on which it must run, i.e., P_i^v = π_k means that any job of τ_i^v can be scheduled on any CE in the CE pool π_k but cannot execute on any CE in any other CE pool. The WCET of task τ_i^v is denoted C_i^v.
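To make the model concrete, here is a minimal illustrative sketch in Python (hypothetical names; not taken from the paper or any accompanying artifact) of data structures capturing a DAG-based task set as just defined: each task τ_i^v carries its WCET C_i^v and its assigned CE pool P_i^v, while the DAG records the period T_i of its source and the producer/consumer edges.

```python
# Sketch only: a DAG G_i whose nodes are tasks tau_i^v, each pinned to one CE
# pool P_i^v and having WCET C_i^v; the source's period T_i belongs to the DAG.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Task:
    dag: int        # index i of the DAG G_i this task belongs to
    node: int       # index v within G_i (1 = source, n_i = sink)
    wcet: float     # C_i^v
    pool: int       # P_i^v, the CE pool pi_k the task must run on

@dataclass
class DAG:
    period: float                                # T_i, minimum source release separation
    tasks: dict = field(default_factory=dict)    # node index v -> Task
    edges: set = field(default_factory=set)      # (w, v): tau_i^w is a producer of tau_i^v

    def producers(self, v: int):
        """Tasks with an outgoing edge directed to node v."""
        return [w for (w, x) in self.edges if x == v]

    def consumers(self, v: int):
        """Tasks with an incoming edge directed from node v."""
        return [x for (w, x) in self.edges if w == v]
```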

[Figure 2: An example schedule for the DAG-based tasks in G_1 in Fig. 1, showing job releases, job deadlines, job completions, and CPU/DSP execution intervals. (Assume depicted jobs are scheduled alongside other jobs, which are not shown.)]

Although the problem description in Sec. 1 indicated that source tasks are released periodically, we generalize this to allow sporadic releases, i.e., for the DAG G_i, the job releases of τ_i^1 have a minimum separation time, denoted T_i. A non-source task τ_i^v (v > 1) releases its jth job J_{i,j}^v when the jth jobs of all its producer tasks in G_i have completed. That is, letting a_{i,j}^v and f_{i,j}^v denote the release (or arrival) and finish times of J_{i,j}^v, respectively,

    a_{i,j}^v = max{ f_{i,j}^w : τ_i^w is a producer of τ_i^v }.    (1)

The response time of job J_{i,j}^v is defined as f_{i,j}^v − a_{i,j}^v, and the end-to-end response time of the DAG G_i as f_{i,j}^{n_i} − a_{i,j}^1.

Example. Fig. 2 depicts an example schedule for the DAG G_1 in Fig. 1, assuming one of the tasks is required to execute on a DSP and the other tasks are required to execute on a CPU. The first (resp., second) job of each task has a lighter (resp., darker) shading to make them easier to distinguish. Tasks τ_1^2 and τ_1^3 have only one producer, τ_1^1, so whenever task τ_1^1 finishes a job, τ_1^2 and τ_1^3 release a job immediately. In contrast, task τ_1^4 has two producers, τ_1^2 and τ_1^3. Therefore, τ_1^4 cannot release a job as soon as the first of these producers finishes a job, but rather must wait until the other also finishes a job. In addition, note that consecutive jobs of the same task might execute in parallel (e.g., in Fig. 2, the first and second jobs of one of the tasks overlap for part of the schedule). Furthermore, for a given task, a later-released job may even finish earlier than an earlier-released one due to execution-time variations.

Scheduling. Since many CEs are non-preemptible, we use the non-preemptive global EDF scheduling algorithm within each CE pool. The deadline of job J_{i,j}^v is given by

    d_{i,j}^v = a_{i,j}^v + D_i^v,    (2)

where D_i^v is the relative deadline of task τ_i^v. For example, in the example schedule in Fig. 2, relative deadlines of D_1^1 = 7, ..., and D_1^4 = 8 are assumed. In the context of this paper, deadlines mainly serve the purpose of determining jobs' priorities, rather than acting as strict timing constraints for individual jobs. Therefore, deadline misses are acceptable as long as the end-to-end response time of each DAG can be reasonably bounded.

Utilization. We denote the utilization of task τ_i^v by

    u_i^v = C_i^v / T_i.    (3)

We also use Γ_k to denote the set of tasks that are required to execute on the CE pool π_k, i.e.,

    Γ_k = { τ_i^v : P_i^v = π_k }.    (4)

We require that none of the CE pools is overutilized, i.e., for all k,

    Σ_{τ_i^v ∈ Γ_k} u_i^v ≤ m_k.    (5)

Clearly, overutilization could lead to unbounded task response times and unbounded end-to-end response times.
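Continuing the earlier sketch (again with hypothetical names), the release rule (1) and the no-overutilization requirement (5) can be evaluated directly; this is only an illustration of the definitions, not code from the paper.

```python
from collections import defaultdict

def release_time(dag, v, j, finish):
    """Release time a_{i,j}^v per Eq. (1): a non-source task's j-th job is released
    when the j-th jobs of all of its producers have finished.
    `finish[(w, j)]` holds f_{i,j}^w for the already-completed jobs of this DAG."""
    producers = dag.producers(v)
    assert producers, "Eq. (1) applies only to non-source tasks (v > 1)."
    return max(finish[(w, j)] for w in producers)

def pools_not_overutilized(dags, pool_sizes):
    """Check Eq. (5): for every CE pool pi_k, the sum of u_i^v = C_i^v / T_i over
    the tasks assigned to that pool must not exceed the pool size m_k."""
    util = defaultdict(float)
    for dag in dags.values():
        for task in dag.tasks.values():
            util[task.pool] += task.wcet / dag.period   # u_i^v = C_i^v / T_i
    return all(util[k] <= pool_sizes[k] for k in util)
```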
3 Offset-Based Independent Tasks

In this section, we present a second task model, which can be viewed as a refinement of that just presented. The prior model is somewhat problematic because of dependencies among jobs that can be difficult to analyze. In particular, as seen in (1), jobs of non-source tasks have release times that depend on the finish times of other jobs, and hence on their execution times. By (2), the deadlines (and hence priorities) of jobs are affected by similar dependencies. In order to ease the analysis difficulties associated with such job dependencies, we introduce here the offset-based independent task (obi-task) model. Under this model, tasks are partitioned into groups. The ith such group consists of tasks denoted τ_i^1, τ_i^2, ..., τ_i^{n_i}, where τ_i^1 is a designated source task that releases jobs sporadically with a minimum separation of T_i. That is, for any positive integer j,

    a_{i,j+1}^1 − a_{i,j}^1 ≥ T_i.    (6)

Job releases of each non-source task τ_i^v are governed by a new parameter Φ_i^v, called the offset of τ_i^v. Specifically, τ_i^v releases its jth job exactly Φ_i^v time units after the release time of the jth job of the source task τ_i^1 of its group. That is,

    a_{i,j}^v = a_{i,j}^1 + Φ_i^v.    (7)

We define

    Φ_i^1 = 0    (8)

for consistency. Note that, under the obi-task model, a job can be scheduled at any time after its release, independently of the execution of any other jobs. The definitions so far have dealt with job releases. Additionally, the two per-task

parameters C_i^v and P_i^v from Sec. 2 are retained with the same definitions.

From DAG-based task sets to obi-task sets. Any DAG-based task set can be implemented by an obi-task set in an obvious way: each DAG becomes an obi-task group with the same task designated as its source, and all T_i, C_i^v, and P_i^v parameters are retained without modification. What is less obvious is how to define task offsets under the obi-task model. This is done by setting each Φ_i^v (v ≥ 2) parameter to be a constant such that

    Φ_i^v ≥ max{ Φ_i^k + R_i^k : τ_i^k ∈ prod(τ_i^v) },    (9)

where prod(τ_i^v) denotes the set of obi-tasks corresponding to the DAG-based tasks that are the producers of the DAG-based task τ_i^v in G_i, and R_i^k denotes a response-time bound for the obi-task τ_i^k. For now, we assume that R_i^k is known, but later, we will show how to compute it.

Example. Consider again the DAG G_1 in Fig. 1. Assume that, after applying the above transformation, the obi-tasks have response-time bounds of R_1^1 = 9, R_1^2 = ..., R_1^3 = 7, and R_1^4 = 8, respectively. Then, we can set Φ_1^1 = 0, Φ_1^2 = 9, Φ_1^3 = 9, and Φ_1^4 = ..., respectively, and satisfy (9). With these response-time bounds, the end-to-end response-time bound that can be guaranteed is given by R_1 = Φ_1^4 + R_1^4. Fig. 3 depicts a possible schedule for these obi-tasks and illustrates the transformation. As in Fig. 2, the first (resp., second) job of each task has a lighter (resp., darker) shading, and intra-task parallelism is possible (e.g., two consecutive jobs of one task overlap in Fig. 3).

The following properties follow from this transformation process. According to Property 1, a DAG-based task set can be implemented by a corresponding set of obi-tasks, and all consumer/producer constraints in the DAG-based specification will be implicitly guaranteed, provided the offsets of the obi-tasks are properly set (i.e., satisfy (9)).

Property 1. If τ_i^k is a producer of τ_i^v in the DAG-based task system, then for the jth jobs of the corresponding two obi-tasks, f_{i,j}^k ≤ a_{i,j}^v.

Proof. By (7), a_{i,j}^k = a_{i,j}^1 + Φ_i^k, and by the definition of R_i^k, f_{i,j}^k ≤ a_{i,j}^k + R_i^k. Thus,

    f_{i,j}^k ≤ a_{i,j}^1 + Φ_i^k + R_i^k.    (10)

By (7), a_{i,j}^v = a_{i,j}^1 + Φ_i^v, and by (9), Φ_i^v ≥ Φ_i^k + R_i^k. Thus,

    a_{i,j}^v ≥ a_{i,j}^1 + Φ_i^k + R_i^k.    (11)

By (10) and (11), f_{i,j}^k ≤ a_{i,j}^v.

According to Property 2, every obi-task τ_i^v has a minimum job-release separation of T_i.

Property 2. For any obi-task τ_i^v, a_{i,j+1}^v − a_{i,j}^v ≥ T_i.

Proof.
    a_{i,j+1}^v − a_{i,j}^v
      = {by (7)} (a_{i,j+1}^1 + Φ_i^v) − (a_{i,j}^1 + Φ_i^v)
      ≥ {by (6)} T_i.

Property 3 shows how to compute an end-to-end response-time bound R_i.

Property 3. In the obi-task system, for each j, all jobs J_{i,j}^1, J_{i,j}^2, ..., J_{i,j}^{n_i} finish their execution within R_i time units after a_{i,j}^1, where

    R_i = Φ_i^{n_i} + R_i^{n_i}.    (12)

Proof. By (7) and the definition of R_i^v, J_{i,j}^v finishes by time a_{i,j}^1 + Φ_i^v + R_i^v. Thus, J_{i,j}^{n_i} in particular finishes within Φ_i^{n_i} + R_i^{n_i} = R_i time units after a_{i,j}^1. Also, by (9), Φ_i^v + R_i^v ≤ Φ_i^{n_i}, since τ_i^{n_i} is the single sink in G_i. Because Φ_i^{n_i} ≤ R_i, this implies that, for any v, J_{i,j}^v finishes within R_i time units after a_{i,j}^1.
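Once per-task response-time bounds R_i^v are known (they are derived in Sec. 4), the offset assignment (9) and the end-to-end bound (12) are mechanical. The sketch below (hypothetical names, continuing the earlier illustrative structures) assigns each offset the smallest value permitted by (9) by visiting the tasks of a DAG in topological order.

```python
from graphlib import TopologicalSorter   # Python 3.9+

def assign_offsets(dag, R):
    """Smallest offsets satisfying Eq. (9): Phi_i^v >= max over producers k of (Phi_i^k + R_i^k).
    `R[v]` is a response-time bound R_i^v for node v; returns {v: Phi_i^v}."""
    preds = {v: set(dag.producers(v)) for v in dag.tasks}
    phi = {}
    for v in TopologicalSorter(preds).static_order():
        if not preds[v]:                      # source task: Eq. (8)
            phi[v] = 0.0
        else:                                 # non-source task: Eq. (9) with equality
            phi[v] = max(phi[k] + R[k] for k in preds[v])
    return phi

def end_to_end_bound(dag, R, sink):
    """Eq. (12): R_i = Phi_i^{n_i} + R_i^{n_i}."""
    phi = assign_offsets(dag, R)
    return phi[sink] + R[sink]
```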
4 Npc-Sporadic Tasks

By the results of the prior section, we can compute end-to-end DAG response-time bounds by determining per-task response-time bounds for obi-tasks. In this section, we show that the latter can be done by exploiting prior work pertaining to a task model called the npc-sporadic task model ("npc" stands for "no precedence constraints") [10, 19].

An npc-sporadic task τ_i is specified by (C_i, T_i, D_i), where C_i is its WCET, T_i is the minimum separation time between consecutive job releases of τ_i, and D_i is its relative deadline. As before, τ_i's utilization is u_i = C_i / T_i. The main difference between the conventional sporadic task model and the npc-sporadic task model is that the former requires successive jobs of each task to execute in sequence, while the latter allows them to execute in parallel. That is, under the conventional sporadic task model, job J_{i,j+1} cannot commence execution until its predecessor J_{i,j} completes, even if a_{i,j+1}, the release time of J_{i,j+1}, has elapsed. In contrast, under the npc-sporadic task model, any job can execute as soon as it is released. Note that, although we allow intra-task parallelism, each individual job still must execute sequentially.

Yang and Anderson [19] investigated the GEDF scheduling of npc-sporadic tasks on uniform heterogeneous multiprocessor platforms, where different processors may have different speeds. By setting each processor's speed to be 1.0, the following theorem follows from their work.

Theorem 1. (Follows from a theorem in [19].) Consider the scheduling of a set of npc-sporadic tasks τ on m identical processors. Under non-preemptive global EDF scheduling, each npc-task τ_i ∈ τ has the following

response-time bound, provided Σ_{τ_l ∈ τ} u_l ≤ m:

    (1/m) · ( D_i · Σ_{τ_l ∈ τ} u_l + Σ_{τ_l ∈ τ} u_l · max{0, T_l − D_l} ) + max_{τ_l ∈ τ} {C_l} + ((m − 1)/m) · C_i.

[Figure 3: An example schedule of the obi-tasks corresponding to the DAG-based tasks in G_1 in Fig. 1, annotated with the offsets Φ_1^v, the per-task response-time bounds R_1^v, and R_1, the end-to-end response-time bound for G_1. (Assume depicted jobs are scheduled alongside other jobs, which are not shown.)]

We now show that Theorem 1 can be applied to obtain per-task response-time bounds for the obi-task set that implements a DAG-based task system, as defined in Sec. 3.

Concrete vs. non-concrete. A concrete sequence of job releases that satisfies a task's specification (under either the obi- or the npc-sporadic task model) is called an instantiation of that task. An instantiation of a task set is defined similarly. In contrast, a task or a task set that can have multiple (potentially infinitely many) instantiations satisfying its specification (e.g., minimum release separation) is called non-concrete.

By Property 2, any instantiation of an obi-task τ_i^v is an instantiation of the npc-sporadic task τ_i^v = (C_i^v, T_i, D_i^v). Therefore, any instantiation of an obi-task set {τ_i^v : P_i^v = π_k} is an instantiation of the npc-sporadic task set {τ_i^v : P_i^v = π_k}. Also, since obi-tasks execute independently of one another, obi-tasks executing in different CE pools cannot affect each other. Since each CE pool π_k has m_k identical processors, the problem we must consider is that of scheduling an instantiation of the npc-sporadic task set {τ_i^v : P_i^v = π_k} on m_k identical processors. Since Theorem 1 applies to a non-concrete npc-sporadic task set, it applies to every concrete instantiation of that npc-sporadic task set. Thus, we have the following response-time bound for each obi-task τ_i^v:

    R_i^v = (1/m_k) · ( D_i^v · Σ_{τ_l^w ∈ Γ_k} u_l^w + Σ_{τ_l^w ∈ Γ_k} u_l^w · max{0, T_l − D_l^w} ) + max_{τ_l^w ∈ Γ_k} {C_l^w} + ((m_k − 1)/m_k) · C_i^v,    (13)

where

    Γ_k = { τ_l^w : P_l^w = π_k }.    (14)

Equation (13) is applicable as long as all task relative deadlines are non-negative. Furthermore, note that each task's response-time bound depends only on the parameters of tasks that share the same CE pool, and these response-time bounds do not depend on the offsets {Φ_i^v}. Therefore, given a system of tasks specified by (C_i^v, T_i, D_i^v, P_i^v), we can compute a response-time bound R_i^v for each task. Also, given the response-time bounds {R_i^v}, proper offset settings can be computed according to (9) by considering the tasks in each DAG in topological order; by (8), for each source task τ_i^1, Φ_i^1 = 0. Thus, by Property 3, an end-to-end response-time bound R_i can be computed for each DAG G_i. (Note that the response-time bound for a virtual source/sink is not computed by (13), but is zero by definition, since its WCET is zero: any job of such a task completes in zero time as soon as it is released.)

5 Setting Relative Deadlines

In the prior sections, we showed that, by applying our proposed transformation techniques, an end-to-end response-

time bound for each DAG can be established, given arbitrary but fixed relative-deadline settings. That is, given D_i^v ≥ 0 for any i, v, we can compute corresponding end-to-end response-time bounds (i.e., R_i for each i) by (13), (8), (9), and (12). One obvious choice for setting relative deadlines would be to make them implicit, i.e., D_i^v = T_i for any i, v. However, other choices might actually lead to smaller response-time bounds. In this section, we show that the problem of determining the best relative-deadline settings can be cast as a linear-programming problem, which can be solved in polynomial time.

5.1 Linear Program

In the proposed linear program (LP), there are three variables per task τ_i^v: D_i^v, Φ_i^v, and R_i^v. The parameters T_i, C_i^v, and m_k are viewed as constants. Thus, there are 3V variables in total, where V is the total number of tasks, i.e., the total number of nodes across all DAGs in the system. Before stating the required constraints, we first establish the following theorem, which shows that a relative-deadline setting of D_x^y > T_x is pointless to consider.

Theorem 2. If D_x^y > T_x, then by setting D_x^y = T_x, R_x^y, the response-time bound of task τ_x^y, will decrease, and each other task's response-time bound will remain the same.

Proof. To begin, note that, by (13), τ_x^y does not impact the response-time bounds of those tasks executing on CE pools other than P_x^y. Therefore, the response-time bounds {R_i^v : P_i^v ≠ P_x^y} for such tasks are not altered by any change to D_x^y. In the remainder of the proof, we consider a task τ_i^v such that P_i^v = P_x^y. Let R_i^v and R'_i^v denote the response-time bounds for τ_i^v before and after, respectively, reducing D_x^y to T_x.

If i = x and v = y, then by (13),

    R_x^y − R'_x^y = ((D_x^y − T_x)/m_k) · Σ_{τ_l^w ∈ Γ_k} u_l^w + (u_x^y/m_k) · (max{0, T_x − D_x^y} − max{0, T_x − T_x})
      > {since D_x^y > T_x} 0.

Alternatively, if i ≠ x or v ≠ y, then by (13),

    R_i^v − R'_i^v = (u_x^y/m_k) · (max{0, T_x − D_x^y} − max{0, T_x − T_x})
      = {since D_x^y > T_x} 0.

Thus, the theorem follows.

By Theorem 2, the mentioned reduction of D_x^y does not increase the response-time bound for any task. By (9), this implies that none of the offsets, {Φ_i^v}, needs to be increased. Therefore, by Property 3, the end-to-end response-time bound does not increase. These properties motivate our first set of linear constraints.

Constraint Set (i): For each task τ_i^v, 0 ≤ D_i^v ≤ T_i.

2V individual linear inequalities arise from this constraint set, where V is the total number of tasks.

Another issue we must address is that of ensuring that the offset settings, given by (9), are encoded in our LP. This gives rise to the next constraint set.

Constraint Set (ii): For each edge from τ_i^w to τ_i^v in a DAG G_i, Φ_i^v ≥ Φ_i^w + R_i^w.

There are E distinct constraints in this set, where E is the total number of edges in all DAGs in this system.

Finally, we have a set of constraints that are linear equality constraints.

Constraint Set (iii): With Constraint Set (i), it is clear that we can re-write (13) as follows, for each task τ_i^v,

    R_i^v = (1/m_k) · ( D_i^v · Σ_{τ_l^w ∈ Γ_k} u_l^w + Σ_{τ_l^w ∈ Γ_k} u_l^w · (T_l − D_l^w) ) + max_{τ_l^w ∈ Γ_k} {C_l^w} + ((m_k − 1)/m_k) · C_i^v,    (15)

where Γ_k = { τ_l^w : P_l^w = π_k }. Moreover, by (8), for each DAG G_i, Φ_i^1 = 0.

Constraint Set (iii) yields V + G linear equations, where G denotes the number of DAGs.

Constraint Sets (i), (ii), and (iii) fully specify our LP, with the exception of the objective function. In this LP, there are 3V variables, 2V + E inequality constraints, and V + G linear equality constraints.
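For illustration, the sketch below encodes Constraint Sets (i)-(iii) with the PuLP modeling library and uses the single-DAG objective discussed in Sec. 5.2 below. It is only a sketch under stated assumptions: the DAG/Task structures are the hypothetical ones from the earlier sketches, Eq. (15) is encoded as reconstructed above, and any LP solver could be substituted.

```python
# Sketch only: builds the LP of Sec. 5 over hypothetical DAG/Task structures.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

def build_deadline_lp(dags, pool_sizes):
    prob = LpProblem("relative_deadline_selection", LpMinimize)
    D, Phi, R = {}, {}, {}
    for i, dag in dags.items():
        for v in dag.tasks:
            # Constraint Set (i) is encoded via the variable bounds 0 <= D <= T_i.
            D[i, v] = LpVariable(f"D_{i}_{v}", lowBound=0, upBound=dag.period)
            Phi[i, v] = LpVariable(f"Phi_{i}_{v}", lowBound=0)
            R[i, v] = LpVariable(f"R_{i}_{v}", lowBound=0)

    # Group tasks by CE pool (Gamma_k).
    gamma = {}
    for i, dag in dags.items():
        for v, t in dag.tasks.items():
            gamma.setdefault(t.pool, []).append((i, v, t, dag))

    # Constraint Set (iii): Eq. (15), linear in the D variables.
    for k, members in gamma.items():
        m_k = pool_sizes[k]
        cmax = max(t.wcet for (_, _, t, _) in members)
        for (i, v, t, _) in members:
            prob += R[i, v] == (
                (1.0 / m_k) * lpSum((t2.wcet / d2.period) * D[i, v] for (_, _, t2, d2) in members)
                + (1.0 / m_k) * lpSum((t2.wcet / d2.period) * (d2.period - D[i2, v2])
                                      for (i2, v2, t2, d2) in members)
                + cmax + (m_k - 1) / m_k * t.wcet)

    for i, dag in dags.items():
        src = min(dag.tasks)              # assume node 1 is the source
        prob += Phi[i, src] == 0          # Eq. (8)
        # Constraint Set (ii): Phi_i^v >= Phi_i^w + R_i^w for each edge (w, v).
        for (w, v) in dag.edges:
            prob += Phi[i, v] >= Phi[i, w] + R[i, w]

    # Single-DAG objective (Sec. 5.2): minimize Phi_i^{n_i} + R_i^{n_i}.
    (i0, dag0), = dags.items()            # assumes exactly one DAG
    sink = max(dag0.tasks)                # assume node n_i is the sink
    prob += Phi[i0, sink] + R[i0, sink]
    return prob, D

# Usage sketch: prob, D = build_deadline_lp(dags, pool_sizes); prob.solve()
# chosen_deadlines = {key: value(var) for key, var in D.items()}
```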
5.2 Objective Function

Different objective functions can be specified for our LP that optimize end-to-end response-time bounds in different senses. Here, we consider a few examples.

Single-DAG systems. For systems where only a single DAG exists, the optimization criterion is rather clear. In order to optimize the end-to-end response-time bound of the single DAG, the objective function should minimize the end-to-end response-time bound of the only DAG, G_1. That is, the desired LP is as follows:

    minimize   Φ_1^{n_1} + R_1^{n_1}
    subject to Constraint Sets (i), (ii), and (iii).

Multiple-DAG systems. For systems containing multiple DAGs, choices exist as to the optimization criteria to consider. We list several here.

Minimizing the average end-to-end response-time

bound:

    minimize   Σ_i (Φ_i^{n_i} + R_i^{n_i})
    subject to Constraint Sets (i), (ii), and (iii).

Minimizing the maximum end-to-end response-time bound:

    minimize   Y
    subject to ∀i: Φ_i^{n_i} + R_i^{n_i} ≤ Y,
               Constraint Sets (i), (ii), and (iii).

Minimizing the maximum proportional end-to-end response-time bound:

    minimize   Y
    subject to ∀i: (Φ_i^{n_i} + R_i^{n_i}) / T_i ≤ Y,
               Constraint Sets (i), (ii), and (iii).

6 Early Releasing

Transforming a DAG-based task system to a corresponding obi-task system enabled us to derive an end-to-end response-time bound for each DAG. However, such a transformation may actually cause observed end-to-end response times at runtime to increase, because the offsets introduced in the transformation may prevent a job from executing even if all of its producers have already finished. For example, in Fig. 3, the first job of τ_1^2 cannot execute until time 9, even though its corresponding producer job, the first job of τ_1^1, has finished well before that time.

Observed response times can be improved under deadline-based scheduling, without altering analytical response-time bounds, by using a technique called early releasing [8]. When early releasing is allowed, a job is eligible for execution as soon as all of its corresponding producer jobs have finished, even if this condition is satisfied before its actual release time (a small sketch of this eligibility rule follows this section's example). Early releasing does not impact analytical response-time bounds. To see why, note that any instantiation of a set of obi-tasks with early releasing is an instantiation of a set of npc-sporadic tasks with early releasing. Early releasing does not affect the response-time analysis for npc-sporadic tasks presented previously [19], because this analysis is based on the total demand for processing time due to jobs with deadlines at or before a particular time instant. Early releasing does not change upper bounds on such demand, because every job's actual release time, and hence its deadline, is unaltered by early releasing. Thus, the response-time bounds, and therefore the end-to-end response-time bounds, previously established without early releasing still hold with early releasing.

[Figure 4: An example schedule of the obi-tasks corresponding to the DAG-based tasks in G_1 in Fig. 1, when early releasing is allowed. (Assume depicted jobs are scheduled alongside other jobs, which are not shown.)]

Example. Considering G_1 in Fig. 1 again, Fig. 3 is a possible schedule, without early releasing, for the obi-tasks that implement G_1, as discussed earlier. When we allow early releasing, we do not change any release times or deadlines, but simply allow a job to become eligible for execution before its release time provided its producers have finished. Fig. 4 depicts a possible schedule where early releasing is allowed, assuming the same releases and deadlines as in Fig. 3. Several jobs now commence execution before their release times. As a result, observed end-to-end response times are reduced, while still retaining all response-time bounds (per-task and end-to-end).
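The eligibility rule just described is easy to state in code. The sketch below (hypothetical names, continuing the earlier illustrative structures) returns the earliest time a job may start under early releasing, while leaving its release time, and hence its deadline and GEDF priority, untouched.

```python
def eligible_time(dag, v, j, source_release, finish):
    """Earliest time job J_{i,j}^v may start under early releasing: all producers'
    j-th jobs must have finished, which may occur before the obi-task release
    a_{i,j}^v = a_{i,j}^1 + Phi_i^v."""
    producers = dag.producers(v)
    if not producers:                         # source task: eligible at its release
        return source_release
    return max(finish[(w, j)] for w in producers)

def priority_deadline(v, j, source_release, phi, rel_deadline):
    """The deadline (and hence GEDF priority) still uses the unchanged release time."""
    a = source_release + phi[v]               # a_{i,j}^v, Eq. (7)
    return a + rel_deadline[v]                # d_{i,j}^v, Eq. (2)
```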

7 Case Study

To illustrate the computational details of our analysis, we consider here a case-study system consisting of three DAGs, G_1, G_2, and G_3, which are specified in Fig. 5. π_1 is a CE pool consisting of two identical CPUs, and π_2 is a CE pool consisting of two identical DSPs. Thus, m_1 = m_2 = 2.

[Figure 5: DAGs in the case-study system: (a) G_1, (b) G_2, (c) G_3. Each node is labeled with its task's WCET C_i^v and assigned CE pool P_i^v, and each DAG with its period T_i.]

Utilization check. First, we must calculate the total utilization of all tasks assigned to each CE pool to make sure that neither is overutilized. We have Σ_{τ_i^v ∈ Γ_1} u_i^v < 2 = m_1 and Σ_{τ_i^v ∈ Γ_2} u_i^v < 2 = m_2, so (5) holds for both pools.

Virtual source/sink. Note that all DAGs have a single source and a single sink, except for G_3, which has two sinks. For it, we must create a single virtual sink, which has a WCET of 0 and a response-time bound of 0. The resulting DAG G_3' is depicted in Fig. 6.

[Figure 6: G_3', where a virtual sink (C = 0, P = N/A, R = 0) is created for G_3.]

7.1 Implicit Deadlines

We now show how to compute response-time bounds assuming implicit deadlines, i.e., D_i^v = T_i for every task (the relative deadline of the virtual sink is irrelevant). Later, we will show how the use of LP techniques alters these bounds. In order to derive an end-to-end response-time bound, we first transform the original DAG-based tasks into obi-tasks as described in Sec. 3. Next, we calculate a response-time bound R_i^v for each obi-task τ_i^v by (13). The intermediate results used in these calculations are the per-pool utilization sums Σ_{τ_l^w ∈ Γ_k} u_l^w, the per-pool maximum WCETs max_{τ_l^w ∈ Γ_k} {C_l^w}, and the fact that max{0, T_l − D_l^w} = 0 for all l and w, since deadlines are implicit. The resulting task response-time bounds, {R_i^v}, are listed in Table 1. Note that, as a virtual sink, the response-time bound for the virtual sink of G_3' does not need to be computed by (13), but is 0 by definition. By (8) and (9), the offsets of the obi-tasks can now be computed in topological order with respect to each DAG. The resulting offsets, {Φ_i^v}, are also shown in Table 1. Finally, by Property 3, we have an end-to-end response-time bound for each DAG: R_1 = Φ_1^{n_1} + R_1^{n_1}, R_2 = Φ_2^{n_2} + R_2^{n_2}, and R_3 = Φ_3^{n_3} + R_3^{n_3}.

7.2 LP-Based Deadline Settings

If we use LP techniques to optimize end-to-end response-time bounds, then choices exist regarding the objective function, because our system has multiple DAGs. We consider three choices here.

Minimizing the average end-to-end response-time bound. For this choice, the relative-deadline settings, obi-task response-time bounds, and obi-task offsets are as shown in Table 2(a), and the resulting end-to-end response-time bounds follow by (12).

Minimizing the maximum end-to-end response-time bound. For this choice, the relative-deadline settings, obi-task

response-time bounds, and obi-task offsets are as shown in Table 2(b), and the resulting end-to-end response-time bounds again follow by (12).

Minimizing the maximum proportional end-to-end response-time bound. For this choice, the relative-deadline settings, obi-task response-time bounds, and obi-task offsets are as shown in Table 2(c), and the resulting end-to-end response-time bounds again follow by (12).

[Table 1: Case-study task response-time bounds {R_i^v} and obi-task offsets {Φ_i^v} assuming implicit deadlines. Bold entries denote sinks.]

[Table 2: Case-study relative-deadline settings {D_i^v}, obi-task response-time bounds {R_i^v}, and obi-task offsets {Φ_i^v} when using linear programming to (a) minimize average end-to-end response-time bounds, (b) minimize maximum end-to-end response-time bounds, and (c) minimize maximum proportional end-to-end response-time bounds. Bold entries denote sinks.]

7.3 Early Releasing

As discussed in Sec. 6, early releasing can improve observed response times without compromising response-time bounds. The value of allowing early releasing can be seen in the results reported in Table 3. This table gives the largest observed end-to-end response time of each DAG, assuming implicit deadlines, with and without early releasing, in a simulated schedule. Analytical bounds are shown as well.

[Table 3: Observed end-to-end response times of G_1, G_2, and G_3 with and without early releasing, and analytical end-to-end response-time bounds, for the implicit-deadline setting.]

8 Schedulability Study

The case study presented in the prior section demonstrates the value of the proposed techniques in systems comprised of multiple DAGs hosted on a heterogeneous hardware platform. In this section, we examine specifically the two techniques that underlie our work: allowing intra-task parallelism, as provided by the npc-sporadic task model, and determining relative-deadline settings by solving an LP.

Comparison setup. To examine these techniques, we performed a schedulability study involving randomly generated DAG-based task systems. Specifically, we compared three strategies: (i) transforming to a conventional sporadic task system and using implicit relative deadlines, which is a strategy used in prior work []; (ii) transforming to an

npc-sporadic task system and using implicit relative deadlines; and (iii) transforming to an npc-sporadic task system and using LP-based relative deadlines. To keep our study tractable, we limited attention to task systems with only one DAG scheduled on an identical platform. This scoping is sufficient to demonstrate the value of our techniques. Also, as discussed in Sec. 5.2, the LP for single-DAG task systems has a unique objective function. Furthermore, strategy (i) has been considered previously in prior work [, 9], and that work assumed an identical platform.

Random system generation. We generated DAG-based task systems using a method similar to that used by other authors [, ]. We generated systems for different task counts. For a given task count n, we randomly generated a number of DAGs with n nodes, with one node designated as a source and one as a sink. For each pair of internal nodes (i.e., nodes that are neither the source nor the sink), we added an edge between them with probability edgeprob, a parameter that we varied in our experiments. We directed each such edge from the node with the smaller index to that with the larger index, to ensure that the graph is acyclic. Finally, we added an edge from the source to each internal node with no incoming edges, and to the sink from each internal node with no outgoing edges. (A small sketch of this generation procedure is given after the results discussion below.) We considered a platform comprised of eight identical processors, and a range of total utilizations up to 8 in fixed increments. For each designated total utilization and each given DAG, we randomly generated several sets of task utilizations using the MATLAB function randfixedsum() [17], with per-task utilizations ranging in [0, 1]. For each given total utilization (i.e., each data point on the plots considered below), we report end-to-end response-time bounds under strategies (i)-(iii) averaged over all of the task sets generated for that total utilization.

Results. In all cases that we considered, the two evaluated techniques improved end-to-end response-time bounds, often significantly. Due to space constraints, we present here only one representative choice of task count and edgeprob. For this case, Fig. 7 plots normalized end-to-end response-time bounds (NERTBs) as a function of total task-system utilization. A DAG's NERTB is defined as R/(T·(h + 1)), where R is its end-to-end response-time bound, T is its period, and h is its height (i.e., the longest path length from source to sink). This same metric has been considered in prior work [9]. Each data point represents the average NERTB among the task sets with the same total utilization.

[Figure 7: Average NERTBs as a function of total utilization for the randomly generated single-DAG systems, comparing (top to bottom) conventional sporadic tasks with implicit deadlines, npc-sporadic tasks with implicit deadlines, and npc-sporadic tasks with LP-based deadlines.]

By comparing the upper two curves, we see that the introduction of intra-task parallelism (i.e., the npc-sporadic model) improves end-to-end response-time bounds significantly. Comparing the lower two curves, we see that using LP-based relative deadlines yields further improvement.
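For illustration, the sketch below mirrors the DAG-generation procedure described above; the utilization generator is a crude normalized-random stand-in for MATLAB's randfixedsum() [17] rather than a faithful reimplementation, and all names are hypothetical.

```python
import random

def generate_dag(n, edgeprob, rng=None):
    """Random DAG on nodes 1..n: node 1 is the source, node n is the sink.
    Internal nodes get an edge (smaller index -> larger index) with probability
    edgeprob; orphaned internal nodes are then wired to the source/sink."""
    rng = rng or random.Random()
    internal = range(2, n)
    edges = {(w, v) for w in internal for v in internal
             if w < v and rng.random() < edgeprob}
    for v in internal:
        if not any(x == v for (_, x) in edges):   # no incoming edge -> connect to source
            edges.add((1, v))
        if not any(w == v for (w, _) in edges):   # no outgoing edge -> connect to sink
            edges.add((v, n))
    return edges

def random_utilizations(n, total, rng=None):
    """Stand-in for randfixedsum(): n per-task utilizations in (0, 1] summing to
    `total` (simple normalize-and-reject; randfixedsum samples uniformly instead)."""
    rng = rng or random.Random()
    assert total < n, "needs total utilization strictly below the task count"
    while True:
        raw = [rng.random() for _ in range(n)]
        utils = [total * x / sum(raw) for x in raw]
        if all(u <= 1.0 for u in utils):          # per-task utilization must lie in [0, 1]
            return utils
```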
Pipelines. We also considered pipelines, which are an important special case of a DAG. A pipeline consists of n nodes linked together linearly, with a height of n. Fig. 8 depicts average NERTBs as a function of pipeline length (n) for randomly generated pipelines of total utilization 8.0.

[Figure 8: Average NERTBs for pipelines with total utilization of 8, as a function of pipeline length, comparing conventional sporadic tasks with implicit deadlines, npc-sporadic tasks with implicit deadlines, and npc-sporadic tasks with LP-based deadlines.]

As seen, NERTBs decrease as pipeline length increases. This is because longer pipelines have more tasks, and thus per-task utilizations are lower, given that total utilization is fixed. The curves in Fig. 8 show that, for higher per-task utilizations, the introduction of intra-task parallelism (i.e., the npc-sporadic model) has a much more significant impact on end-to-end response-time bounds. As per-task utilizations decrease, the impact of using LP-based relative-deadline settings becomes more significant.

9 Conclusion

We presented task-transformation techniques to provide end-to-end response-time bounds for DAG-based tasks implemented on heterogeneous multiprocessor platforms where intra-task parallelism is allowed. We also presented an LP-based method for setting relative deadlines that can be applied to improve these bounds. We evaluated the efficacy of these results by considering a case-study task system and by conducting a schedulability study. To our knowledge, this paper is the first to present end-to-end response-time analysis for the considered context. In future work, we intend to extend these techniques to deal with synchronization requirements and to incorporate methods for limiting interference caused by contention for shared hardware components such as caches, memory banks, etc.

References

[1] R. Bajaj and D. Agrawal. Scheduling multiple task graphs in heterogeneous distributed real-time systems by exploiting schedule holes with bin packing techniques. IEEE Transactions on Parallel and Distributed Systems, ():07 8, 00.
[2] S. Baruah. Improved multiprocessor global schedulability analysis of sporadic DAG task systems. In Proceedings of the th Euromicro Conference on Real-Time Systems, pages 97 0, 0.
[3] S. Baruah. The federated scheduling of constrained-deadline sporadic DAG task systems. In Proceedings of the 8th Conference on Design, Automation and Test in Europe, pages 8, 0.
[4] S. Baruah. Federated scheduling of sporadic DAG task systems. In Proceedings of the 9th IEEE International Parallel and Distributed Processing Symposium, pages 79 8, 0.
[5] S. Baruah, V. Bonifaci, A. Marchetti-Spaccamela, L. Stougie, and A. Wiese. A generalized parallel task model for recurrent real-time processes. In Proceedings of the rd IEEE Real-Time Systems Symposium, pages 7, 0.
[6] big.LITTLE Processing. /technologies/biglittleprocessing.php.
[7] V. Bonifaci, A. Marchetti-Spaccamela, S. Stiller, and A. Wiese. Feasibility analysis in the sporadic DAG task model. In Proceedings of the th Euromicro Conference on Real-Time Systems, pages , 0.
[8] U. Devi. Soft Real-Time Scheduling on Multiprocessors. PhD thesis, University of North Carolina, Chapel Hill, NC, 00.
[9] G. Elliott, N. Kim, J. Erickson, C. Liu, and J. Anderson. Minimizing response times of automotive dataflows on multicore. In Proceedings of the 0th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications, pages 0, 0.
[10] J. Erickson and J. Anderson. Response time bounds for G-EDF without intra-task precedence constraints. In Proceedings of the th International Conference on Principles of Distributed Systems, pages 8, 0.
[11] T. Grandpierre, C. Lavarenne, and Y. Sorel. Optimized rapid prototyping for real-time embedded heterogeneous multiprocessors. In Proceedings of the 7th International Workshop on Hardware/Software Codesign, pages 7 78, 999.
[12] J. Li, K. Agrawal, C. Lu, and C. Gill. Analysis of global EDF for parallel tasks. In Proceedings of the th Euromicro Conference on Real-Time Systems, pages , 0.
[13] J. Li, A. Saifullah, K. Agrawal, C. Gill, and C. Lu. Analysis of federated and global scheduling for parallel real-time tasks. In Proceedings of the th Euromicro Conference on Real-Time Systems, pages 9 9, 0.
[14] J. Li, A. Saifullah, K. Agrawal, C. Gill, and C. Lu. Capacity augmentation bound of federated scheduling for parallel DAG tasks. Technical Report WUCSE-0-, Washington University in St. Louis, 0.
[15] C. Liu and J. Anderson. Supporting soft real-time DAG-based systems on multiprocessors with no utilization loss. In Proceedings of the st IEEE Real-Time Systems Symposium, pages , 00.
[16] A. Saifullah, K. Agrawal, C. Lu, and C. Gill. Multi-core real-time scheduling for generalized parallel task models. In Proceedings of the nd IEEE Real-Time Systems Symposium, pages 7, 0.
[17] R. Stafford. Random vectors with fixed sum. mathworks.com/matlabcentral/fileexchange/9700-random-vectors-with-fixed-sum.
[18] G. Stavrinides and H. Karatza. Scheduling multiple task graphs in heterogeneous distributed real-time systems by exploiting schedule holes with bin packing techniques. Simulation Modelling Practice and Theory, 9():0, 0.
[19] K. Yang and J. Anderson. Optimal GEDF-based schedulers that allow intra-task parallelism on heterogeneous multiprocessors. In Proceedings of the th IEEE Symposium on Embedded Systems for Real-Time Multimedia, pages 0 9, 0.


More information

Task assignment in heterogeneous multiprocessor platforms

Task assignment in heterogeneous multiprocessor platforms Task assignment in heterogeneous multiprocessor platforms Sanjoy K. Baruah Shelby Funk The University of North Carolina Abstract In the partitioned approach to scheduling periodic tasks upon multiprocessors,

More information

Uniprocessor Mixed-Criticality Scheduling with Graceful Degradation by Completion Rate

Uniprocessor Mixed-Criticality Scheduling with Graceful Degradation by Completion Rate Uniprocessor Mixed-Criticality Scheduling with Graceful Degradation by Completion Rate Zhishan Guo 1, Kecheng Yang 2, Sudharsan Vaidhun 1, Samsil Arefin 3, Sajal K. Das 3, Haoyi Xiong 4 1 Department of

More information

arxiv: v1 [cs.os] 21 May 2008

arxiv: v1 [cs.os] 21 May 2008 Integrating job parallelism in real-time scheduling theory Sébastien Collette Liliana Cucu Joël Goossens arxiv:0805.3237v1 [cs.os] 21 May 2008 Abstract We investigate the global scheduling of sporadic,

More information

An Optimal k-exclusion Real-Time Locking Protocol Motivated by Multi-GPU Systems

An Optimal k-exclusion Real-Time Locking Protocol Motivated by Multi-GPU Systems An Optimal k-exclusion Real-Time Locking Protocol Motivated by Multi-GPU Systems Glenn A. Elliott and James H. Anderson Department of Computer Science, University of North Carolina at Chapel Hill Abstract

More information

On Non-Preemptive Scheduling of Periodic and Sporadic Tasks

On Non-Preemptive Scheduling of Periodic and Sporadic Tasks On Non-Preemptive Scheduling of Periodic and Sporadic Tasks Kevin Jeffay * Donald F. Stanat University of North Carolina at Chapel Hill Department of Computer Science Charles U. Martel ** University of

More information

Federated Scheduling for Stochastic Parallel Real-time Tasks

Federated Scheduling for Stochastic Parallel Real-time Tasks Federated Scheduling for Stochastic Parallel Real-time Tasks Jing Li, Kunal Agrawal, Christopher Gill, and Chenyang Lu Department of Computer Science and Engineering Washington University in St. Louis

More information

Task Reweighting under Global Scheduling on Multiprocessors

Task Reweighting under Global Scheduling on Multiprocessors ask Reweighting under Global Scheduling on Multiprocessors Aaron Block, James H. Anderson, and UmaMaheswari C. Devi Department of Computer Science, University of North Carolina at Chapel Hill March 7 Abstract

More information

System Model. Real-Time systems. Giuseppe Lipari. Scuola Superiore Sant Anna Pisa -Italy

System Model. Real-Time systems. Giuseppe Lipari. Scuola Superiore Sant Anna Pisa -Italy Real-Time systems System Model Giuseppe Lipari Scuola Superiore Sant Anna Pisa -Italy Corso di Sistemi in tempo reale Laurea Specialistica in Ingegneria dell Informazione Università di Pisa p. 1/?? Task

More information

Multiprocessor Scheduling I: Partitioned Scheduling. LS 12, TU Dortmund

Multiprocessor Scheduling I: Partitioned Scheduling. LS 12, TU Dortmund Multiprocessor Scheduling I: Partitioned Scheduling Prof. Dr. Jian-Jia Chen LS 12, TU Dortmund 22/23, June, 2015 Prof. Dr. Jian-Jia Chen (LS 12, TU Dortmund) 1 / 47 Outline Introduction to Multiprocessor

More information

Real-Time Systems. Event-Driven Scheduling

Real-Time Systems. Event-Driven Scheduling Real-Time Systems Event-Driven Scheduling Hermann Härtig WS 2018/19 Outline mostly following Jane Liu, Real-Time Systems Principles Scheduling EDF and LST as dynamic scheduling methods Fixed Priority schedulers

More information

AS computer hardware technology advances, both

AS computer hardware technology advances, both 1 Best-Harmonically-Fit Periodic Task Assignment Algorithm on Multiple Periodic Resources Chunhui Guo, Student Member, IEEE, Xiayu Hua, Student Member, IEEE, Hao Wu, Student Member, IEEE, Douglas Lautner,

More information

The Partitioned Dynamic-priority Scheduling of Sporadic Task Systems

The Partitioned Dynamic-priority Scheduling of Sporadic Task Systems The Partitioned Dynamic-priority Scheduling of Sporadic Task Systems Abstract A polynomial-time algorithm is presented for partitioning a collection of sporadic tasks among the processors of an identical

More information

Contention-Free Executions for Real-Time Multiprocessor Scheduling

Contention-Free Executions for Real-Time Multiprocessor Scheduling Contention-Free Executions for Real-Time Multiprocessor Scheduling JINKYU LEE, University of Michigan ARVIND EASWARAN, Nanyang Technological University INSIK SHIN, KAIST A time slot is defined as contention-free

More information

Dependency Graph Approach for Multiprocessor Real-Time Synchronization. TU Dortmund, Germany

Dependency Graph Approach for Multiprocessor Real-Time Synchronization. TU Dortmund, Germany Dependency Graph Approach for Multiprocessor Real-Time Synchronization Jian-Jia Chen, Georg von der Bru ggen, Junjie Shi, and Niklas Ueter TU Dortmund, Germany 14,12,2018 at RTSS Jian-Jia Chen 1 / 21 Multiprocessor

More information

Semi-Partitioned Fixed-Priority Scheduling on Multiprocessors

Semi-Partitioned Fixed-Priority Scheduling on Multiprocessors Semi-Partitioned Fixed-Priority Scheduling on Multiprocessors Shinpei Kato and Nobuyuki Yamasaki Department of Information and Computer Science Keio University, Yokohama, Japan {shinpei,yamasaki}@ny.ics.keio.ac.jp

More information

Cache-Aware Compositional Analysis of Real- Time Multicore Virtualization Platforms

Cache-Aware Compositional Analysis of Real- Time Multicore Virtualization Platforms University of Pennsylvania ScholarlyCommons Departmental Papers (CIS) Department of Computer & Information Science 12-2013 Cache-Aware Compositional Analysis of Real- Time Multicore Virtualization Platforms

More information

Clock-driven scheduling

Clock-driven scheduling Clock-driven scheduling Also known as static or off-line scheduling Michal Sojka Czech Technical University in Prague, Faculty of Electrical Engineering, Department of Control Engineering November 8, 2017

More information

Embedded Systems Development

Embedded Systems Development Embedded Systems Development Lecture 3 Real-Time Scheduling Dr. Daniel Kästner AbsInt Angewandte Informatik GmbH kaestner@absint.com Model-based Software Development Generator Lustre programs Esterel programs

More information

Multi-core Real-Time Scheduling for Generalized Parallel Task Models

Multi-core Real-Time Scheduling for Generalized Parallel Task Models Multi-core Real-Time Scheduling for Generalized Parallel Task Models Abusayeed Saifullah, Kunal Agrawal, Chenyang Lu, and Christopher Gill Department of Computer Science and Engineering Washington University

More information

Reducing Tardiness Under Global Scheduling by Splitting Jobs

Reducing Tardiness Under Global Scheduling by Splitting Jobs Reducing Tardiness Under Global Scheduling by Splitting Jobs Jeremy P. Erickson and James H. Anderson The University of North Carolina at Chapel Hill Abstract Under current analysis, soft real-time tardiness

More information

A Theory of Rate-Based Execution. A Theory of Rate-Based Execution

A Theory of Rate-Based Execution. A Theory of Rate-Based Execution Kevin Jeffay Department of Computer Science University of North Carolina at Chapel Hill jeffay@cs cs.unc.edu Steve Goddard Computer Science & Engineering University of Nebraska Ð Lincoln goddard@cse cse.unl.edu

More information

Cache-Aware Compositional Analysis of Real- Time Multicore Virtualization Platforms

Cache-Aware Compositional Analysis of Real- Time Multicore Virtualization Platforms University of Pennsylvania ScholarlyCommons Departmental Papers (CIS) Department of Computer & Information Science -25 Cache-Aware Compositional Analysis of Real- Time Multicore Virtualization Platforms

More information

Predictability of Least Laxity First Scheduling Algorithm on Multiprocessor Real-Time Systems

Predictability of Least Laxity First Scheduling Algorithm on Multiprocessor Real-Time Systems Predictability of Least Laxity First Scheduling Algorithm on Multiprocessor Real-Time Systems Sangchul Han and Minkyu Park School of Computer Science and Engineering, Seoul National University, Seoul,

More information

Real-Time Systems. Event-Driven Scheduling

Real-Time Systems. Event-Driven Scheduling Real-Time Systems Event-Driven Scheduling Marcus Völp, Hermann Härtig WS 2013/14 Outline mostly following Jane Liu, Real-Time Systems Principles Scheduling EDF and LST as dynamic scheduling methods Fixed

More information

Reducing Tardiness Under Global Scheduling by Splitting Jobs

Reducing Tardiness Under Global Scheduling by Splitting Jobs Reducing Tardiness Under Global Scheduling by Splitting Jobs Jeremy P. Erickson and James H. Anderson The University of North Carolina at Chapel Hill Abstract Under current analysis, tardiness bounds applicable

More information

Scheduling periodic Tasks on Multiple Periodic Resources

Scheduling periodic Tasks on Multiple Periodic Resources Scheduling periodic Tasks on Multiple Periodic Resources Xiayu Hua, Zheng Li, Hao Wu, Shangping Ren* Department of Computer Science Illinois Institute of Technology Chicago, IL 60616, USA {xhua, zli80,

More information

Compositional Analysis Techniques For Multiprocessor Soft Real-Time Scheduling

Compositional Analysis Techniques For Multiprocessor Soft Real-Time Scheduling Compositional Analysis Techniques For Multiprocessor Soft Real-Time Scheduling Hennadiy Leontyev A dissertation submitted to the faculty of the University of North Carolina at Chapel Hill in partial fulfillment

More information

Embedded Systems Design: Optimization Challenges. Paul Pop Embedded Systems Lab (ESLAB) Linköping University, Sweden

Embedded Systems Design: Optimization Challenges. Paul Pop Embedded Systems Lab (ESLAB) Linköping University, Sweden of /4 4 Embedded Systems Design: Optimization Challenges Paul Pop Embedded Systems Lab (ESLAB) Linköping University, Sweden Outline! Embedded systems " Example area: automotive electronics " Embedded systems

More information

Real-time Systems: Scheduling Periodic Tasks

Real-time Systems: Scheduling Periodic Tasks Real-time Systems: Scheduling Periodic Tasks Advanced Operating Systems Lecture 15 This work is licensed under the Creative Commons Attribution-NoDerivatives 4.0 International License. To view a copy of

More information

2.1 Task and Scheduling Model. 2.2 Definitions and Schedulability Guarantees

2.1 Task and Scheduling Model. 2.2 Definitions and Schedulability Guarantees Fixed-Priority Scheduling of Mixed Soft and Hard Real-Time Tasks on Multiprocessors Jian-Jia Chen, Wen-Hung Huang Zheng Dong, Cong Liu TU Dortmund University, Germany The University of Texas at Dallas,

More information

Desynchronized Pfair Scheduling on Multiprocessors

Desynchronized Pfair Scheduling on Multiprocessors Desynchronized Pfair Scheduling on Multiprocessors UmaMaheswari C. Devi and James H. Anderson Department of Computer Science, The University of North Carolina, Chapel Hill, NC Abstract Pfair scheduling,

More information

Paper Presentation. Amo Guangmo Tong. University of Taxes at Dallas February 11, 2014

Paper Presentation. Amo Guangmo Tong. University of Taxes at Dallas February 11, 2014 Paper Presentation Amo Guangmo Tong University of Taxes at Dallas gxt140030@utdallas.edu February 11, 2014 Amo Guangmo Tong (UTD) February 11, 2014 1 / 26 Overview 1 Techniques for Multiprocessor Global

More information

arxiv: v3 [cs.ds] 23 Sep 2016

arxiv: v3 [cs.ds] 23 Sep 2016 Evaluate and Compare Two Utilization-Based Schedulability-Test Framewors for Real-Time Systems arxiv:1505.02155v3 [cs.ds] 23 Sep 2016 Jian-Jia Chen and Wen-Hung Huang Department of Informatics TU Dortmund

More information

Energy-Efficient Real-Time Task Scheduling in Multiprocessor DVS Systems

Energy-Efficient Real-Time Task Scheduling in Multiprocessor DVS Systems Energy-Efficient Real-Time Task Scheduling in Multiprocessor DVS Systems Jian-Jia Chen *, Chuan Yue Yang, Tei-Wei Kuo, and Chi-Sheng Shih Embedded Systems and Wireless Networking Lab. Department of Computer

More information

Improved Priority Assignment for the Abort-and-Restart (AR) Model

Improved Priority Assignment for the Abort-and-Restart (AR) Model Improved Priority Assignment for the Abort-and-Restart (AR) Model H.C. Wong and A. Burns Department of Computer Science, University of York, UK. February 1, 2013 Abstract This paper addresses the scheduling

More information

EDF Scheduling. Giuseppe Lipari CRIStAL - Université de Lille 1. October 4, 2015

EDF Scheduling. Giuseppe Lipari  CRIStAL - Université de Lille 1. October 4, 2015 EDF Scheduling Giuseppe Lipari http://www.lifl.fr/~lipari CRIStAL - Université de Lille 1 October 4, 2015 G. Lipari (CRIStAL) Earliest Deadline Scheduling October 4, 2015 1 / 61 Earliest Deadline First

More information

EDF Scheduling. Giuseppe Lipari May 11, Scuola Superiore Sant Anna Pisa

EDF Scheduling. Giuseppe Lipari   May 11, Scuola Superiore Sant Anna Pisa EDF Scheduling Giuseppe Lipari http://feanor.sssup.it/~lipari Scuola Superiore Sant Anna Pisa May 11, 2008 Outline 1 Dynamic priority 2 Basic analysis 3 FP vs EDF 4 Processor demand bound analysis Generalization

More information

Optimal Semi-Partitioned Scheduling in Soft Real-Time Systems

Optimal Semi-Partitioned Scheduling in Soft Real-Time Systems Optimal Semi-Partitioned Scheduling in Soft Real-Time Systems James H. Anderson 1, Jeremy P. Erickson 1, UmaMaheswari C. Devi 2, and Benjamin N. Casses 1 1 Dept. of Computer Science, University of North

More information

3. Scheduling issues. Common approaches 3. Common approaches 1. Preemption vs. non preemption. Common approaches 2. Further definitions

3. Scheduling issues. Common approaches 3. Common approaches 1. Preemption vs. non preemption. Common approaches 2. Further definitions Common approaches 3 3. Scheduling issues Priority-driven (event-driven) scheduling This class of algorithms is greedy They never leave available processing resources unutilized An available resource may

More information

Load Regulating Algorithm for Static-Priority Task Scheduling on Multiprocessors

Load Regulating Algorithm for Static-Priority Task Scheduling on Multiprocessors Technical Report No. 2009-7 Load Regulating Algorithm for Static-Priority Task Scheduling on Multiprocessors RISAT MAHMUD PATHAN JAN JONSSON Department of Computer Science and Engineering CHALMERS UNIVERSITY

More information

Multiprocessor feasibility analysis of recurrent task systems with specified processor affinities

Multiprocessor feasibility analysis of recurrent task systems with specified processor affinities Multiprocessor feasibility analysis of recurrent task systems with specified processor affinities Sanjoy Baruah The University of North Carolina baruah@cs.unc.edu Björn Brandenburg Max Planck Institute

More information

Paper Presentation. Amo Guangmo Tong. University of Taxes at Dallas January 24, 2014

Paper Presentation. Amo Guangmo Tong. University of Taxes at Dallas January 24, 2014 Paper Presentation Amo Guangmo Tong University of Taxes at Dallas gxt140030@utdallas.edu January 24, 2014 Amo Guangmo Tong (UTD) January 24, 2014 1 / 30 Overview 1 Tardiness Bounds under Global EDF Scheduling

More information

Real-Time Workload Models with Efficient Analysis

Real-Time Workload Models with Efficient Analysis Real-Time Workload Models with Efficient Analysis Advanced Course, 3 Lectures, September 2014 Martin Stigge Uppsala University, Sweden Fahrplan 1 DRT Tasks in the Model Hierarchy Liu and Layland and Sporadic

More information

A 2-Approximation Algorithm for Scheduling Parallel and Time-Sensitive Applications to Maximize Total Accrued Utility Value

A 2-Approximation Algorithm for Scheduling Parallel and Time-Sensitive Applications to Maximize Total Accrued Utility Value A -Approximation Algorithm for Scheduling Parallel and Time-Sensitive Applications to Maximize Total Accrued Utility Value Shuhui Li, Miao Song, Peng-Jun Wan, Shangping Ren Department of Engineering Mechanics,

More information

Bursty-Interference Analysis Techniques for. Analyzing Complex Real-Time Task Models

Bursty-Interference Analysis Techniques for. Analyzing Complex Real-Time Task Models Bursty-Interference Analysis Techniques for lemma 3 Analyzing Complex Real- Task Models Cong Liu Department of Computer Science The University of Texas at Dallas Abstract Due to the recent trend towards

More information

Average-Case Performance Analysis of Online Non-clairvoyant Scheduling of Parallel Tasks with Precedence Constraints

Average-Case Performance Analysis of Online Non-clairvoyant Scheduling of Parallel Tasks with Precedence Constraints Average-Case Performance Analysis of Online Non-clairvoyant Scheduling of Parallel Tasks with Precedence Constraints Keqin Li Department of Computer Science State University of New York New Paltz, New

More information

Supporting Intra-Task Parallelism in Real- Time Multiprocessor Systems José Fonseca

Supporting Intra-Task Parallelism in Real- Time Multiprocessor Systems José Fonseca Technical Report Supporting Intra-Task Parallelism in Real- Time Multiprocessor Systems José Fonseca CISTER-TR-121007 Version: Date: 1/1/2014 Technical Report CISTER-TR-121007 Supporting Intra-Task Parallelism

More information

The FMLP + : An Asymptotically Optimal Real-Time Locking Protocol for Suspension-Aware Analysis

The FMLP + : An Asymptotically Optimal Real-Time Locking Protocol for Suspension-Aware Analysis The FMLP + : An Asymptotically Optimal Real-Time Locking Protocol for Suspension-Aware Analysis Björn B. Brandenburg Max Planck Institute for Software Systems (MPI-SWS) Abstract Multiprocessor real-time

More information

Suspension-Aware Analysis for Hard Real-Time Multiprocessor Scheduling

Suspension-Aware Analysis for Hard Real-Time Multiprocessor Scheduling Suspension-Aware Analysis for Hard Real-Time Multiprocessor Scheduling Cong Liu and James H. Anderson Department of Computer Science, University of North Carolina at Chapel Hill Abstract In many real-time

More information

Supporting Read/Write Applications in Embedded Real-time Systems via Suspension-aware Analysis

Supporting Read/Write Applications in Embedded Real-time Systems via Suspension-aware Analysis Supporting Read/Write Applications in Embedded Real-time Systems via Suspension-aware Analysis Guangmo Tong and Cong Liu Department of Computer Science University of Texas at Dallas ABSTRACT In many embedded

More information

Real-time operating systems course. 6 Definitions Non real-time scheduling algorithms Real-time scheduling algorithm

Real-time operating systems course. 6 Definitions Non real-time scheduling algorithms Real-time scheduling algorithm Real-time operating systems course 6 Definitions Non real-time scheduling algorithms Real-time scheduling algorithm Definitions Scheduling Scheduling is the activity of selecting which process/thread should

More information

Many suspensions, many problems: a review of self-suspending tasks in real-time systems

Many suspensions, many problems: a review of self-suspending tasks in real-time systems Real-Time Systems (2019) 55:144 207 https://doi.org/10.1007/s11241-018-9316-9 Many suspensions, many problems: a review of self-suspending tasks in real-time systems Jian-Jia Chen 1 Geoffrey Nelissen 2

More information

A Note on Modeling Self-Suspending Time as Blocking Time in Real-Time Systems

A Note on Modeling Self-Suspending Time as Blocking Time in Real-Time Systems A Note on Modeling Self-Suspending Time as Blocking Time in Real-Time Systems Jian-Jia Chen 1, Wen-Hung Huang 1, and Geoffrey Nelissen 2 1 TU Dortmund University, Germany Email: jian-jia.chen@tu-dortmund.de,

More information

Aperiodic Task Scheduling

Aperiodic Task Scheduling Aperiodic Task Scheduling Jian-Jia Chen (slides are based on Peter Marwedel) TU Dortmund, Informatik 12 Germany Springer, 2010 2017 年 11 月 29 日 These slides use Microsoft clip arts. Microsoft copyright

More information