EDF Feasibility and Hardware Accelerators


Andrew Morton, University of Waterloo, Waterloo, Canada, arrmorton@uwaterloo.ca
Wayne M. Loucks, University of Waterloo, Waterloo, Canada, wmloucks@pads.uwaterloo.ca

A feasibility analysis is developed for embedded systems that use hardware accelerators to speed up critical portions of the software system. The scheduling policy analyzed is Earliest Deadline First. The concept of an accelerator-induced idle is introduced, along with modifications to the representation of tasks that employ hardware accelerators. These modifications are incorporated into an existing algorithm for feasibility analysis of regular task sets. The changes allow the benefits of the accelerators (parallel execution between processor and accelerator, and reduced overall task execution time) to be accounted for, resulting in a less conservative analysis.

Keywords: Real-Time Scheduling, Theoretical Scheduling, Hardware Accelerators.

1 Introduction

This paper presents an algorithm for analyzing schedule feasibility under the Earliest Deadline First (EDF) policy. Specifically, EDF feasibility is examined for embedded real-time systems that use hardware accelerators to speed up parts of the software application. In a hard real-time system, all tasks must not only function properly but also meet their deadlines; functional correctness without temporal correctness is failure, and can have significant consequences. An algorithm that ensures schedule feasibility must be able to check that all tasks will meet their deadlines, every time.

The scheduling policy examined in this paper is EDF. It is an important real-time scheduling policy for two reasons. First, it is optimal, in the sense that it fails to meet a task's deadline only if no other scheduling policy could meet that deadline. Second, a task's priority is determined by the closeness of its deadline.
This is a more natural way to schedule real-time tasks than the rate monotonic policy, where fixed task priorities are set to approximate task deadlines. The problem of analyzing schedule feasibility can be exacerbated in embedded systems by the use of hardware accelerators to speed up selected kernels of software that have high computation cost. In this paper, the feasibility analysis algorithm proposed by Stankovic et al. [3] is extended to tasks that employ hardware accelerators for speed-up. In the next section, feasibility analysis for task sets scheduled by EDF is introduced. This introduction is followed by motivation for the extended analysis for hardware accelerators, and then by its derivation. Lastly, the implications and usability of the extended analysis are considered and conclusions are drawn.

2 EDF Schedule Analysis

Under the EDF policy, of all ready tasks, the task with the earliest deadline is executed first. If another task arrives with an earlier deadline, it preempts the currently executing task. Periodic tasks occur at regular intervals. Periodic task τ_i has start time s_i, period T_i, relative deadline D_i and worst-case execution time C_i. τ_i is released at the start of each period, at time r_i = s_i + mT_i, m = 0, 1, 2, ….

[Figure 1: Task Notation]

The task must complete by its absolute deadline d_i = r_i + D_i. Sporadic tasks are not released at regular intervals but can be characterized by a minimum inter-arrival time; for the purpose of analysis, a sporadic task can be conservatively represented by a periodic task. A task set composed of periodic and sporadic tasks is a hybrid task set.

In Figure 1, τ_1 is periodic and τ_2 is sporadic. Task releases are indicated with an up arrow and task deadlines with a down arrow. τ_1 has start time s_1 = 2, relative deadline D_1 = 8, period T_1 = 12 and worst-case execution time C_1 = 4. It has release times r_1 = 2, 14, … and corresponding deadlines d_1 = 10, 22, …. τ_2 is released sporadically at times t = 0, 16, 21. At t = 16, τ_2 preempts τ_1 because its deadline is closer; τ_1 resumes at t = 18 after τ_2 terminates. The minimum inter-arrival time for τ_2, as shown in Figure 1, is T_2 = 21 − 16 = 5. An instance of a task is called a job: in Figure 1, τ_1 has two jobs and τ_2 has three.

The purpose of EDF analysis is, given a task set τ, to determine whether it can be proved that all n tasks in the set can be scheduled by the EDF policy such that no job misses its deadline. Stankovic et al. [3] have developed such an analysis, based on processor demand; it is summarized here.

2.1 Algorithm Summary

A necessary but not sufficient condition for the feasibility of a task set scheduled by the EDF policy is that the processor utilization U (defined by Liu and Layland [1]) does not exceed one:

    U = \sum_{i=1}^{n} \frac{C_i}{T_i} \le 1.    (1)

U sums the fraction of processor time required by each task. The rest of the analysis assumes that the task set is synchronous: all tasks are released together at t = 0 (∀i: s_i = 0). This is the most constraining scenario, otherwise known as the critical instance. After checking U ≤ 1, the algorithm checks processor demand.
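As a quick illustration of the notation, the release/deadline arithmetic and the utilization check of Eq. (1) can be sketched in Python. The task tuples are hypothetical: τ_1 uses the Figure 1 values, and τ_2 is treated as a periodic task with its minimum inter-arrival time, assuming C_2 = 2 purely for illustration.

```python
def job_times(s, T, D, m):
    """Release time and absolute deadline of the m-th job of a periodic
    task: r_i = s_i + m*T_i and d_i = r_i + D_i."""
    r = s + m * T
    return r, r + D

def utilization(tasks):
    """Processor utilization U = sum(C_i / T_i), Eq. (1).
    tasks: list of (C, T, D) tuples."""
    return sum(C / T for C, T, D in tasks)

# tau_1 from Figure 1: s_1 = 2, T_1 = 12, D_1 = 8
print(job_times(2, 12, 8, 1))   # second job: released at 14, deadline 22

# hypothetical task set: tau_1 = (4, 12, 8), tau_2 = (2, 5, 5)
print(utilization([(4, 12, 8), (2, 5, 5)]))   # 4/12 + 2/5, below 1
```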
Processor demand h(t) in the interval [0, t) is defined as:

    h(t) = \sum_{D_i \le t} \left( 1 + \left\lfloor \frac{t - D_i}{T_i} \right\rfloor \right) C_i.    (2)

Processor demand at time t measures the work required by all tasks having deadlines in [0, t]. It is applied in the following theorem:

Theorem 1 (Processor Demand Condition [3]). Any given hybrid task set is feasible under EDF scheduling if and only if ∀t: h(t) ≤ t.
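A minimal Python sketch of Eq. (2) and the Theorem 1 check follows. Rather than testing all t, it checks only deadline events up to the busy-period bound L that the remainder of this section develops (Eqs. (3)–(4)). The (C, T, D) tuple layout and function names are our own, not the paper's.

```python
def demand(t, tasks):
    """Processor demand h(t), Eq. (2): work of all jobs with absolute
    deadlines in [0, t] under synchronous release. tasks: (C, T, D) tuples."""
    return sum((1 + (t - D) // T) * C for C, T, D in tasks if D <= t)

def edf_feasible(tasks):
    """Sketch of Algorithm 1: utilization check, synchronous busy period
    via the iteration L <- W(L), then h(v) <= v at each deadline in [0, L)."""
    if sum(C / T for C, T, D in tasks) > 1:
        return False
    L = sum(C for C, T, D in tasks)                      # L^(0)
    while True:
        nxt = sum(-(-L // T) * C for C, T, D in tasks)   # W(L); -(-a//b) is ceil
        if nxt == L:
            break
        L = nxt
    deadlines = sorted({m * T + D for C, T, D in tasks
                        for m in range(L // T + 1) if m * T + D < L})
    return all(demand(v, tasks) <= v for v in deadlines)
```

For example, a set with U < 1 can still fail the demand check when too much work shares an early deadline, which is exactly what the deadline-event loop detects.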

However, it is not necessary to test processor demand on all intervals in [0, t); it is sufficient to test only at the times when task deadlines occur [4]. Furthermore, an upper bound can be placed on the interval over which h(t) is checked. This is done by calculating the length L of the synchronous busy period.¹

Theorem 2 (Liu and Layland [1]). When the deadline driven scheduling algorithm is used to schedule a set of tasks on a processor, there is no processor idle time prior to an overflow.

The interval of time preceding the first idle period is called the synchronous busy period. Stankovic et al. describe the following iterative method for calculating its length L, which converges in O(\sum_{i=1}^{n} C_i) time if U < 1 [2]. Apply recursively until L^{(m+1)} = L^{(m)}:

    L^{(0)} = \sum_{i=1}^{n} C_i,    L^{(m+1)} = W(L^{(m)}),    (3)

where

    W(t) = \sum_{i=1}^{n} \left\lceil \frac{t}{T_i} \right\rceil C_i.    (4)

In essence, all tasks are released at t = 0 and their cumulative execution time gives L^{(0)}. The algorithm iteratively determines how many jobs have been released by time L^{(m)} and sums their execution times to obtain L^{(m+1)}. It terminates when all released jobs finish executing before any new jobs are released.

The set S of deadline events is defined over [0, L), and for each event v in S it is checked that processor demand does not exceed processor time (h(v) ≤ v). Algorithm 1 lists the pseudocode for Stankovic et al.'s feasibility analysis algorithm.

Algorithm 1: EDF Feasibility Analysis
    if U > 1 then return not feasible;
    S = ∪_{i=1}^{n} {mT_i + D_i : m = 0, 1, …} = {v_1, v_2, …}    // deadlines
    k ← 1;
    while v_k < L do
        if h(v_k) > v_k then return not feasible;    // check processor demand
        k ← k + 1;
    end
    return feasible;

3 Hardware Accelerators and Scheduling

A common platform for embedded systems is the system-on-chip (SoC). In a SoC, a general-purpose processor (CPU), RAM, ROM, I/O and application-specific hardware are integrated on one IC.
The application-specific hardware typically consists of accelerators for portions of the application, otherwise known as kernels, that do not execute fast enough in software or consume too much processor time. Accelerators that execute independently of any software task are not considered here, as they do not pose a problem for schedule analysis. However, a software task that invokes a hardware accelerator and then waits, or blocks, until the accelerator finishes before continuing is

of interest for schedule analysis. Such a software task is called an accelerator-blocked task and is illustrated in Figure 2. The task τ_i has three parts, τ_{i,1}, τ_{i,2} and τ_{i,3}, of which τ_{i,2} has been identified for speed-up in hardware: in Figure 2(a) it executes in software, and in Figure 2(b) it is replaced by a hardware accelerator.

[Figure 2: Accelerator-Blocked Task — (a) Software Task τ_i, (b) Hardware Accelerated τ_i]

A simple approach to the analysis is to calculate the worst-case execution time of task τ_i as the sum C_i = C_{i,1} + C_{i,2} + C_{i,3} and use Algorithm 1 without modification. This may be a reasonable approach when C_{i,2} ≪ C_{i,1} + C_{i,3}. Its primary drawback is that the parallel execution between processor and accelerator is ignored: while τ_i is waiting for τ_{i,2} to execute on the hardware accelerator, another task could be using the processor. The algorithm could therefore rule infeasible a task set that is actually feasible. The goal of the extended analysis is to account for the parallel execution between the processor and hardware accelerators.

4 Extended Analysis

The analysis is initially developed for a task set in which one task blocks on an accelerator once during its execution; extension to a task set in which one task blocks multiple times is then briefly discussed. To aid in the extended analysis, the execution time of each task is represented by a vector C_i. For a regular task the size of the vector is |C_i| = 1. For the accelerator-blocked task of Figure 2(b), |C_i| = 3, where C_{i,2} is the execution time on the accelerator. In order to perform the analysis, the accelerator-blocked task τ_i is replaced by three ordered subtasks as described in Table 1.

Table 1: Accelerator-Blocked Task Division

    Task     Target   Period          Deadline                      Time
    τ_{i,1}  proc.    T_{i,1} = T_i   D_{i,1} = D_{i,2} − C_{i,2}   C_{i,1}
    τ_{i,2}  hw       T_{i,2} = T_i   D_{i,2} = D_{i,3} − C_{i,3}   C_{i,2}
    τ_{i,3}  proc.    T_{i,3} = T_i   D_{i,3} = D_i                 C_{i,3}
The deadlines are modified so that τ_{i,1} finishes in time for τ_{i,2} and τ_{i,3} to finish; likewise, τ_{i,2} finishes in time for τ_{i,3} to finish. Because EDF schedules tasks with earlier deadlines first, the subtasks will execute in the correct order. The task set τ is then modified by replacing the accelerator-blocked task τ_i with its two software subtasks τ_{i,1} and τ_{i,3}. For now τ_{i,2}, which executes in hardware, is ignored; it will be accounted for later.

Now consider the example task set in Figure 3. If τ_{1,1}, τ_{1,3} and τ_2 are used as the task set in the analysis of Algorithm 1, it will pass the analysis and be deemed feasible, because there is enough processor time to meet the processor demand at all deadline events S = {2, 7, 11} in the synchronous busy period. The problem arises with the second jobs of τ_1 and τ_2: there is enough processor time to fulfill the software tasks' demands, but τ_{1,3} cannot use some of the available time

¹ In [3], the upper bound is chosen as the minimum of three values. The other two upper bounds are omitted here because they are not used in the extended algorithm of Section 4.

[Figure 3: Feasibility Failure]

because it is waiting for τ_{1,2} to finish on the accelerator. This causes an accelerator-induced idle that is not accounted for by the analysis. The goal of the extended analysis is to determine whether an accelerator-induced idle can result in a missed deadline.

Lemma 1. Given a task set τ in which one task τ_x blocks on an accelerator once, with logical subtasks {τ_{x,1}, τ_{x,2}, τ_{x,3}}, the accelerator-induced idle results in a critical instance when:

1. τ_{x,3} has no slack, and
2. all other tasks are released synchronously at the end of the accelerator-induced idle (i.e. at t = D_{x,2}).

A second critical instance has been introduced. The first is the synchronous release of all tasks in τ. The second is an accelerator-induced idle during which no other tasks are ready to execute, but at the end of which all tasks (except τ_{x,1}) are released.

Before proving Lemma 1, the loading factor must be introduced. The loading factor on the interval [t_1, t_2) is defined as

    u_{[t_1,t_2)} = h_{[t_1,t_2)} / (t_2 − t_1),

where h_{[t_1,t_2)} is the sum of the work released no earlier than t_1 with deadline no later than t_2:

    h_{[t_1,t_2)} = \sum_{t_1 \le r_k,\; d_k \le t_2} C_k,

and r_k, d_k and C_k are the release time, deadline and worst-case execution time of the jobs in that interval. The loading factor is thus the fraction of the interval required to meet its processor demand. The absolute loading factor is the maximum loading factor over all possible intervals:

    u = \sup_{0 \le t_1 < t_2} u_{[t_1,t_2)}.

The proof of Lemma 1 now follows.

Proof. The worst case for schedule feasibility occurs when:

Part 1: τ_{x,3} has no slack (d_{x,3} = C_{x,3}). If τ_{x,3} has no slack, then no other work can be done before d_{x,3} (if there existed another task with deadline before d_{x,3}, then at t = d_{x,3} we would have h(t) > t, making the schedule infeasible). Adding slack would only improve schedule feasibility.

[Figure 4: Decomposition of τ_x — (a) τ_x, (b) C′, τ_{x,1}, τ_{x,3}]

Part 2: all other tasks are released synchronously with τ_{x,3}. The remainder of this proof is, in essence, the same as the proof of Lemma 3.1 in [3], except for the modifications to the accelerator-blocked task described next. Let τ be the asynchronous task set (∃i: s_i ≠ 0) and τ′ the corresponding synchronous task set, and let the accelerator-blocked task τ_x be excluded from τ and τ′. When the processor demands h_{[t_1,t_2)} and h′_{[t′_1,t′_2)} of τ and τ′ are computed, they are augmented by the accelerator-blocked task as follows. A unit of work C′ is released at t_1 with duration C′ = C_{x,3}, representing the part of the accelerator-blocked task that must execute after the accelerator-induced idle. The deadline of C′ is D′, which is equal to C_{x,3} (no slack). Furthermore, τ_x is replaced by τ_{x,1} and τ_{x,3}. Since C′ represents the remaining work of the previous job of τ_x, which has a deadline at t = C_{x,3}, the next job of τ_x is released T_x − D_x + C_{x,3} after t_1. Tasks τ_{x,1} and τ_{x,3} are said to have phases φ_{x,1} = φ_{x,3} = T_x − D_x + C_{x,3}. Only an accelerator-blocked task is said to have a phase; for all other tasks φ_i = 0 (note that phase and start time are not related). Figure 4 shows the relationship between τ_x and C′, τ_{x,1} and τ_{x,3} with phases.

In order to prove that synchronous release of all tasks other than the accelerator-blocked task is the worst-case scenario, it is sufficient to prove that the absolute loading factor u of τ is bounded above by the absolute loading factor u′ of τ′. It is therefore sufficient to prove that for every interval [t_1, t_2) there exists an interval [t′_1, t′_2) such that h_{[t_1,t_2)} ≤ h′_{[t′_1,t′_2)}, where the accelerator-blocked task is included as described in the previous paragraph. For any interval [t_1, t_2), the processor demand for τ is bounded above by:

    h_{[t_1,t_2)} ≤ C′ + \sum_{D_i+φ_i \le t_2-t_1} \left( 1 + \left\lfloor \frac{t_2 - t_1 - (D_i + φ_i)}{T_i} \right\rfloor \right) C_i.

Addition of φ_i to this bound only affects τ_{x,1} and τ_{x,3}. Now consider the synchronous case, τ′, and let t′_1 = 0 and t′_2 = t_2 − t_1. Processor demand is equal to:

    h′_{[t′_1,t′_2)} = C′ + \sum_{D_i+φ_i \le t′_2-t′_1} \left( 1 + \left\lfloor \frac{t′_2 - t′_1 - (D_i + φ_i)}{T_i} \right\rfloor \right) C_i
                     = C′ + \sum_{D_i+φ_i \le t_2-t_1} \left( 1 + \left\lfloor \frac{t_2 - t_1 - (D_i + φ_i)}{T_i} \right\rfloor \right) C_i.

Processor demand of the synchronous case, h′_{[t′_1,t′_2)}, equals the upper bound of the asynchronous case, h_{[t_1,t_2)}. Therefore h_{[t_1,t_2)} ≤ h′_{[t′_1,t′_2)}. ∎

To apply this result, the analysis of Algorithm 1 is done twice: once for each critical instance (synchronous release and accelerator-induced idle). The synchronous busy period and processor demand calculations must be redefined for the second critical instance. To facilitate this, the phase of the accelerator-blocked task is redefined. In the proof above, the portion of the accelerator-blocked task remaining after the accelerator-induced idle was represented by C′, and the next job of the task had phase φ_{x,1} = φ_{x,3} = T_x − D_x + C_{x,3}. This can also be represented by modifying the phase of τ_{x,3} to φ_{x,3} = −(D_x − C_{x,3}), so that τ_{x,3} has an initial deadline at t = C_{x,3}. Since D_x − C_{x,3} = D_{x,2}, the phases of the two subtasks of the accelerator-blocked task τ_x become

    φ_{x,1} = T_x − D_{x,2},    φ_{x,3} = −D_{x,2},

and every other task has phase φ_i = 0. The change in φ_{x,3} can be seen by comparing Figures 4 and 5: in Figure 5, C′ has been incorporated into τ_{x,3}.

[Figure 5: φ_{x,1} and φ_{x,3}]

The synchronous busy period is now calculated as follows. Apply recursively until L′^{(m+1)} = L′^{(m)}:

    L′^{(0)} = \sum_{φ_i \le 0} C_i,    L′^{(m+1)} = W′(L′^{(m)}),    (5)

where

    W′(t) = \sum_{φ_i \le t} \left\lceil \frac{t - φ_i}{T_i} \right\rceil C_i.    (6)
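A small Python sketch of the phased calculations follows, assuming the task tuples are extended to (C, T, D, φ) — our representation, not the paper's. With all phases zero, these reduce to the unphased calculations of Section 2.1.

```python
from math import ceil, floor

def busy_period_phased(tasks):
    """Length L' of the busy period with phases, Eqs. (5)-(6):
    L'(0) sums C_i over tasks with phi_i <= 0; W'(t) sums
    ceil((t - phi_i)/T_i) * C_i over tasks with phi_i <= t."""
    L = sum(C for C, T, D, phi in tasks if phi <= 0)
    while True:
        nxt = sum(ceil((L - phi) / T) * C for C, T, D, phi in tasks if phi <= L)
        if nxt == L:
            return L
        L = nxt

def demand_phased(t, tasks):
    """Phase-shifted processor demand h'(t): each task's deadlines are
    offset by its phase phi_i."""
    return sum((1 + floor((t - (D + phi)) / T)) * C
               for C, T, D, phi in tasks if D + phi <= t)
```

With φ_{x,3} = −D_{x,2}, the first deadline contributed by τ_{x,3} falls at t = D_x − D_{x,2} = C_{x,3}, reproducing the no-slack deadline of C′ from the proof.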

When φ_{x,1} = φ_{x,3} = 0, then L′ = L. When φ_{x,1} = T_x − D_{x,2} and φ_{x,3} = −D_{x,2}, the result is different: at t = 0 all of the regular tasks are released, as well as the accelerator-blocked subtask τ_{x,3}. The processor demand calculation is also modified for phase:

    h′(t) = \sum_{D_i+φ_i \le t} \left( 1 + \left\lfloor \frac{t - (D_i + φ_i)}{T_i} \right\rfloor \right) C_i.    (7)

4.1 Algorithm Analysis

Algorithm 1 is run twice: once with φ_{x,1} = φ_{x,3} = 0, and once with φ_{x,1} = T_x − D_{x,2} and φ_{x,3} = −D_{x,2}. In both instances, τ_x is replaced by τ_{x,1} and τ_{x,3} in τ. As a result, the execution on the hardware accelerator is omitted from the processor demand analysis; the implication is that the parallel execution between processor and accelerator is incorporated into the feasibility analysis. To do this, however, a second critical instance is added that considers the worst-case scenario that can follow an accelerator-induced idle. This places a limitation on the type of task set that will pass the analysis: all regular tasks must have enough slack to permit τ_{x,3} to execute at t = 0. Since accelerator execution times are generally shorter than task execution times, it is expected that this limitation will not be onerous.

[Figure 6: One Task, Multiple Blocks]

Extending the algorithm to task sets in which one task blocks multiple times is described only briefly due to space limitations. The length of the execution time vector C_x of the accelerator-blocked task would be incremented by two for each {accelerator, software} pair added to its execution, as in Figure 6. Each time the task blocks on an accelerator, an accelerator-induced idle may result; thus each accelerator block in the vector is associated with a critical instance. The analysis of Algorithm 1 is performed once with all phases equal to zero, and is then repeated once per block with phases modified to simulate the appropriate accelerator-induced idle.
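The two-instance procedure of Section 4.1 might be sketched as follows. The interface and helper names are our own simplifications (one accelerator-blocked task, given as an execution vector plus period and deadline), not the paper's notation.

```python
from math import ceil, floor

def feasible_phased(tasks):
    """EDF feasibility with phases: utilization check, phased busy period
    (Eqs. (5)-(6)), then h'(v) <= v (Eq. (7)) at each deadline event in
    [0, L'). tasks: (C, T, D, phi) tuples."""
    if sum(C / T for C, T, D, phi in tasks) > 1:
        return False
    L = sum(C for C, T, D, phi in tasks if phi <= 0)
    while True:
        nxt = sum(ceil((L - phi) / T) * C for C, T, D, phi in tasks if phi <= L)
        if nxt == L:
            break
        L = nxt
    deadlines = sorted({m * T + D + phi for C, T, D, phi in tasks
                        for m in range(ceil(L / T) + 1)
                        if 0 <= m * T + D + phi < L})
    return all(sum((1 + floor((v - (D + phi)) / T)) * C
                   for C, T, D, phi in tasks if D + phi <= v) <= v
               for v in deadlines)

def edf_feasible_accel(regular, accel):
    """Run both critical instances for one accelerator-blocked task.
    regular: (C, T, D) tuples; accel: (C1, C2, C3, T, D) with C2 on the
    accelerator. C2 never enters processor demand (parallel execution)."""
    C1, C2, C3, T, D = accel
    D2 = D - C3                                  # D_{x,2} = D_x - C_{x,3}
    base = [(C, Ti, Di, 0) for C, Ti, Di in regular]
    for phi1, phi3 in [(0, 0), (T - D2, -D2)]:   # the two critical instances
        tasks = base + [(C1, T, D2 - C2, phi1), (C3, T, D, phi3)]
        if not feasible_phased(tasks):
            return False
    return True
```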
5 Conclusions

Task sets in which a task blocks on an accelerator have been shown to add a second critical instance for feasibility analysis. This is handled by including in τ only the parts of the accelerator-blocked task that execute on the processor. Phases for these subtasks were introduced to facilitate analysis of the second critical instance. The analysis incorporates the parallel execution between processor and accelerator, but imposes some limitations on the characteristics of task sets that can pass it. Analysis of task sets with multiple accelerator-blocked tasks is left for future work.

References

[1] C. L. Liu and J. W. Layland (1973), Scheduling algorithms for multiprogramming in a hard-real-time environment, Journal of the ACM 20(1), 46–61.

[2] I. Ripoll, A. Crespo, and A. K. Mok (1996), Improvement in feasibility testing for real-time tasks, Real-Time Systems 11(1), 19–39.

[3] J. A. Stankovic, M. Spuri, K. Ramamritham, and G. C. Buttazzo (1998), Deadline Scheduling for Real-Time Systems: EDF and Related Algorithms, Kluwer Academic Publishers.

[4] Q. Zheng and K. G. Shin (1994), On the ability of establishing real-time channels in point-to-point packet-switched networks, IEEE Transactions on Communications 42(2).