Fixed-Priority Multiprocessor Scheduling with Liu & Layland's Utilization Bound


Nan Guan, Martin Stigge, Wang Yi and Ge Yu
Department of Information Technology, Uppsala University, Sweden
Department of Computer Science and Technology, Northeastern University, China

Abstract: Liu and Layland discovered the famous utilization bound N(2^(1/N) − 1) for fixed-priority scheduling on single-processor systems in the 1970s. Since then, it has been a long-standing open problem to find fixed-priority scheduling algorithms with the same bound for multiprocessor systems. In this paper, we present a partitioning-based fixed-priority multiprocessor scheduling algorithm with Liu and Layland's utilization bound.

Keywords: real-time systems; utilization bound; multiprocessor; fixed-priority scheduling

I. INTRODUCTION

Utilization bound is a well-known concept first introduced by Liu and Layland in their seminal paper [18]. A utilization bound can be used as a simple and practical way to test the schedulability of real-time task sets, as well as a good metric to evaluate the quality of a scheduling algorithm. It was shown that the utilization bound of Rate Monotonic Scheduling (RMS) on single processors is N(2^(1/N) − 1). For simplicity of presentation we let Θ(N) = N(2^(1/N) − 1).

Multiprocessor scheduling is usually categorized into two paradigms [10]: global scheduling, in which each task can execute on any available processor at run time, and partitioned scheduling, in which each task is assigned to a processor beforehand and at run time can only execute on this particular processor. Although global scheduling on average utilizes computing resources better, the best known utilization bound of global fixed-priority scheduling is only 38% [3], which is much lower than the best known result for partitioned fixed-priority scheduling, 50% [7]. 50% is also known as the maximum utilization bound for both global and partitioned fixed-priority scheduling [4], [19]. Although there exist scheduling algorithms, like the Pfair family [2], [9], offering utilization bounds of 100%, these scheduling algorithms are not priority-based and incur much higher context-switch overhead [11].

Recently a number of works have addressed semi-partitioned scheduling, which can exceed the maximum utilization bound of 50% of partitioned scheduling. In semi-partitioned scheduling, most tasks are statically assigned to one fixed processor as in partitioned scheduling, while a few tasks are split into several subtasks, which are assigned to different processors. A recent work [17] has shown that the worst-case utilization bound of semi-partitioned fixed-priority scheduling can achieve 65%, which is still lower than 69.3% (the worst-case value of Θ(N) as N goes to infinity). This gap is even larger for smaller N.

In this paper, we propose a new fixed-priority scheduling algorithm for multiprocessor systems based on semi-partitioned scheduling, whose utilization bound is Θ(N). The algorithm uses RMS on each processor, and has the same task-splitting overhead as previous work. We first propose a semi-partitioned fixed-priority scheduling algorithm whose utilization bound is Θ(N) for a class of task sets in which the utilization of each task is no larger than Θ(N)/(1 + Θ(N)). This algorithm assigns tasks in decreasing period order, and always selects the processor with the least workload assigned so far among all processors to assign the next task. Then we remove the constraint on the utilization of each task by introducing an extra task pre-assigning mechanism; the resulting algorithm achieves the utilization bound of Θ(N) for any task set.

(This work was partially sponsored by CoDeR-MP, UPMARC, and NSF of China.)
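As a quick illustration of the bound used throughout the paper, the following Python sketch (our own, with illustrative function names not taken from the paper) evaluates Θ(N) = N(2^(1/N) − 1) and the light-task threshold Θ(N)/(1 + Θ(N)) referred to above.

    def liu_layland_bound(n):
        """Theta(N) = N * (2**(1/N) - 1), the RMS bound for n tasks."""
        return n * (2 ** (1.0 / n) - 1)

    def light_task_threshold(n):
        """Per-task utilization limit Theta(N) / (1 + Theta(N)) used by SPA1."""
        theta = liu_layland_bound(n)
        return theta / (1 + theta)

    for n in (2, 4, 8, 16, 1000):
        print(n, round(liu_layland_bound(n), 4), round(light_task_threshold(n), 4))
    # Theta(N) decreases towards ln(2) ~ 0.693 as N grows.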
The rest of the paper is structured as follows: Section II reviews prior work on semi-partitioned scheduling; Section III introduces the notation and the basic concepts of semi-partitioned scheduling. The first and second proposed algorithms, as well as their worst-case utilization bound properties, are presented in Sections IV and V, respectively. Finally, conclusions are drawn in Section VI.

II. PRIOR WORK

Semi-partitioned scheduling has been studied with both EDF scheduling [1], [8], [5], [6], [12], [13], [16] and fixed-priority scheduling [14], [15], [17].

The first semi-partitioned scheduling algorithm is EDF-fm [1] for soft real-time systems, based on EDF scheduling. Andersson et al. proposed EKG [8] for hard real-time systems, in which split tasks are forced to execute in certain time slots.

Later EKG was extended to sporadic and arbitrary-deadline task systems [5], [6] with a similar idea. Kato et al. proposed EDDHP and EDDP [12], [13], in which split tasks are scheduled based on priority rather than time slots. The worst-case utilization bound of EDDP is 65%. Later Kato et al. proposed EDF-WM, which can significantly reduce the context-switch overhead compared with previous work.

There are relatively fewer works on the fixed-priority scheduling side. Kato et al. proposed RMDP [14] and DM-PM [15], both with the worst-case utilization bound of 50%, which is the same as partitioned scheduling without task splitting. Recently, Lakshmanan et al. [17] proposed the algorithm PDMS_HPTS_DS, which can achieve the worst-case utilization bound of 65%, and can achieve the bound 69.3% for a special type of task sets consisting of light tasks. They also conducted case studies on an Intel Core 2 Duo processor to characterize the practical overhead of task splitting, and showed that the cache overheads due to task splitting can be expected to be negligible on multi-core platforms.

III. BASIC CONCEPTS

We first introduce the processor platform and task model. The multiprocessor platform consists of M identical processors {P_1, P_2, ..., P_M}. A task set τ = {τ_1, τ_2, ..., τ_N} consists of N independent tasks. Each task τ_i is a 2-tuple ⟨C_i, T_i⟩, where C_i is the worst-case execution time and T_i is the minimum inter-release separation (also called period). T_i is also τ_i's relative deadline. Tasks in τ are sorted in non-decreasing period order, i.e., i < j implies T_i ≤ T_j. Since our proposed algorithms use rate-monotonic scheduling (RMS) as the scheduling algorithm on each processor, we can use the task indices to represent the task priorities, i.e., τ_i has higher priority than τ_j if and only if i < j. The utilization of each task τ_i is defined as U_i = C_i/T_i.

We recall the classical result of Liu and Layland:

Theorem 1 ([18]). On a single-processor system, each task set τ with

    Σ_{τ_i ∈ τ} U_i ≤ N(2^(1/N) − 1)

is schedulable using rate-monotonic scheduling (RMS).

The utilization bound of our proposed semi-partitioned scheduling algorithms is built upon this result. In the remainder of this paper, we use Θ(N) to denote the above utilization bound for N tasks:

    Θ(N) = N(2^(1/N) − 1)    (1)

We further define the utilization of a task set τ in multiprocessor scheduling on M processors as

    U(τ) = Σ_{τ_i ∈ τ} U_i / M    (2)

Figure 1. The subtasks τ_i^1, τ_i^2, τ_i^3 of a split task τ_i and their worst-case response times R_i^1, R_i^2.

For simplicity of presenting our algorithms, we assume each task τ_i ∈ τ has utilization U_i ≤ Θ(N). Note that this assumption does not invalidate our results on task sets containing tasks with utilization higher than Θ(N): if in a task set with U(τ) ≤ Θ(N) there are tasks with a higher (individual) utilization than Θ(N), we can just let each of them run exclusively on its own processor. The remaining task set on the remaining processors still has a utilization of at most Θ(N). If we are able to show its schedulability, then together this results in the desired bound for the full task set.

A semi-partitioned scheduling algorithm consists of two parts: the partitioning algorithm, which determines how to split and assign each task (or rather each of its parts) to a fixed processor, and the scheduling algorithm, which determines how to schedule the tasks assigned to each processor. With the partitioning algorithm, most tasks are assigned to a processor and only execute on this processor at run time. We call these tasks non-split tasks. The other tasks are called split tasks, which are split into several subtasks. Each subtask of a split task τ_i is assigned to (and thereby executes on) a different processor, and the sum of the execution times of all subtasks equals C_i.
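Collecting the notation so far in a minimal Python sketch (a task as a ⟨C_i, T_i⟩ pair, its utilization U_i = C_i/T_i, and the task-set utilization U(τ) of Equation (2)); the class and function names are our own, chosen only for illustration.

    from dataclasses import dataclass

    @dataclass
    class Task:
        c: float   # worst-case execution time C_i
        t: float   # period = relative deadline T_i

        @property
        def u(self):
            """Task utilization U_i = C_i / T_i."""
            return self.c / self.t

    def taskset_utilization(tasks, m):
        """U(tau) = (sum of U_i) / M, Equation (2)."""
        return sum(task.u for task in tasks) / m

    # Example: three tasks on two processors.
    print(taskset_utilization([Task(2, 10), Task(3, 15), Task(5, 20)], m=2))  # 0.325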
For example, in Figure 1 the task τ_i is split into three subtasks τ_i^1, τ_i^2 and τ_i^3, executing on processors P_1, P_2 and P_3, respectively. The subtasks of a task need to be synchronized to execute correctly. For example, in Figure 1, τ_i^2 cannot start execution until τ_i^1 is finished. This equals deferring the actual ready time of τ_i^2 by up to R_i^1 (relative to τ_i's original release time), where R_i^1 is the worst-case response time of τ_i^1. One can regard this as shortening the actual relative deadline of τ_i^2 by up to R_i^1. Similarly, the actual ready time of τ_i^3 is deferred by up to R_i^1 + R_i^2, and τ_i^3's actual relative deadline is shortened by up to R_i^1 + R_i^2. We use τ_i^k to denote the k-th subtask of a split task τ_i, and define τ_i^k's synthetic deadline as

    Δ_i^k = T_i − Σ_{l ∈ [1, k−1]} R_i^l    (3)

Thus, we represent each subtask τ_i^k by a 3-tuple ⟨c_i^k, T_i, Δ_i^k⟩.

Here c_i^k is the execution time of τ_i^k, T_i is the original period, and Δ_i^k is the synthetic deadline. For consistency, each non-split task τ_i can be represented by a single subtask τ_i^1 with c_i^1 = C_i and Δ_i^1 = T_i. The normal utilization of a subtask τ_i^k is U_i^k = c_i^k/T_i, and we define another new metric, the synthetic utilization V_i^k, to describe τ_i^k's workload with respect to its synthetic deadline:

    V_i^k = c_i^k / Δ_i^k    (4)

We call the last subtask of τ_i its tail subtask, denoted by τ_i^t, and the other subtasks its body subtasks, as shown in Figure 1. We use τ_i^{b_j} to denote the j-th body subtask. We use τ_i → P_q to denote that τ_i is assigned to processor P_q, and say that P_q is the host processor of τ_i.

A task set τ is schedulable under a semi-partitioned scheduling algorithm A if, after assigning tasks to processors by A's partitioning algorithm, each task τ_i ∈ τ can meet its deadline under A's scheduling algorithm.

IV. THE FIRST ALGORITHM SPA1

A significant difference between SPA1 and the algorithms in previous work is that SPA1 employs worst-fit partitioning, while all previous algorithms employ first-fit partitioning [17], [14], [15]. The basic procedure of first-fit partitioning is as follows: one selects a processor, assigns tasks to this processor as much as possible to fill its capacity, then picks the next processor and repeats the procedure. In contrast, worst-fit partitioning always selects the processor with the minimal total utilization of tasks assigned to it so far, so the occupied capacities of all processors are increased roughly in turn.

The reason for us to prefer worst-fit partitioning is intuitively explained as follows. A subtask τ_i^k's actual deadline Δ_i^k is shorter than τ_i's original deadline T_i, and the sum of the synthetic utilizations of all of τ_i's subtasks is larger than τ_i's original utilization U_i, which is the key difficulty for semi-partitioned scheduling to achieve the same utilization bound as on single processors. With worst-fit partitioning, the occupied capacities of all processors are increased in turn, and task splitting only occurs when the capacity of a processor is completely filled. Then, if one partitions all tasks in increasing priority order, the split tasks under worst-fit partitioning will generally have relatively high priority levels on each processor. This is good for the schedulability of the task set, since tasks with high priorities usually have a better chance to be schedulable, so they can tolerate the shortened deadlines better. Consider an extreme scenario: if one can make sure that all split tasks' subtasks have the highest priority on their host processors, then there is no need to consider the shortened deadlines of these subtasks, since, being of the highest priority level on each processor, they are schedulable anyway. Thus, as long as the split tasks with shortened deadlines do not cause any problem, Liu and Layland's utilization bound can be easily achieved. The philosophy behind our proposed algorithms is to give the split subtasks as high a priority as possible on each processor. In contrast, with first-fit partitioning, a split subtask may get quite a low priority on its host processor.¹ For instance, with the algorithm in [17] that achieves the utilization bound of 65%, in the worst case the second subtask of a split task will always get the lowest priority on its host processor.

As will be seen later in this section, SPA1 does not completely solve the problem. More precisely, SPA1 is restricted to a class of light task sets, in which the utilization of each task is no larger than Θ(N)/(1 + Θ(N)). Intuitively, this is because if a task's utilization is very large, its tail subtask might still get a relatively low priority on its host processor, even under worst-fit partitioning. (We will solve this problem with SPA2 in Section V.)
In the following, we introduce SPA1 as well as its utilization bound property. The remaining part of this section is structured as follows: we first present the partitioning algorithm of SPA1, and show that any task set τ satisfying U(τ) ≤ Θ(N) can be successfully partitioned by SPA1. Then we introduce how the tasks assigned to each processor are scheduled. Next, we prove that if a light task set is successfully partitioned by SPA1, then all tasks can meet their deadlines under the scheduling algorithm of SPA1. Together, this implies that any light task set with U(τ) ≤ Θ(N) is schedulable by SPA1, and finally shows that the utilization bound of SPA1 is Θ(N) for light task sets.

1:  if U(τ) > Θ(N) then abort
2:  UQ := [τ_N^1, τ_{N−1}^1, ..., τ_1^1]
3:  Ψ[1...M] := all zeros
4:  while UQ ≠ ∅ do
5:      P_q := the processor with the minimal Ψ
6:      τ_i^k := pop_front(UQ)
7:      if U_i^k + Ψ[q] ≤ Θ(N) then
8:          τ_i^k → P_q
9:          Ψ[q] := Ψ[q] + U_i^k
10:     else
11:         split τ_i^k into two parts τ_i^k and τ_i^{k+1} such that U_i^k + Ψ[q] = Θ(N)
12:         τ_i^k → P_q
13:         Ψ[q] := Θ(N)
14:         push_front(τ_i^{k+1}, UQ)
15:     end if
16: end while

Algorithm 1: The partitioning algorithm of SPA1.

¹ Under the algorithms in [15], a split subtask's priority is artificially advanced to the highest level on its host processor, which breaks the RMS priority order and thereby leads to a lower utilization bound.
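The following Python sketch mirrors Algorithm 1 under the assumptions of this section (tasks already indexed in non-decreasing period order, Θ passed in explicitly). It is our own reformulation for illustration, not the authors' implementation, and all identifiers (spa1_partition, uq, psi) are ours.

    def spa1_partition(tasks, m, theta):
        """tasks: list of (C_i, T_i) pairs indexed by priority (index 0 =
        highest priority); returns per-processor subtask lists and loads."""
        if sum(c / t for c, t in tasks) / m > theta:
            raise ValueError("U(tau) exceeds the bound, abort")
        # UQ: unassigned (sub)tasks, front = lowest priority (largest index).
        uq = [(i, c, t) for i, (c, t) in enumerate(tasks, 1)][::-1]
        psi = [0.0] * m                       # utilization assigned to each processor
        assignment = [[] for _ in range(m)]   # subtasks (index, c_part, T) per processor
        while uq:
            q = min(range(m), key=lambda p: psi[p])   # worst-fit: least-loaded processor
            i, c, t = uq.pop(0)
            u = c / t
            if psi[q] + u <= theta:                   # fits entirely
                assignment[q].append((i, c, t))
                psi[q] += u
            else:                                     # split so that processor q becomes full
                c_here = (theta - psi[q]) * t
                assignment[q].append((i, c_here, t))
                psi[q] = theta
                uq.insert(0, (i, c - c_here, t))      # remainder goes back to the front of UQ
        return assignment, psi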

A. SPA1: Partitioning and Scheduling

The partitioning algorithm of SPA1 is very simple and can be briefly described as follows: we assign tasks in increasing priority order, and always select the processor on which the total utilization of the tasks assigned so far is minimal among all processors. When a task (subtask) cannot be assigned entirely to the currently selected processor, we split it into two parts, assign the first part such that the total utilization of the currently selected processor becomes Θ(N), and assign the second part to the next selected processor.

The precise description of the partitioning algorithm is given in Algorithm 1. UQ is the list accommodating the unassigned tasks, sorted in increasing priority order. UQ is initialized as [τ_N^1, τ_{N−1}^1, ..., τ_1^1], in which each element τ_i^1 = ⟨c_i^1, T_i, Δ_i^1⟩ with c_i^1 = C_i and Δ_i^1 = T_i is the initial subtask form of task τ_i. Each element Ψ[q] of the array Ψ[1...M] denotes the sum of the utilizations of the tasks that have been assigned to processor P_q.

The work flow of SPA1 is as follows. In each loop iteration, we pick the task at the front of UQ, denoted by τ_i^k, which has the lowest priority among all unassigned tasks. We try to assign τ_i^k to the processor P_q that has the minimal Ψ[q] among all elements of Ψ[1...M]. If U_i^k + Ψ[q] ≤ Θ(N), then we can assign the entire τ_i^k to P_q, since there is enough capacity available on P_q. Otherwise, we split τ_i^k into two subtasks τ_i^k and τ_i^{k+1}, such that U_i^k + Ψ[q] = Θ(N). (Note that U_i^k = c_i^k/T_i denotes the utilization of subtask τ_i^k.) We further set Ψ[q] := Θ(N), which means processor P_q is full and we will not assign any more tasks to P_q. Then we insert τ_i^{k+1} back at the front of UQ, to assign it in the next loop iteration. We continue this procedure until all tasks have been assigned.

It is easy to see that all task sets below the desired utilization bound can be successfully partitioned by SPA1:

Lemma 1. Any task set with

    U(τ) ≤ Θ(N)    (5)

can be successfully partitioned onto M processors with SPA1.

Note that the partitioning algorithm gives no schedulability guarantee by itself; schedulability is proved in the next subsection. After the tasks are assigned (and possibly split) to the processors by the partitioning algorithm of SPA1, they are scheduled using RMS on each processor locally, i.e., with their original priorities. The subtasks of a split task respect their precedence relations, i.e., a split subtask τ_i^k is ready for execution when its preceding subtask τ_i^{k−1} on some other processor has finished.

Figure 2. Each subtask τ_i^k can be viewed as an independent task with period T_i and deadline Δ_i^k.

B. Schedulability

We first show an important property of SPA1:

Lemma 2. After partitioning according to SPA1, each body subtask has the highest priority on its host processor.

Proof: In the partitioning algorithm of SPA1, task splitting only occurs when a processor is full. Thus, after a body subtask is assigned to a processor, no more tasks will be assigned to it. Further, the tasks are partitioned in increasing priority order, so all tasks assigned to the processor before have lower priority.

By Lemma 2, we further know that the response time of each body subtask equals its execution time, so the synthetic deadline Δ_i^t of each tail subtask τ_i^t is calculated as follows:

    Δ_i^t = T_i − Σ_{j ∈ [1, B]} c_i^{b_j} = T_i − (C_i − c_i^t)    (6)

So we can view the scheduling of SPA1 on each processor without considering the synchronization between the subtasks of a split task, and just regard every split subtask τ_i^k as an independent task with period T_i and a shorter relative deadline Δ_i^k calculated by Equation (6), as shown in Figure 2.
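Because Lemma 2 makes each body subtask's response time equal to its execution time, Equation (6) lets us compute a tail subtask's synthetic deadline directly from C_i and c_i^t. The small Python helpers below (our own names, not from the paper) illustrate Equations (4) and (6).

    def tail_synthetic_deadline(c_total, c_tail, t):
        """Delta_i^t = T_i - (C_i - c_i^t): valid because each body subtask
        has the highest priority on its host processor (Lemma 2, Eq. (6))."""
        return t - (c_total - c_tail)

    def synthetic_utilization(c_part, deadline):
        """V_i^k = c_i^k / Delta_i^k (Eq. (4))."""
        return c_part / deadline

    # Values taken from the example discussed in Section V: C = 4.25, T = 10,
    # tail execution time 0.5, so Delta = 10 - 3.75 = 6.25 and V = 0.08.
    d = tail_synthetic_deadline(c_total=4.25, c_tail=0.5, t=10.0)
    print(d, synthetic_utilization(0.5, d))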
In the following we prove the schedulability of non-split tasks, body subtasks and tail subtasks, respectively.

1) Non-split Tasks:

Lemma 3. If a task set τ with U(τ) ≤ Θ(N) is partitioned by SPA1, then any non-split task of τ can meet its deadline.

Proof: The tasks on each processor are scheduled by RMS, and the sum of the utilizations of all tasks on a processor is no larger than Θ(N). Further, the deadlines of the non-split tasks are unchanged and therefore still equal their periods. Thus, each non-split task is schedulable. Note that although the synthetic deadlines of other subtasks are shorter than their original periods, this does not affect the schedulability of the non-split tasks, since only the periods of these subtasks are relevant to the schedulability of the non-split tasks.

2) Body Subtasks:

Lemma 4. If a task set τ with U(τ) ≤ Θ(N) is partitioned by SPA1, then any body subtask of τ can meet its deadline.

Proof: The body subtasks have the highest priorities on their host processors and will therefore always meet their deadlines. (This holds even though the deadlines were shortened because of the task splitting.)

Figure 3. Illustration of X^{b_j}, X^t and Y_i^t.

Figure 4. Illustration of Γ and Γ̃.

3) Tail Subtasks: Now we prove the schedulability for an arbitrary tail subtask τ_i^t, during which we only focus on τ_i^t, and do not consider whether other tail subtasks are schedulable or not. Since the same reasoning can be applied to every tail subtask, the proofs guarantee that all tail subtasks are schedulable.

Suppose task τ_i is split into B body subtasks and one tail subtask. Recall that we use τ_i^{b_j}, j ∈ [1, B], to denote the j-th body subtask of τ_i, and τ_i^t to denote τ_i's tail subtask. U_i^{b_j} = c_i^{b_j}/T_i and U_i^t = c_i^t/T_i denote τ_i^{b_j}'s and τ_i^t's original utilizations, respectively. Additionally, we use the following notations (cf. Figure 3):

For each body subtask τ_i^{b_j}, let X^{b_j} denote the sum of the utilizations of all the tasks assigned to P^{b_j} with lower priority than τ_i^{b_j}.

For the tail subtask τ_i^t, let X^t denote the sum of the utilizations of all the tasks assigned to P^t with lower priority than τ_i^t.

For the tail subtask τ_i^t, let Y_i^t denote the sum of the utilizations of all the tasks assigned to P^t with higher priority than τ_i^t.

We can now use these for the schedulability of the tail subtasks:

Lemma 5. Suppose a tail subtask τ_i^t is assigned to processor P^t. If τ_i^t satisfies

    Y_i^t · T_i/Δ_i^t + V_i^t ≤ Θ(N),    (7)

then τ_i^t can meet its deadline.

Proof: The proof idea is as follows. We consider the set Γ consisting of τ_i^t and all tasks with higher priority than τ_i^t on the same processor, i.e., the tasks contributing to Y_i^t. For this set, we construct a new task set Γ̃, in which the periods that are larger than Δ_i^t are all reduced to Δ_i^t. The main idea is to first show that the counterpart of τ_i^t is schedulable in this new set Γ̃ under RMS because of the utilization bound Θ(N), and then to prove that this implies the schedulability of τ_i^t in the original set Γ.

In particular, let P^t be the processor to which τ_i^t is assigned. We define Γ as follows:

    Γ = {τ_h^k | τ_h^k ∈ P^t ∧ h ≤ i}    (8)

We now give the construction of Γ̃: for each task τ_h^k ∈ Γ, we have a counterpart τ̃_h^k in Γ̃. The only difference is that we possibly reduce the periods:

    c̃_h^k = c_h^k;    T̃_h = T_h if T_h ≤ Δ_i^t,    T̃_h = Δ_i^t if T_h > Δ_i^t

We also keep the same priority order of tasks in Γ̃ as for their counterparts in Γ, which is still a rate-monotonic ordering. Figure 4 illustrates the construction. In Figure 4(a), Γ contains three tasks: τ_1 has a period that is smaller than Δ_i^t, and τ_2 has a larger one; further, τ_i^t is contained in Γ. According to the construction, Γ̃ in Figure 4(b) also has three tasks τ̃_1, τ̃_2 and τ̃_i^t, where only the periods of τ̃_2 and τ̃_i^t are reduced to Δ_i^t.

Now we show the schedulability of τ̃_i^t in Γ̃. We do this by showing the sufficient upper bound of Θ(N) on the total utilization of Γ̃:

    U(Γ̃) = Σ_{τ_h^k ∈ Γ} c̃_h^k/T̃_h = Σ_{τ_h^k ∈ Γ\{τ_i^t}} c̃_h^k/T̃_h + V_i^t    (9)

We now do a case distinction for tasks τ̃_h^k ∈ Γ̃, according to whether their periods were reduced or not.

If T_h ≤ Δ_i^t, we have T̃_h = T_h. Since T_i > Δ_i^t, we have:

    c̃_h^k/T̃_h = c_h^k/T_h = U_h^k < U_h^k · T_i/Δ_i^t

If T_h > Δ_i^t, we have T̃_h = Δ_i^t. Because the priorities are ordered by periods, we have T_h ≤ T_i. Thus:

    c̃_h^k/T̃_h = c_h^k/Δ_i^t ≤ (c_h^k/T_h) · T_i/Δ_i^t = U_h^k · T_i/Δ_i^t

Both cases lead to c̃_h^k/T̃_h ≤ U_h^k · T_i/Δ_i^t, so we can apply this to (9) from above:

    U(Γ̃) ≤ Σ_{τ_h^k ∈ Γ\{τ_i^t}} U_h^k · T_i/Δ_i^t + V_i^t    (10)

Since Y_i^t = Σ_{τ_h^k ∈ Γ\{τ_i^t}} U_h^k, we have:

    U(Γ̃) ≤ Y_i^t · T_i/Δ_i^t + V_i^t

Finally, by the assumption from Condition (7) we know that the right-hand side is at most Θ(N), and thus U(Γ̃) ≤ Θ(N). Therefore, τ̃_i^t is schedulable. Note that in Γ̃ there could exist other tail subtasks whose deadlines are shorter than their periods. However, this does not invalidate that the condition U(Γ̃) ≤ Θ(N) is sufficient to guarantee the schedulability of τ̃_i^t under RMS.

Now we need to see that this implies the schedulability of τ_i^t. Recall that the only difference between Γ and Γ̃ is that the period of a task in Γ is possibly larger than that of its counterpart in Γ̃. So the interference τ_i^t suffers from the higher-priority tasks in Γ is no larger than the interference τ̃_i^t suffers in Γ̃, and since the deadlines of τ_i^t and τ̃_i^t are the same, we know the schedulability of τ̃_i^t implies the schedulability of τ_i^t.

It remains to show that Condition (7) holds, which was the assumption for this lemma and thus a sufficient condition for tail subtasks to be schedulable. As stated in the introduction of this section, this condition does not hold in general for SPA1, but only for certain light task sets:

Definition 1. A task τ_i is a light task if

    U_i ≤ Θ(N)/(1 + Θ(N)).

Otherwise, τ_i is a heavy task. A task set τ is a light task set if all tasks in τ are light tasks.

Lemma 6. Suppose a tail subtask τ_i^t is assigned to processor P^t. If τ_i is a light task, we have

    Y_i^t · T_i/Δ_i^t + V_i^t ≤ Θ(N).

Proof: We will first derive a general upper bound on Y_i^t based on the properties of X^{b_j}, X^t and the subtasks' utilizations. Based on this, we derive the bound we want to show, using the assumption that τ_i is a light task.

For deriving the upper bound on Y_i^t, we note that as soon as a task is split into a body subtask and a rest, the processor hosting this new body subtask is full, i.e., its utilization is Θ(N). Further, each body subtask has by construction the highest priority on its host processor, so we have:

    ∀j ∈ [1, B]: U_i^{b_j} + X^{b_j} = Θ(N)

We sum over all B of these equations, and get:

    Σ_j U_i^{b_j} + Σ_j X^{b_j} = B · Θ(N)    (11)

Now we consider the processor containing τ_i^t, denoted by P^t. Its total utilization is X^t + U_i^t + Y_i^t and is at most Θ(N), i.e., X^t + U_i^t + Y_i^t ≤ Θ(N). We combine this with (11) and get:

    Y_i^t ≤ (Σ_j U_i^{b_j} + Σ_j X^{b_j}) / B − U_i^t − X^t    (12)

In order to simplify this, we recall that during the partitioning phase we always select the processor with the smallest total utilization of tasks assigned to it so far (recall line 5 in Algorithm 1). This implies X^{b_j} ≤ X^t for all subtasks τ_i^{b_j}. Thus, the sum over all X^{b_j} is bounded by B · X^t and we can cancel out both terms in (12):

    Y_i^t ≤ (Σ_j U_i^{b_j}) / B − U_i^t

Another simplification is possible using that B ≥ 1 and that τ_i's utilization U_i is the sum of the utilizations of all of its subtasks, i.e., Σ_j U_i^{b_j} = U_i − U_i^t:

    Y_i^t ≤ U_i − 2·U_i^t

We are now done with the first part, i.e., deriving an upper bound for Y_i^t. This can easily be transformed into an upper bound on the term we are interested in:

    Y_i^t · T_i/Δ_i^t + V_i^t ≤ (U_i − 2·U_i^t) · T_i/Δ_i^t + V_i^t    (13)

For the rest of the proof, we try to bound the right-hand side from above by Θ(N), which will complete the proof. The key is to bring it into a form that is suitable for using the assumption that τ_i is a light task. As a first step, we use that the synthetic deadline of τ_i^t is the period T_i reduced by the total computation time of τ_i's body subtasks, i.e., Δ_i^t = T_i − (C_i − c_i^t), cf. Equation (6). Further, we use the definitions U_i = C_i/T_i, U_i^t = c_i^t/T_i and V_i^t = c_i^t/Δ_i^t to derive:

    (U_i − 2·U_i^t) · T_i/Δ_i^t + V_i^t = (C_i − c_i^t) / (T_i − (C_i − c_i^t))

Since c_i^t > 0, we can find a simple upper bound of the right-hand side:

    (C_i − c_i^t) / (T_i − (C_i − c_i^t)) < C_i / (T_i − C_i)

Since τ_i is a light task, we have

    U_i ≤ Θ(N)/(1 + Θ(N)),

and by applying U_i = C_i/T_i to the above, we obtain

    C_i / (T_i − C_i) ≤ Θ(N).

Thus, we have established that Θ(N) is an upper bound of Y_i^t · T_i/Δ_i^t + V_i^t, with which we started in (13).
From Lemmas 5 and 6, the desired property follows directly:

Lemma 7. If a task set τ with U(τ) ≤ Θ(N) is partitioned by SPA1, then any tail subtask of a light task of τ can meet its deadline.
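Condition (7) and the light-task test of Definition 1 can be checked numerically once the relevant quantities are known. The small Python helpers below are our own packaging of these two tests, not part of the paper's algorithms.

    def tail_subtask_schedulable(y_t, t_i, delta_t, c_tail, theta):
        """Sufficient test of Lemma 5 / Condition (7):
        Y_i^t * T_i / Delta_i^t + V_i^t <= Theta(N)."""
        return y_t * t_i / delta_t + c_tail / delta_t <= theta

    def is_light(u_i, theta):
        """Definition 1: U_i <= Theta(N) / (1 + Theta(N))."""
        return u_i <= theta / (1 + theta)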

C. Utilization Bound

By Lemma 1 we know that a task set τ can be successfully partitioned by the partitioning algorithm of SPA1 if U(τ) is no larger than Θ(N). If τ has been successfully partitioned, by Lemmas 3 and 4 we know that all the non-split tasks and body subtasks are schedulable. By Lemma 7 we know a tail subtask τ_i^t is also schedulable if τ_i is a light task. Since, in general, it is a priori unknown which tasks will be split, we pose this constraint of being light on all tasks in τ to obtain a sufficient schedulability test condition:

Theorem 2. Let τ be a task set only containing light tasks. τ is schedulable with SPA1 on M processors if

    U(τ) ≤ Θ(N).    (14)

In other words, the utilization bound of SPA1 is Θ(N) for task sets only containing tasks with utilization no larger than Θ(N)/(1 + Θ(N)).

Θ(N) is a decreasing function with respect to N, which means the utilization bound is higher for task sets with fewer tasks. We use N* to denote the maximal number of tasks (subtasks) assigned to any processor, so Θ(N*), which is strictly larger than Θ(N), also serves as the utilization bound on each processor. Therefore we can use Θ(N*) to replace Θ(N) in the derivation above, and conclude that the utilization bound of SPA1 is Θ(N*) for task sets only containing tasks with utilization no larger than Θ(N*)/(1 + Θ(N*)). It is easy to see that there is at least one task assigned to each processor, and two subtasks of a task cannot be assigned to the same processor. Therefore the number of tasks executing on each processor is at most N − M + 1, which can be used as an over-approximation of N*.

V. THE SECOND ALGORITHM SPA2

In this section we introduce our second semi-partitioned fixed-priority scheduling algorithm SPA2, which has the utilization bound Θ(N) for task sets without any constraint. As discussed at the beginning of Section IV, the key point for our algorithms to achieve high utilization bounds is to give each split task as high a priority as possible on its host processor. With SPA1, the tail subtask of a task with very large utilization could have a relatively low priority on its host processor, as the example in Figure 5 illustrates. This is why the utilization bound of SPA1 is not applicable to task sets containing heavy tasks.

Figure 5. The tail subtask of a task with large utilization may have a low priority level.

To solve this problem, we propose the second semi-partitioned algorithm SPA2 in this section. The main idea of SPA2 is to pre-assign each heavy task whose tail subtask might get a low priority, before partitioning the other tasks; therefore these heavy tasks will not be split. Note that if one simply pre-assigns all heavy tasks, it is still possible for some tail subtask to get a low priority level on its host processor. Consider the task set in Table I with 2 processors, where for simplicity we assume Θ(N) = 0.8 and Θ(N)/(1 + Θ(N)) = 4/9.

Table I. AN EXAMPLE TASK SET

Task | C | T | Heavy Task? | Priority
τ_1  |   |   | yes         | highest
τ_2  |   |   | no          | middle
τ_3  |   |   | no          | lowest

If we pre-assign the heavy task τ_1 to processor P_1, and then assign τ_2 and τ_3 by the partitioning algorithm of SPA1, the task partitioning looks as follows:

1) τ_1 → P_1,
2) τ_3 → P_2,
3) τ_2 cannot be entirely assigned to P_2, so it is split into two subtasks τ_2^1 = ⟨3.75, 10, 10⟩ and τ_2^2 = ⟨0.5, 10, 6.25⟩, and τ_2^1 → P_2,
4) τ_2^2 → P_1.

Then the tasks on each processor are scheduled by RMS. We can see that the tail subtask τ_2^2 has the lowest priority on P_1 and will miss its deadline due to the higher-priority task τ_1. However, if we do not pre-assign τ_1 and just do the partitioning with SPA1, this task set is schedulable. To overcome this problem, a more sophisticated pre-assigning mechanism is employed in our second algorithm SPA2.
Intuitively, SPA2 pre-assigns exactly those heavy tasks for which pre-assigning them will not cause any tail subtask to miss its deadline. This is checked using a simple test. Those heavy tasks that do not satisfy this test will be assigned (and possibly split) later, together with the light tasks. The key for this to work is that, for these heavy tasks, we can use the property of failing the test to show that their tail subtasks will not miss their deadlines either.

A. SPA2: Partitioning and Scheduling

We first introduce some notation. If a heavy task τ_i is pre-assigned to a processor P_q in SPA2, we call τ_i a pre-assigned task, otherwise a normal task, and we call P_q a pre-assigned processor, otherwise a normal processor. The partitioning algorithm of SPA2 contains three steps:

1) We first pre-assign the heavy tasks that satisfy a particular condition to one processor each.

2) We do task partitioning with the remaining (i.e., normal) tasks and the remaining (i.e., normal) processors using SPA1, until all the normal processors are full.

3) The remaining tasks are assigned to the pre-assigned processors; the assignment selects one processor and assigns as many tasks as possible to it until it becomes full, then selects the next processor.

The precise description of the partitioning algorithm of SPA2 is shown in Algorithm 2.

1:  if U(τ) > Θ(N) then abort
2:  PQ := [P_1, P_2, ..., P_M]
3:  PQ_pre := ∅
4:  UQ := ∅
5:  Ψ[1...M] := all zeros
6:  for i := 1 to N do
7:      if τ_i is heavy ∧ Σ_{j>i} U_j ≤ (|PQ| − 1)·Θ(N) then
8:          P_q := pop_front(PQ)
9:          pre-assign τ_i to P_q
10:         push_front(P_q, PQ_pre)
11:         Ψ[q] := Ψ[q] + U_i
12:     else
13:         push_front(τ_i^1, UQ)
14:     end if
15: end for
16: while UQ ≠ ∅ do
17:     τ_i^k := pop_front(UQ)
18:     if ∃P_q ∈ PQ : Ψ[q] < Θ(N) then
19:         P_q := the element in PQ with the minimal Ψ
20:     else
21:         P_q := pop_front(PQ_pre)
22:     end if
23:     if U_i^k + Ψ[q] ≤ Θ(N) then
24:         τ_i^k → P_q
25:         Ψ[q] := Ψ[q] + U_i^k
26:         if P_q came from PQ_pre then
27:             push_front(P_q, PQ_pre)
28:         end if
29:     else
30:         split τ_i^k into two parts τ_i^k and τ_i^{k+1} such that U_i^k + Ψ[q] = Θ(N)
31:         τ_i^k → P_q
32:         Ψ[q] := Θ(N)
33:         push_front(τ_i^{k+1}, UQ)
34:     end if
35: end while

Algorithm 2: The partitioning algorithm of SPA2.

We first introduce the data structures used in the algorithm:

PQ is the list of all processors. It is initially [P_1, P_2, ..., P_M], and processors are always taken out of and put back at the front.

PQ_pre is the list accommodating pre-assigned processors, initially empty.

UQ is the list accommodating the tasks still unassigned after Step 1). Initially it is empty, and during Step 1) each task τ_i that is determined not to be pre-assigned is put into UQ (already in its subtask form τ_i^1).

Ψ[1...M] is an array with the same meaning as in SPA1: each element Ψ[q] denotes the sum of the utilizations of the tasks that have been assigned to processor P_q.

In the following we use the task set example in Table II with 4 processors to demonstrate how the partitioning algorithm of SPA2 works. For simplicity, we assume Θ(N) = 0.7; then the utilization threshold for light tasks, Θ(N)/(1 + Θ(N)), is around 0.41.

Table II. AN EXAMPLE DEMONSTRATING SPA2

Task | C | T | Heavy Task? | Priority
τ_1  |   |   | no          | highest
τ_2  |   |   | yes         |
τ_3  |   |   | yes         |
τ_4  |   |   | no          |
τ_5  |   |   | no          |
τ_6  |   |   | yes         |
τ_7  |   |   | no          | lowest

The initial state of the data structures is as follows:

PQ = [P_1, P_2, P_3, P_4]
PQ_pre = ∅
UQ = ∅
Ψ[1...4] = [0, 0, 0, 0]

In Step 1) (lines 6 to 15), each task τ_i in τ is visited in increasing index order, i.e., decreasing priority order. If τ_i is a heavy task, we evaluate the following condition (line 7):

    Σ_{j>i} U_j ≤ (|PQ| − 1)·Θ(N)    (15)

in which |PQ| is the number of processors left in PQ so far. A heavy task τ_i is determined to be pre-assigned to a processor if this condition is satisfied. The intuition for this is: if we pre-assign this task τ_i, then there is enough space on the remaining processors to accommodate all remaining lower-priority tasks. That way, no lower-priority tail subtask will end up on the processor to which we assign τ_i.

In our example, we first visit the first task τ_1^1. It is a light task, so we put it at the front of UQ (line 13). The next task τ_2 is heavy, but Condition (15) with |PQ| = 4 is not satisfied, so we put τ_2^1 at the front of UQ. The next task τ_3 is heavy, and Condition (15) with |PQ| = 4 is satisfied. Thus, we pre-assign τ_3 to P_1, and put P_1 at the front of PQ_pre (lines 8 to 10). τ_4 and τ_5 are both light tasks, so we put them into UQ respectively. τ_6 is heavy, and Condition (15) with |PQ| = 3 (P_1 has been taken out of PQ and put into PQ_pre) is satisfied, so we pre-assign τ_6 to P_2 and put P_2 at the front of PQ_pre.

The last task τ_7 is light, so it is put at the front of UQ. At this point the Step 1) phase is finished, and the state of the data structures is as follows:

PQ = [P_3, P_4]
PQ_pre = [P_2, P_1]
UQ = [τ_7^1, τ_5^1, τ_4^1, τ_2^1, τ_1^1]
Ψ[1...4] = [0.6, 0.6, 0, 0]

Note that the front of PQ_pre holds the processor whose pre-assigned task has the lowest priority, and the front of UQ holds the unassigned task with the lowest priority.

Steps 2) and 3) are both performed in the while loop of lines 16 to 35. In Step 2), the remaining tasks (which are now in UQ) are assigned to normal processors (the ones in PQ). Only as soon as all processors in PQ are full does the algorithm enter Step 3), in which tasks are assigned to processors in PQ_pre (decision in lines 18 to 22). The operation of assigning a task τ_i^k (lines 23 to 34) is basically the same as in SPA1. If τ_i^k can be entirely assigned to P_q without task splitting, then τ_i^k → P_q and Ψ[q] is updated (lines 24 to 28). If P_q is a pre-assigned processor, P_q is put back at the front of PQ_pre (lines 26 to 28), so that it will be selected again in the next loop iteration; otherwise no putting-back operation is needed, since we never take elements out of PQ but just select the proper one in it (line 19). If τ_i^k cannot be assigned to P_q entirely, τ_i^k is split into a new τ_i^k and another subtask τ_i^{k+1}, such that P_q becomes full after the new τ_i^k is assigned to it, and then we put τ_i^{k+1} back into UQ (lines 29 to 33).

Note that there is an important difference between assigning tasks to normal processors and to pre-assigned processors. When tasks are assigned to normal processors, the algorithm always selects the processor with the minimal Ψ (the same as in SPA1). In contrast, when tasks are assigned to pre-assigned processors, always the processor at the front of PQ_pre is selected, i.e., we assign as many tasks as possible to the processor in PQ_pre whose pre-assigned task has the lowest priority, until it is full. As will be seen later in the schedulability proof, this particular order of selecting pre-assigned processors, together with the evaluation of Condition (15), is the key to guaranteeing the schedulability of heavy tasks.

With our running example, the remaining tasks are first assigned to the normal processors P_3 and P_4 in the same way as by SPA1. Thus, τ_7^1 → P_3, then τ_5^1 → P_4, then τ_4^1 → P_3, then τ_2^1 is split into τ_2^1 = ⟨4, 10, 10⟩ and τ_2^2 = ⟨0.5, 10, 6⟩, and τ_2^1 → P_4. At this point all normal processors are full, and the state of the data structures is as follows:

PQ = [P_3, P_4] (both P_3 and P_4 are full)
PQ_pre = [P_2, P_1]
UQ = [τ_2^2, τ_1^1]
Ψ[1...4] = [0.6, 0.6, 0.7, 0.7]

Then the remaining tasks in UQ are assigned to the pre-assigned processors. At first τ_2^2 → P_2, after which P_2 is not full and still at the front of PQ_pre. So the next task τ_1^1 is also assigned to P_2. There is no unassigned task any more, so the algorithm terminates.

It is easy to see that any task set below the desired utilization bound can be successfully partitioned by SPA2:

Lemma 8. Any task set with U(τ) ≤ Θ(N) can be successfully partitioned onto M processors with SPA2.

After describing the partitioning part of SPA2, we also need to describe the scheduling part. It is the same as SPA1: on each processor the tasks are scheduled by RMS, respecting the precedence relations between the subtasks of a split task, i.e., a subtask is ready for execution as soon as the execution of its preceding subtask has finished. Note that under SPA2, each body subtask also has the highest priority on its host processor, which is the same as in SPA1.
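The partitioning procedure just walked through can be summarized in the following Python sketch. It is our own illustration at the level of detail given in the text (Condition (15) for pre-assignment, worst-fit on normal processors, then the pre-assigned processors starting from the one whose pre-assigned task has the lowest priority); all identifiers are ours, not the authors' implementation.

    def spa2_partition(tasks, m, theta):
        """tasks: list of (C_i, T_i) pairs, index 0 = highest priority.
        Returns per-processor subtask lists (task_index, c_part, T) and loads."""
        n = len(tasks)
        u = [c / t for c, t in tasks]
        if sum(u) / m > theta:
            raise ValueError("U(tau) exceeds the bound, abort")
        light = theta / (1 + theta)      # light/heavy threshold of Definition 1
        pq = list(range(m))              # normal processors
        pq_pre = []                      # pre-assigned processors, front = lowest-priority pre-assigned task
        uq = []                          # unassigned subtasks, front = lowest priority
        psi = [0.0] * m
        assignment = [[] for _ in range(m)]

        # Step 1: pre-assign heavy tasks satisfying Condition (15).
        for i in range(n):
            if u[i] > light and sum(u[i + 1:]) <= (len(pq) - 1) * theta:
                q = pq.pop(0)
                assignment[q].append((i + 1, tasks[i][0], tasks[i][1]))
                pq_pre.insert(0, q)
                psi[q] += u[i]
            else:
                uq.insert(0, (i + 1, tasks[i][0], tasks[i][1]))

        # Steps 2 and 3: worst-fit on normal processors until they are full,
        # then fill the pre-assigned processors from the front of pq_pre.
        while uq:
            i, c, t = uq.pop(0)
            open_normal = [q for q in pq if psi[q] < theta]
            from_pre = not open_normal
            q = pq_pre.pop(0) if from_pre else min(open_normal, key=lambda p: psi[p])
            ui = c / t
            if psi[q] + ui <= theta:
                assignment[q].append((i, c, t))
                psi[q] += ui
                if from_pre and psi[q] < theta:
                    pq_pre.insert(0, q)          # processor not yet full: keep it at the front
            else:
                c_here = (theta - psi[q]) * t    # split so that processor q becomes full
                assignment[q].append((i, c_here, t))
                psi[q] = theta
                uq.insert(0, (i, c - c_here, t))
        return assignment, psi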
Since each body subtask has the highest priority on its host processor, we can again view the scheduling on each processor as RMS with a set of independent tasks, in which each subtask's deadline is shortened by the sum of the execution times of all its preceding subtasks.

B. Properties

Now we introduce some useful properties of SPA2.

Lemma 9. Let τ_i be a heavy task, and let there be η pre-assigned tasks with higher priority than τ_i. Then we know:

If τ_i is a pre-assigned task, it satisfies

    Σ_{j>i} U_j ≤ (M − η − 1)·Θ(N)    (16)

If τ_i is not a pre-assigned task, it satisfies

    Σ_{j>i} U_j > (M − η − 1)·Θ(N)    (17)

Proof: The proof directly follows from the partitioning algorithm of SPA2.

Lemma 10. Each pre-assigned task has the lowest priority on its host processor.

Proof: Without loss of generality, we sort all processors into a list Q as follows: we first sort all pre-assigned processors in Q, in decreasing priority order of the pre-assigned tasks on them; then the normal processors follow in Q in arbitrary order. We use P_x to denote the x-th processor in Q. Suppose τ_i is a heavy task pre-assigned to P_q. τ_i is a pre-assigned task, and the number of pre-assigned tasks with higher priority than τ_i is q − 1, so by Lemma 9 we know the following condition is satisfied:

    Σ_{j>i} U_j ≤ (M − q)·Θ(N)    (18)

In the partitioning algorithm of SPA2, normal tasks are assigned to pre-assigned processors only when all normal processors are full, and the pre-assigned processors are selected in increasing priority order of the pre-assigned tasks on them.

So we know that only when the processors P_{q+1}, ..., P_M are all full can normal tasks be assigned to processor P_q. The total capacity of the processors P_{q+1}, ..., P_M is (M − q)·Θ(N) (in our algorithms a processor is full as soon as the total utilization on it is Θ(N)), and by (18) we know that, when we start to assign tasks to P_q, the tasks with lower priority than τ_i have all been assigned to the processors P_{q+1}, ..., P_M. So all normal tasks (subtasks) assigned to P_q have higher priority than τ_i.

Lemma 11. Each body subtask has the highest priority on its host processor.

Proof: Consider a body subtask τ_i^{b_j} assigned to processor P^{b_j}. Since task splitting only occurs when a processor is full, and all the normal tasks are assigned in increasing priority order, we know τ_i^{b_j} has the highest priority among all normal tasks on P^{b_j}. Additionally, by Lemma 10 we know that if P^{b_j} is a pre-assigned processor, the pre-assigned task on P^{b_j} also has lower priority than τ_i^{b_j}. So τ_i^{b_j} has the highest priority on P^{b_j}.

C. Schedulability

By Lemma 11 we know that under SPA2 each body subtask has the highest priority on its host processor, so all body subtasks are schedulable. The scheduling algorithm of SPA2 is still RMS, and the deadline of a non-split task still equals its period, so the schedulability of non-split tasks can be proved in the same way as in SPA1 (Lemma 3). In the following we prove the schedulability of tail subtasks.

Suppose τ_i is split into B body subtasks and one tail subtask. Recall that we use τ_i^{b_j}, j ∈ [1, B], to denote the j-th body subtask of τ_i, and τ_i^t to denote τ_i's tail subtask. X^t, Y_i^t and X^{b_j} are defined as in Section IV-B. First we recall Lemma 5, which was used to prove the schedulability of tail subtasks in SPA1: if a tail subtask τ_i^t satisfies

    Y_i^t · T_i/Δ_i^t + V_i^t ≤ Θ(N),    (19)

then τ_i^t can meet its deadline. This conclusion also holds for SPA2, since the scheduling algorithm of SPA2 is also RMS, which is the only relevant property required by the proof of Lemma 5. So proving the schedulability of tail subtasks is reduced to proving Condition (19) for tail subtasks under SPA2.

We call τ_i^t a tail-of-heavy if τ_i is heavy, otherwise a tail-of-light. In the following we prove Condition (19) for τ_i^t in three cases:

1) τ_i^t is a tail-of-light, and P^t is a normal processor,
2) τ_i^t is a tail-of-light, and P^t is a pre-assigned processor,
3) τ_i^t is a tail-of-heavy.

Case 1) can be proved in the same way as in SPA1, since both the partitioning and the scheduling algorithm of SPA2 on normal processors are the same as in SPA1. Actually, one can regard the partitioning and scheduling of SPA2 on normal processors as the partitioning and scheduling of SPA1 with a subset of tasks (those assigned to normal processors) on a subset of processors (the normal processors). So the schedulability of τ_i^t in this case can be proved by exactly the same reasoning as for Lemma 6.

Now we prove Case 2), where τ_i^t is a tail-of-light and P^t is a pre-assigned processor.

Lemma 12. Suppose τ_i^t is a tail-of-light assigned to a pre-assigned processor P^t under SPA2. We have

    Y_i^t · T_i/Δ_i^t + V_i^t < Θ(N).

Proof: By Lemma 10 we know τ_i^t has higher priority than the pre-assigned task of P^t, so X^t is no smaller than the utilization of this pre-assigned task. And since a pre-assigned task must be heavy, we have

    X^t > Θ(N)/(1 + Θ(N))    (20)

On the other hand, since τ_i is light, we know

    C_i/T_i ≤ Θ(N)/(1 + Θ(N)).

We use c_i^B to denote the total execution time of all of τ_i's body subtasks.
Since c_i^B < C_i, C_i/T_i ≤ Θ(N)/(1 + Θ(N)) and Θ(N) < 1, we have

    c_i^B/T_i < 1/(1 + Θ(N))
    ⟹ T_i · (1 − 1/(1 + Θ(N))) < T_i − c_i^B
    ⟹ T_i/(T_i − c_i^B) · (Θ(N) − Θ(N)/(1 + Θ(N))) < Θ(N)
    ⟹ T_i/Δ_i^t · (Θ(N) − Θ(N)/(1 + Θ(N)) − U_i^t) + V_i^t < Θ(N)    (21)

where the last step uses Δ_i^t = T_i − c_i^B and T_i·U_i^t/Δ_i^t = c_i^t/Δ_i^t = V_i^t. By (21) and (20) we have

    T_i/Δ_i^t · (Θ(N) − X^t − U_i^t) + V_i^t < Θ(N),

and since the total utilization on each processor is bounded by Θ(N), i.e., Y_i^t ≤ Θ(N) − X^t − U_i^t, we finally have Y_i^t · T_i/Δ_i^t + V_i^t < Θ(N).

Now we prove Case 3), where τ_i^t is a tail-of-heavy. Note that in this case P^t can be either a pre-assigned or a normal processor.

Lemma 13. If τ_i^t is the tail subtask of a normal heavy task τ_i, then we have

    Y_i^t · T_i/Δ_i^t + V_i^t < Θ(N).

Proof: By the property in Lemma 9 concerning normal heavy tasks, we know τ_i satisfies the condition

    Σ_{j>i} U_j > (M − η − 1)·Θ(N),

in which η is the number of pre-assigned tasks with higher priority than τ_i. We use M to denote the set of all processors, so |M| = M, and use H to denote the set of pre-assigned processors on which the pre-assigned tasks' priorities are higher than τ_i's, so |H| = η. Thus we have:

    Σ_{j>i} U_j > (|M| − |H| − 1)·Θ(N)    (22)

By Lemma 10 we know any normal task assigned to a pre-assigned processor has higher priority than the pre-assigned task of this processor. Therefore, τ_i's body and tail subtasks are all assigned to processors in M \ H. Moreover, when we start to assign τ_i, all tasks with lower priority than τ_i have already been assigned (or pre-assigned) to processors in M \ H, since pre-assigned tasks are assigned before the normal tasks are dealt with, and all normal tasks are assigned in increasing priority order.

We use K to denote the set of processors in M \ H that contain neither τ_i's body nor tail subtasks, and for each processor P_k ∈ K we use X^k to denote the total utilization of the tasks with lower priority than τ_i assigned to P_k. Then we have

    X^t + Σ_j X^{b_j} + Σ_{P_k ∈ K} X^k = Σ_{j>i} U_j

Since |K| = |M| − |H| − (B + 1), and ∀P_k ∈ K: X^k ≤ Θ(N), we have

    X^t + Σ_j X^{b_j} ≥ Σ_{j>i} U_j − (|M| − |H| − (B + 1))·Θ(N)    (23)

By Inequalities (22) and (23) we have

    X^t + Σ_j X^{b_j} > B·Θ(N)    (24)

Now we look at processor P^t, the total utilization of which is bounded by Θ(N), so we have:

    Y_i^t ≤ Θ(N) − X^t − U_i^t    (25)

By (24) and (25) we have

    Y_i^t < (1 − B)·Θ(N) + Σ_j X^{b_j} − U_i^t,

and since U_i^t + Σ_j U_i^{b_j} = U_i, we have

    Y_i^t < (1 − B)·Θ(N) + Σ_j X^{b_j} + Σ_j U_i^{b_j} − U_i    (26)

Since each body subtask has the highest priority on its host processor, and the total utilization of any processor containing a body subtask is Θ(N), we have

    Σ_{l ∈ [1,B]} X^{b_l} + Σ_{l ∈ [1,B]} U_i^{b_l} = B·Θ(N)    (27)

By (26) and (27) we have

    Y_i^t < Θ(N) − U_i

By applying U_i = C_i/T_i and V_i^t = c_i^t/Δ_i^t to the right-hand side of Y_i^t · T_i/Δ_i^t + V_i^t ≤ (Θ(N) − U_i)·T_i/Δ_i^t + V_i^t, we get

    Y_i^t · T_i/Δ_i^t + V_i^t < Θ(N)·T_i/Δ_i^t − C_i/Δ_i^t + c_i^t/Δ_i^t    (28)

We use c_i^B to denote the sum of the execution times of all of τ_i's body subtasks, so we have c_i^t + c_i^B = C_i and Δ_i^t = T_i − c_i^B. We apply these to the right-hand side of (28) and get

    Y_i^t · T_i/Δ_i^t + V_i^t < (Θ(N)·T_i − c_i^B) / (T_i − c_i^B)    (29)

Since Θ(N) < 1, we have Θ(N)·c_i^B < c_i^B, and therefore

    (Θ(N)·T_i − c_i^B) / (T_i − c_i^B) < (Θ(N)·T_i − Θ(N)·c_i^B) / (T_i − c_i^B) = Θ(N)    (30)

So by Inequalities (29) and (30) we have

    Y_i^t · T_i/Δ_i^t + V_i^t < Θ(N).

D. Utilization Bound

We now know that any task set τ with U(τ) ≤ Θ(N) can be successfully partitioned onto M processors by SPA2 (Lemma 8). In the last subsection, we have shown that under the scheduling algorithm of SPA2 the body subtasks are schedulable, since they always have the highest priority level on their host processors; the non-split tasks are also schedulable, since the utilization on each processor is bounded by Θ(N). The schedulability of the tail subtasks is proved by case distinction, in which the schedulability of the light tail subtasks on normal processors is proved by the same reasoning as for Lemma 6, that of the light tail subtasks on pre-assigned processors by Lemma 12, and that of the heavy tail subtasks by Lemma 13. So we have the following theorem:

Theorem 3. τ is schedulable by SPA2 on M processors if

    U(τ) ≤ Θ(N).

So Θ(N) is the utilization bound of SPA2 for any task set.
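As a usage illustration of the spa2_partition sketch given in Section V-A (with hypothetical task parameters chosen by us, since the numeric columns of Table II are not reproduced in this transcription):

    # Hypothetical (C_i, T_i) values, chosen only to exercise the sketch above
    # with Theta = 0.7 and four processors; they are not the Table II values.
    tasks = [(2.0, 10.0), (6.0, 10.0), (6.0, 10.0), (3.0, 10.0),
             (3.0, 10.0), (6.0, 10.0), (2.0, 10.0)]
    assignment, psi = spa2_partition(tasks, m=4, theta=0.7)
    for q, subtasks in enumerate(assignment, 1):
        print("P%d:" % q, subtasks, "load =", round(psi[q - 1], 2))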

For the same reason as presented at the end of Section IV-C, we can use Θ(N*), where N* is the maximal number of tasks (subtasks) assigned to any processor, to replace Θ(N) in Theorem 3.

E. Task Splitting Overhead

With the algorithms proposed in this paper, a task could be split into more than two subtasks. However, since task splitting only occurs when a processor is full, for any task set that is schedulable by SPA2 the number of task splits is at most M − 1, which is the same as in previous semi-partitioned fixed-priority scheduling algorithms [17], [14], [15], and as shown by the case studies conducted in [17], this overhead can be expected to be negligible on multi-core platforms.

VI. CONCLUSIONS AND FUTURE WORK

In this paper, we have developed a semi-partitioned fixed-priority scheduling algorithm for multiprocessor systems with the well-known Liu and Layland utilization bound for RMS on single processors. The algorithm enjoys the following property: if the utilization bound is used for the schedulability test, and a task set is determined schedulable by fixed-priority scheduling on a single processor of speed M, it is also schedulable by our algorithm on M processors of speed 1 (under the assumption that each task's execution time on the processors of speed 1 is still smaller than its deadline). Note that the utilization bound test is only sufficient but not necessary. As future work, we will address the problem of constructing algorithms holding the same property with respect to exact schedulability analysis.

REFERENCES

[1] J. Anderson, V. Bud, and U. C. Devi. An EDF-based scheduling algorithm for multiprocessor soft real-time systems. In Euromicro Conference on Real-Time Systems (ECRTS).
[2] J. Anderson and A. Srinivasan. Mixed Pfair/ERfair scheduling of asynchronous periodic tasks. In Journal of Computer and System Sciences.
[3] B. Andersson. Global static-priority preemptive multiprocessor scheduling with utilization bound 38%. In International Conference on Principles of Distributed Systems (OPODIS).
[4] B. Andersson, S. Baruah, and J. Jonsson. Static-priority scheduling on multiprocessors. In IEEE Real-Time Systems Symposium (RTSS).
[5] B. Andersson and K. Bletsas. Sporadic multiprocessor scheduling with few preemptions. In Euromicro Conference on Real-Time Systems (ECRTS).
[6] B. Andersson, K. Bletsas, and S. Baruah. Scheduling arbitrary-deadline sporadic task systems on multiprocessors. In IEEE Real-Time Systems Symposium (RTSS), 2008.
[7] B. Andersson and J. Jonsson. The utilization bounds of partitioned and pfair static-priority scheduling on multiprocessors are 50%. In Euromicro Conference on Real-Time Systems (ECRTS).
[8] B. Andersson and E. Tovar. Multiprocessor scheduling with few preemptions. In IEEE Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA).
[9] S. K. Baruah, N. K. Cohen, C. G. Plaxton, and D. A. Varvel. Proportionate progress: A notion of fairness in resource allocation. In Algorithmica.
[10] J. Carpenter, S. Funk, P. Holman, A. Srinivasan, J. Anderson, and S. Baruah. A categorization of real-time multiprocessor scheduling problems and algorithms.
[11] U. Devi and J. Anderson. Tardiness bounds for global EDF scheduling on a multiprocessor. In IEEE Real-Time Systems Symposium (RTSS).
[12] S. Kato and N. Yamasaki. Real-time scheduling with task splitting on multiprocessors. In IEEE Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA).
[13] S. Kato and N. Yamasaki. Portioned EDF-based scheduling on multiprocessors. In International Conference on Embedded Software (EMSOFT).
[14] S. Kato and N. Yamasaki. Portioned static-priority scheduling on multiprocessors. In International Parallel and Distributed Processing Symposium (IPDPS).
[15] S. Kato and N. Yamasaki. Semi-partitioned fixed-priority scheduling on multiprocessors. In IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS).
[16] S. Kato, N. Yamasaki, and Y. Ishikawa. Semi-partitioned scheduling of sporadic task systems on multiprocessors. In Euromicro Conference on Real-Time Systems (ECRTS).
[17] K. Lakshmanan, R. Rajkumar, and J. Lehoczky. Partitioned fixed-priority preemptive scheduling for multi-core processors. In Euromicro Conference on Real-Time Systems (ECRTS).
[18] C. L. Liu and J. W. Layland. Scheduling algorithms for multiprogramming in a hard-real-time environment. In Journal of the ACM.
[19] D. Oh and T. P. Baker. Utilization bounds for N-processor rate monotone scheduling with static processor assignment. In Real-Time Systems.


More information

Lecture Notes on Linear Regression

Lecture Notes on Linear Regression Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume

More information

Calculation of time complexity (3%)

Calculation of time complexity (3%) Problem 1. (30%) Calculaton of tme complexty (3%) Gven n ctes, usng exhaust search to see every result takes O(n!). Calculaton of tme needed to solve the problem (2%) 40 ctes:40! dfferent tours 40 add

More information

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems Numercal Analyss by Dr. Anta Pal Assstant Professor Department of Mathematcs Natonal Insttute of Technology Durgapur Durgapur-713209 emal: anta.bue@gmal.com 1 . Chapter 5 Soluton of System of Lnear Equatons

More information

2.3 Nilpotent endomorphisms

2.3 Nilpotent endomorphisms s a block dagonal matrx, wth A Mat dm U (C) In fact, we can assume that B = B 1 B k, wth B an ordered bass of U, and that A = [f U ] B, where f U : U U s the restrcton of f to U 40 23 Nlpotent endomorphsms

More information

Economics 101. Lecture 4 - Equilibrium and Efficiency

Economics 101. Lecture 4 - Equilibrium and Efficiency Economcs 0 Lecture 4 - Equlbrum and Effcency Intro As dscussed n the prevous lecture, we wll now move from an envronment where we looed at consumers mang decsons n solaton to analyzng economes full of

More information

= z 20 z n. (k 20) + 4 z k = 4

= z 20 z n. (k 20) + 4 z k = 4 Problem Set #7 solutons 7.2.. (a Fnd the coeffcent of z k n (z + z 5 + z 6 + z 7 + 5, k 20. We use the known seres expanson ( n+l ( z l l z n below: (z + z 5 + z 6 + z 7 + 5 (z 5 ( + z + z 2 + z + 5 5

More information

Simultaneous Optimization of Berth Allocation, Quay Crane Assignment and Quay Crane Scheduling Problems in Container Terminals

Simultaneous Optimization of Berth Allocation, Quay Crane Assignment and Quay Crane Scheduling Problems in Container Terminals Smultaneous Optmzaton of Berth Allocaton, Quay Crane Assgnment and Quay Crane Schedulng Problems n Contaner Termnals Necat Aras, Yavuz Türkoğulları, Z. Caner Taşkın, Kuban Altınel Abstract In ths work,

More information

Common loop optimizations. Example to improve locality. Why Dependence Analysis. Data Dependence in Loops. Goal is to find best schedule:

Common loop optimizations. Example to improve locality. Why Dependence Analysis. Data Dependence in Loops. Goal is to find best schedule: 15-745 Lecture 6 Data Dependence n Loops Copyrght Seth Goldsten, 2008 Based on sldes from Allen&Kennedy Lecture 6 15-745 2005-8 1 Common loop optmzatons Hostng of loop-nvarant computatons pre-compute before

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

20. Mon, Oct. 13 What we have done so far corresponds roughly to Chapters 2 & 3 of Lee. Now we turn to Chapter 4. The first idea is connectedness.

20. Mon, Oct. 13 What we have done so far corresponds roughly to Chapters 2 & 3 of Lee. Now we turn to Chapter 4. The first idea is connectedness. 20. Mon, Oct. 13 What we have done so far corresponds roughly to Chapters 2 & 3 of Lee. Now we turn to Chapter 4. The frst dea s connectedness. Essentally, we want to say that a space cannot be decomposed

More information

Edge Isoperimetric Inequalities

Edge Isoperimetric Inequalities November 7, 2005 Ross M. Rchardson Edge Isopermetrc Inequaltes 1 Four Questons Recall that n the last lecture we looked at the problem of sopermetrc nequaltes n the hypercube, Q n. Our noton of boundary

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.65/15.070J Fall 013 Lecture 1 10/1/013 Martngale Concentraton Inequaltes and Applcatons Content. 1. Exponental concentraton for martngales wth bounded ncrements.

More information

Grover s Algorithm + Quantum Zeno Effect + Vaidman

Grover s Algorithm + Quantum Zeno Effect + Vaidman Grover s Algorthm + Quantum Zeno Effect + Vadman CS 294-2 Bomb 10/12/04 Fall 2004 Lecture 11 Grover s algorthm Recall that Grover s algorthm for searchng over a space of sze wors as follows: consder the

More information

Improving the Sensitivity of Deadlines with a Specific Asynchronous Scenario for Harmonic Periodic Tasks scheduled by FP

Improving the Sensitivity of Deadlines with a Specific Asynchronous Scenario for Harmonic Periodic Tasks scheduled by FP Improvng the Senstvty of Deadlnes wth a Specfc Asynchronous Scenaro for Harmonc Perodc Tasks scheduled by FP P. Meumeu Yoms, Y. Sorel, D. de Rauglaudre AOSTE Project-team INRIA Pars-Rocquencourt Le Chesnay,

More information

11 Tail Inequalities Markov s Inequality. Lecture 11: Tail Inequalities [Fa 13]

11 Tail Inequalities Markov s Inequality. Lecture 11: Tail Inequalities [Fa 13] Algorthms Lecture 11: Tal Inequaltes [Fa 13] If you hold a cat by the tal you learn thngs you cannot learn any other way. Mark Twan 11 Tal Inequaltes The smple recursve structure of skp lsts made t relatvely

More information

Lecture 12: Discrete Laplacian

Lecture 12: Discrete Laplacian Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly

More information

E Tail Inequalities. E.1 Markov s Inequality. Non-Lecture E: Tail Inequalities

E Tail Inequalities. E.1 Markov s Inequality. Non-Lecture E: Tail Inequalities Algorthms Non-Lecture E: Tal Inequaltes If you hold a cat by the tal you learn thngs you cannot learn any other way. Mar Twan E Tal Inequaltes The smple recursve structure of sp lsts made t relatvely easy

More information

Keynote: RTNS Getting ones priorities right

Keynote: RTNS Getting ones priorities right Keynote: RTNS 2012 Gettng ones prortes rght Robert Davs Real-Tme Systems Research Group, Unversty of York rob.davs@york.ac.uk What s ths talk about? Fxed Prorty schedulng n all ts guses Pre-emptve, non-pre-emptve,

More information

Finding Dense Subgraphs in G(n, 1/2)

Finding Dense Subgraphs in G(n, 1/2) Fndng Dense Subgraphs n Gn, 1/ Atsh Das Sarma 1, Amt Deshpande, and Rav Kannan 1 Georga Insttute of Technology,atsh@cc.gatech.edu Mcrosoft Research-Bangalore,amtdesh,annan@mcrosoft.com Abstract. Fndng

More information

The Geometry of Logit and Probit

The Geometry of Logit and Probit The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.

More information

Resource Allocation with a Budget Constraint for Computing Independent Tasks in the Cloud

Resource Allocation with a Budget Constraint for Computing Independent Tasks in the Cloud Resource Allocaton wth a Budget Constrant for Computng Independent Tasks n the Cloud Wemng Sh and Bo Hong School of Electrcal and Computer Engneerng Georga Insttute of Technology, USA 2nd IEEE Internatonal

More information

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016 U.C. Berkeley CS94: Spectral Methods and Expanders Handout 8 Luca Trevsan February 7, 06 Lecture 8: Spectral Algorthms Wrap-up In whch we talk about even more generalzatons of Cheeger s nequaltes, and

More information

A new construction of 3-separable matrices via an improved decoding of Macula s construction

A new construction of 3-separable matrices via an improved decoding of Macula s construction Dscrete Optmzaton 5 008 700 704 Contents lsts avalable at ScenceDrect Dscrete Optmzaton journal homepage: wwwelsevercom/locate/dsopt A new constructon of 3-separable matrces va an mproved decodng of Macula

More information

Maximizing the number of nonnegative subsets

Maximizing the number of nonnegative subsets Maxmzng the number of nonnegatve subsets Noga Alon Hao Huang December 1, 213 Abstract Gven a set of n real numbers, f the sum of elements of every subset of sze larger than k s negatve, what s the maxmum

More information

CONTRAST ENHANCEMENT FOR MIMIMUM MEAN BRIGHTNESS ERROR FROM HISTOGRAM PARTITIONING INTRODUCTION

CONTRAST ENHANCEMENT FOR MIMIMUM MEAN BRIGHTNESS ERROR FROM HISTOGRAM PARTITIONING INTRODUCTION CONTRAST ENHANCEMENT FOR MIMIMUM MEAN BRIGHTNESS ERROR FROM HISTOGRAM PARTITIONING N. Phanthuna 1,2, F. Cheevasuvt 2 and S. Chtwong 2 1 Department of Electrcal Engneerng, Faculty of Engneerng Rajamangala

More information

MMA and GCMMA two methods for nonlinear optimization

MMA and GCMMA two methods for nonlinear optimization MMA and GCMMA two methods for nonlnear optmzaton Krster Svanberg Optmzaton and Systems Theory, KTH, Stockholm, Sweden. krlle@math.kth.se Ths note descrbes the algorthms used n the author s 2007 mplementatons

More information

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:

More information

Quantifying the Sub-optimality of Uniprocessor Fixed Priority Non-Pre-emptive Scheduling

Quantifying the Sub-optimality of Uniprocessor Fixed Priority Non-Pre-emptive Scheduling Quantfyng the Sub-optmalty of Unprocessor Fxed Prorty Non-Pre-emptve Schedulng Robert I Davs Real-Tme Systems Research Group, Department of Computer Scence, Unversty of York, York, UK robdavs@csyorkacuk

More information

The Minimum Universal Cost Flow in an Infeasible Flow Network

The Minimum Universal Cost Flow in an Infeasible Flow Network Journal of Scences, Islamc Republc of Iran 17(2): 175-180 (2006) Unversty of Tehran, ISSN 1016-1104 http://jscencesutacr The Mnmum Unversal Cost Flow n an Infeasble Flow Network H Saleh Fathabad * M Bagheran

More information

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE Analytcal soluton s usually not possble when exctaton vares arbtrarly wth tme or f the system s nonlnear. Such problems can be solved by numercal tmesteppng

More information

APPENDIX A Some Linear Algebra

APPENDIX A Some Linear Algebra APPENDIX A Some Lnear Algebra The collecton of m, n matrces A.1 Matrces a 1,1,..., a 1,n A = a m,1,..., a m,n wth real elements a,j s denoted by R m,n. If n = 1 then A s called a column vector. Smlarly,

More information

5 The Rational Canonical Form

5 The Rational Canonical Form 5 The Ratonal Canoncal Form Here p s a monc rreducble factor of the mnmum polynomal m T and s not necessarly of degree one Let F p denote the feld constructed earler n the course, consstng of all matrces

More information

Kernel Methods and SVMs Extension

Kernel Methods and SVMs Extension Kernel Methods and SVMs Extenson The purpose of ths document s to revew materal covered n Machne Learnng 1 Supervsed Learnng regardng support vector machnes (SVMs). Ths document also provdes a general

More information

ANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U)

ANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U) Econ 413 Exam 13 H ANSWERS Settet er nndelt 9 deloppgaver, A,B,C, som alle anbefales å telle lkt for å gøre det ltt lettere å stå. Svar er gtt . Unfortunately, there s a prntng error n the hnt of

More information

Chapter - 2. Distribution System Power Flow Analysis

Chapter - 2. Distribution System Power Flow Analysis Chapter - 2 Dstrbuton System Power Flow Analyss CHAPTER - 2 Radal Dstrbuton System Load Flow 2.1 Introducton Load flow s an mportant tool [66] for analyzng electrcal power system network performance. Load

More information

Feature Selection: Part 1

Feature Selection: Part 1 CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?

More information

Assortment Optimization under MNL

Assortment Optimization under MNL Assortment Optmzaton under MNL Haotan Song Aprl 30, 2017 1 Introducton The assortment optmzaton problem ams to fnd the revenue-maxmzng assortment of products to offer when the prces of products are fxed.

More information

Physics 5153 Classical Mechanics. Principle of Virtual Work-1

Physics 5153 Classical Mechanics. Principle of Virtual Work-1 P. Guterrez 1 Introducton Physcs 5153 Classcal Mechancs Prncple of Vrtual Work The frst varatonal prncple we encounter n mechancs s the prncple of vrtual work. It establshes the equlbrum condton of a mechancal

More information

Parallel Real-Time Scheduling of DAGs

Parallel Real-Time Scheduling of DAGs Washngton Unversty n St. Lous Washngton Unversty Open Scholarshp All Computer Scence and Engneerng Research Computer Scence and Engneerng Report Number: WUCSE-013-5 013 Parallel Real-Tme Schedulng of DAGs

More information

Lecture 14: Bandits with Budget Constraints

Lecture 14: Bandits with Budget Constraints IEOR 8100-001: Learnng and Optmzaton for Sequental Decson Makng 03/07/16 Lecture 14: andts wth udget Constrants Instructor: Shpra Agrawal Scrbed by: Zhpeng Lu 1 Problem defnton In the regular Mult-armed

More information

Worst-case response time analysis of real-time tasks under fixed-priority scheduling with deferred preemption

Worst-case response time analysis of real-time tasks under fixed-priority scheduling with deferred preemption Real-Tme Syst (2009) 42: 63 119 DOI 10.1007/s11241-009-9071-z Worst-case response tme analyss of real-tme tasks under fxed-prorty schedulng wth deferred preempton Render J. Brl Johan J. Lukken Wm F.J.

More information

Singular Value Decomposition: Theory and Applications

Singular Value Decomposition: Theory and Applications Sngular Value Decomposton: Theory and Applcatons Danel Khashab Sprng 2015 Last Update: March 2, 2015 1 Introducton A = UDV where columns of U and V are orthonormal and matrx D s dagonal wth postve real

More information

Example: (13320, 22140) =? Solution #1: The divisors of are 1, 2, 3, 4, 5, 6, 9, 10, 12, 15, 18, 20, 27, 30, 36, 41,

Example: (13320, 22140) =? Solution #1: The divisors of are 1, 2, 3, 4, 5, 6, 9, 10, 12, 15, 18, 20, 27, 30, 36, 41, The greatest common dvsor of two ntegers a and b (not both zero) s the largest nteger whch s a common factor of both a and b. We denote ths number by gcd(a, b), or smply (a, b) when there s no confuson

More information

Finding Primitive Roots Pseudo-Deterministically

Finding Primitive Roots Pseudo-Deterministically Electronc Colloquum on Computatonal Complexty, Report No 207 (205) Fndng Prmtve Roots Pseudo-Determnstcally Ofer Grossman December 22, 205 Abstract Pseudo-determnstc algorthms are randomzed search algorthms

More information

The Schedulability Region of Two-Level Mixed-Criticality Systems based on EDF-VD

The Schedulability Region of Two-Level Mixed-Criticality Systems based on EDF-VD The Schedulablty Regon of Two-Level Mxed-Crtcalty Systems based on EDF-VD Drk Müller and Alejandro Masrur Department of Computer Scence TU Chemntz, Germany Abstract The algorthm Earlest Deadlne Frst wth

More information

On the correction of the h-index for career length

On the correction of the h-index for career length 1 On the correcton of the h-ndex for career length by L. Egghe Unverstet Hasselt (UHasselt), Campus Depenbeek, Agoralaan, B-3590 Depenbeek, Belgum 1 and Unverstet Antwerpen (UA), IBW, Stadscampus, Venusstraat

More information

Volume 18 Figure 1. Notation 1. Notation 2. Observation 1. Remark 1. Remark 2. Remark 3. Remark 4. Remark 5. Remark 6. Theorem A [2]. Theorem B [2].

Volume 18 Figure 1. Notation 1. Notation 2. Observation 1. Remark 1. Remark 2. Remark 3. Remark 4. Remark 5. Remark 6. Theorem A [2]. Theorem B [2]. Bulletn of Mathematcal Scences and Applcatons Submtted: 016-04-07 ISSN: 78-9634, Vol. 18, pp 1-10 Revsed: 016-09-08 do:10.1805/www.scpress.com/bmsa.18.1 Accepted: 016-10-13 017 ScPress Ltd., Swtzerland

More information

ELASTIC WAVE PROPAGATION IN A CONTINUOUS MEDIUM

ELASTIC WAVE PROPAGATION IN A CONTINUOUS MEDIUM ELASTIC WAVE PROPAGATION IN A CONTINUOUS MEDIUM An elastc wave s a deformaton of the body that travels throughout the body n all drectons. We can examne the deformaton over a perod of tme by fxng our look

More information

Single-Facility Scheduling over Long Time Horizons by Logic-based Benders Decomposition

Single-Facility Scheduling over Long Time Horizons by Logic-based Benders Decomposition Sngle-Faclty Schedulng over Long Tme Horzons by Logc-based Benders Decomposton Elvn Coban and J. N. Hooker Tepper School of Busness, Carnege Mellon Unversty ecoban@andrew.cmu.edu, john@hooker.tepper.cmu.edu

More information

Canonical transformations

Canonical transformations Canoncal transformatons November 23, 2014 Recall that we have defned a symplectc transformaton to be any lnear transformaton M A B leavng the symplectc form nvarant, Ω AB M A CM B DΩ CD Coordnate transformatons,

More information

Complete subgraphs in multipartite graphs

Complete subgraphs in multipartite graphs Complete subgraphs n multpartte graphs FLORIAN PFENDER Unverstät Rostock, Insttut für Mathematk D-18057 Rostock, Germany Floran.Pfender@un-rostock.de Abstract Turán s Theorem states that every graph G

More information

Two-Phase Low-Energy N-Modular Redundancy for Hard Real-Time Multi-Core Systems

Two-Phase Low-Energy N-Modular Redundancy for Hard Real-Time Multi-Core Systems 1 Two-Phase Low-Energy N-Modular Redundancy for Hard Real-Tme Mult-Core Systems Mohammad Saleh, Alreza Ejlal, and Bashr M. Al-Hashm, Fellow, IEEE Abstract Ths paper proposes an N-modular redundancy (NMR)

More information

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results.

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results. Neural Networks : Dervaton compled by Alvn Wan from Professor Jtendra Malk s lecture Ths type of computaton s called deep learnng and s the most popular method for many problems, such as computer vson

More information

A CLASS OF RECURSIVE SETS. Florentin Smarandache University of New Mexico 200 College Road Gallup, NM 87301, USA

A CLASS OF RECURSIVE SETS. Florentin Smarandache University of New Mexico 200 College Road Gallup, NM 87301, USA A CLASS OF RECURSIVE SETS Florentn Smarandache Unversty of New Mexco 200 College Road Gallup, NM 87301, USA E-mal: smarand@unmedu In ths artcle one bulds a class of recursve sets, one establshes propertes

More information

A 2D Bounded Linear Program (H,c) 2D Linear Programming

A 2D Bounded Linear Program (H,c) 2D Linear Programming A 2D Bounded Lnear Program (H,c) h 3 v h 8 h 5 c h 4 h h 6 h 7 h 2 2D Lnear Programmng C s a polygonal regon, the ntersecton of n halfplanes. (H, c) s nfeasble, as C s empty. Feasble regon C s unbounded

More information

O-line Temporary Tasks Assignment. Abstract. In this paper we consider the temporary tasks assignment

O-line Temporary Tasks Assignment. Abstract. In this paper we consider the temporary tasks assignment O-lne Temporary Tasks Assgnment Yoss Azar and Oded Regev Dept. of Computer Scence, Tel-Avv Unversty, Tel-Avv, 69978, Israel. azar@math.tau.ac.l??? Dept. of Computer Scence, Tel-Avv Unversty, Tel-Avv, 69978,

More information

MODELING TRAFFIC LIGHTS IN INTERSECTION USING PETRI NETS

MODELING TRAFFIC LIGHTS IN INTERSECTION USING PETRI NETS The 3 rd Internatonal Conference on Mathematcs and Statstcs (ICoMS-3) Insttut Pertanan Bogor, Indonesa, 5-6 August 28 MODELING TRAFFIC LIGHTS IN INTERSECTION USING PETRI NETS 1 Deky Adzkya and 2 Subono

More information

Min Cut, Fast Cut, Polynomial Identities

Min Cut, Fast Cut, Polynomial Identities Randomzed Algorthms, Summer 016 Mn Cut, Fast Cut, Polynomal Identtes Instructor: Thomas Kesselhem and Kurt Mehlhorn 1 Mn Cuts n Graphs Lecture (5 pages) Throughout ths secton, G = (V, E) s a mult-graph.

More information

Appendix B. The Finite Difference Scheme

Appendix B. The Finite Difference Scheme 140 APPENDIXES Appendx B. The Fnte Dfference Scheme In ths appendx we present numercal technques whch are used to approxmate solutons of system 3.1 3.3. A comprehensve treatment of theoretcal and mplementaton

More information

Learning Theory: Lecture Notes

Learning Theory: Lecture Notes Learnng Theory: Lecture Notes Lecturer: Kamalka Chaudhur Scrbe: Qush Wang October 27, 2012 1 The Agnostc PAC Model Recall that one of the constrants of the PAC model s that the data dstrbuton has to be

More information

Online Appendix: Reciprocity with Many Goods

Online Appendix: Reciprocity with Many Goods T D T A : O A Kyle Bagwell Stanford Unversty and NBER Robert W. Stager Dartmouth College and NBER March 2016 Abstract Ths onlne Appendx extends to a many-good settng the man features of recprocty emphaszed

More information

Outline. Communication. Bellman Ford Algorithm. Bellman Ford Example. Bellman Ford Shortest Path [1]

Outline. Communication. Bellman Ford Algorithm. Bellman Ford Example. Bellman Ford Shortest Path [1] DYNAMIC SHORTEST PATH SEARCH AND SYNCHRONIZED TASK SWITCHING Jay Wagenpfel, Adran Trachte 2 Outlne Shortest Communcaton Path Searchng Bellmann Ford algorthm Algorthm for dynamc case Modfcatons to our algorthm

More information

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg prnceton unv. F 17 cos 521: Advanced Algorthm Desgn Lecture 7: LP Dualty Lecturer: Matt Wenberg Scrbe: LP Dualty s an extremely useful tool for analyzng structural propertes of lnear programs. Whle there

More information

The Second Anti-Mathima on Game Theory

The Second Anti-Mathima on Game Theory The Second Ant-Mathma on Game Theory Ath. Kehagas December 1 2006 1 Introducton In ths note we wll examne the noton of game equlbrum for three types of games 1. 2-player 2-acton zero-sum games 2. 2-player

More information

χ x B E (c) Figure 2.1.1: (a) a material particle in a body, (b) a place in space, (c) a configuration of the body

χ x B E (c) Figure 2.1.1: (a) a material particle in a body, (b) a place in space, (c) a configuration of the body Secton.. Moton.. The Materal Body and Moton hyscal materals n the real world are modeled usng an abstract mathematcal entty called a body. Ths body conssts of an nfnte number of materal partcles. Shown

More information

arxiv: v1 [math.ho] 18 May 2008

arxiv: v1 [math.ho] 18 May 2008 Recurrence Formulas for Fbonacc Sums Adlson J. V. Brandão, João L. Martns 2 arxv:0805.2707v [math.ho] 8 May 2008 Abstract. In ths artcle we present a new recurrence formula for a fnte sum nvolvng the Fbonacc

More information