Parametric Utilization Bounds for Fixed-Priority Multiprocessor Scheduling

2012 IEEE 26th International Parallel and Distributed Processing Symposium

Nan Guan 1,2, Martin Stigge 1, Wang Yi 1,2 and Ge Yu 2
1 Uppsala University, Sweden; 2 Northeastern University, China

Abstract: Future embedded real-time systems will be deployed on multi-core processors to meet the dramatically increasing high-performance and low-power requirements. This trend appeals to generalize established results on uni-processor scheduling, particularly the various utilization bounds for schedulability tests used in system design, to the multiprocessor setting. Recently, this has been achieved for the famous Liu and Layland utilization bound by applying novel task splitting techniques. However, parametric utilization bounds that can guarantee higher utilizations (up to 100%) for common classes of systems are not yet known to be generalizable to multiprocessors as well. In this paper, we solve this problem for most parametric utilization bounds by proposing new task partitioning algorithms based on exact response time analysis. In addition to the worst-case guarantees, as the exact response time analysis is used for task partitioning, our algorithms significantly improve average-case utilization over previous work.

I. INTRODUCTION

It has been widely accepted that future embedded real-time systems will be deployed on multi-core processors, to satisfy the dramatically increasing high-performance and low-power requirements. This trend demands effective and efficient techniques for the design and analysis of real-time systems on multi-cores. A central problem in real-time system design is timing analysis, which examines whether the system can meet all the specified timing requirements. Timing analysis usually consists of two steps: task-level timing analysis, which for example calculates the worst-case execution time of each task independently, and system-level timing analysis (also called schedulability analysis), which determines whether all the tasks can co-exist in the system and still meet all the timing requirements.

One of the most commonly used schedulability analysis approaches is based on the utilization bound, which is a safe threshold on the system's workload: under this threshold the system is guaranteed to meet all the timing requirements. Utilization-bound-based schedulability analysis is very efficient, and is especially suitable for embedded system design flows involving iterative design space exploration procedures. A well-known utilization bound is the N(2^{1/N} - 1) bound for RMS (Rate Monotonic Scheduling) on uni-processors, discovered by Liu and Layland in the 1970s [25]. Recently, this bound has been generalized to multiprocessor scheduling by a partitioning-based algorithm [16].

The Liu and Layland utilization bound (L&L bound for short) is pessimistic: there are a significant number of task systems that exceed the L&L bound but are indeed schedulable. This means that system resources would be considerably under-utilized if one only relies on the L&L bound in system design. If more information about the task system is available in the design phase, it is possible to derive higher parametric utilization bounds regarding known task parameters. A well-known example of parametric utilization bounds is the 100% bound for harmonic task sets [26]: if the total utilization of a harmonic task set τ is no greater than 100%, then every task in τ can meet its deadline under RMS on a uni-processor platform. Even if the whole task system is not harmonic, one can still obtain a significantly higher bound by exploring the harmonic chains of the task system [21].
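To make the role of such bounds concrete, the following minimal sketch (in Python; not part of the paper, and the task values and helper names are made up for illustration) performs a utilization-bound schedulability test on a uni-processor, once with the L&L bound and once with the 100% bound for harmonic task sets:

    def ll_bound(n: int) -> float:
        """Liu & Layland bound N * (2^(1/N) - 1) for a set of N tasks."""
        return n * (2 ** (1.0 / n) - 1)

    def schedulable_by_bound(tasks, bound):
        """tasks: list of (C, T) pairs; accept iff the total utilization is within the bound."""
        return sum(C / T for (C, T) in tasks) <= bound

    tasks = [(1, 4), (1, 8), (4, 16)]   # harmonic periods 4, 8, 16; total utilization 0.625
    print(schedulable_by_bound(tasks, ll_bound(len(tasks))))   # True, bound ~ 0.78
    print(schedulable_by_bound(tasks, 1.0))                     # True, 100% bound for harmonic sets

A harmonic set with total utilization 0.9, for instance, would fail the first (L&L) test but still be accepted by the 100% bound, which is exactly the gap that parametric bounds exploit.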
In general, during system design, it is usually possible to employ higher utilization bounds with available task parameter information to better utilize the resources and decrease the system cost. As will be introduced in Section III, quite a number of higher parametric utilization bounds regarding different task parameter information have been derived for uni-processor scheduling. This naturally raises an interesting question: can we generalize these higher parametric utilization bounds derived for uni-processor scheduling to multiprocessors? For example, given a harmonic task system, can we guarantee the schedulability of the task system on a multiprocessor platform of M processors, if the utilization sum of all tasks is no larger than M?

In this paper, we address the above question by proposing new RMS-based partitioned scheduling algorithms (with task splitting). Generalizing the parametric utilization bounds from uni-processors to multiprocessors is challenging, even with the insights from our previous work generalizing the L&L bound to multiprocessor scheduling. The reason is that task splitting may create new tasks that do not comply with the parameter properties of the original task set, and thus invalidate the parametric utilization bound specific to the original task set's parameter properties. Section III presents this problem in detail. The main contribution of this paper is a solution to this problem, which generalizes most of the parametric utilization bounds to multiprocessors. The approach of this paper is generic in the sense that it works irrespective of the form of the parametric utilization bound in consideration.

The only restriction is a threshold on the parametric utilization bound value when some task has a large individual utilization; apart from that, any parametric utilization bound derived for single-processor RMS can be used to guarantee the schedulability of multiprocessor systems via our algorithms. More specifically, we first propose an algorithm generalizing all known parametric utilization bounds for RMS to multiprocessors, for a class of light task sets in which each task's individual utilization is at most Θ/(1+Θ), where Θ = N(2^{1/N} - 1) is the L&L bound for task set τ. Then we propose a second algorithm that works for any task set and all parametric utilization bounds under the threshold 2Θ/(1+Θ). [Footnote 1: When N goes to infinity, Θ ≈ 69.3%, Θ/(1+Θ) ≈ 40.9% and 2Θ/(1+Θ) ≈ 81.8%.]

Besides the improved utilization bounds, another advantage of our new algorithms is the significantly improved average-case performance. Although the algorithm in [16] can achieve the L&L bound, it has the problem that it never utilizes more than the worst-case bound. The new algorithms in this paper use exact analysis, i.e., Response Time Analysis (RTA), instead of the utilization bound threshold as in the algorithm of [16], to determine the maximal workload on each processor. It is well known that on uni-processors, by exact schedulability analysis, the average breakdown utilization of RMS is around 88% [24], which is much higher than its worst-case utilization bound of 69.3%. Similarly, our new algorithms have much better performance than the algorithm in [16].

Related Work: Multiprocessor scheduling is usually categorized into two paradigms [11]: global scheduling, where each task can execute on any available processor at run time, and partitioned scheduling, where each task is assigned to a processor beforehand, and at run time each task only executes on its assigned processor. Global scheduling on average utilizes the resources better. However, the standard RMS and EDF global scheduling strategies suffer from the Dhall effect [14], which may cause a task system with arbitrarily low utilization to be unschedulable. Although the Dhall effect can be mitigated by, e.g., assigning higher priorities to tasks with higher utilizations as in RM-US [4], the best known utilization bound of global scheduling is still quite low: 38% for fixed-priority scheduling [3] and 50% for EDF-based scheduling [7]. On the other hand, partitioned scheduling suffers from resource waste similar to the bin-packing problem: the worst-case utilization bound of any partitioned scheduling cannot exceed 50%. Although there exist scheduling algorithms like the Pfair family [10], [2], the LLREF family [13], [15] and the EKG family [5], [6], offering utilization bounds up to 100%, these algorithms incur much higher context-switch overhead than priority-driven scheduling, which is unacceptable in many real-life systems.

Recently, a number of works [1], [6], [5], [17], [18], [19], [20], [22], [16] have studied partitioned scheduling with task splitting, which can overcome the 50% limit of strict partitioned scheduling. In this class of scheduling algorithms, while most tasks are assigned to a fixed processor, some tasks may be (sequentially) divided into several parts, and each part is assigned to and thereby executed on a different (but fixed) processor. In this category, the utilization bound of the state-of-the-art EDF-based algorithm is 65% [17], and our recent work [16] has achieved the L&L bound (in the worst case 69.3%) for fixed-priority based algorithms.

II. BASIC CONCEPTS

We consider a multiprocessor platform consisting of M processors P = {P_1, P_2, ..., P_M}.
A task set τ = {τ_1, τ_2, ..., τ_N} complies with the L&L task model: each task τ_i is a 2-tuple ⟨C_i, T_i⟩, where C_i is the worst-case execution time and T_i is the minimal inter-release separation (also called period). T_i is also τ_i's relative deadline. We use the RMS strategy to assign priorities: tasks with shorter periods have higher priorities. Without loss of generality we sort tasks in non-decreasing period order, and can therefore use the task indices to represent task priorities, i.e., i < j implies that τ_i has higher priority than τ_j. The utilization of each task τ_i is defined as U_i = C_i / T_i, and the total utilization of task set τ is U(τ) = Σ_{i=1}^{N} U_i. We further define the normalized utilization of a task set τ on a multiprocessor platform with M processors:

  U_M(τ) = Σ_{τ_i ∈ τ} U_i / M

Note that the subscript M in U_M(τ) reminds us that the sum of all tasks' utilizations is divided by the number of processors M.

A partitioned scheduling algorithm (with task splitting) consists of two parts: the partitioning algorithm, which determines how to split and assign each task (or rather each of its parts) to a fixed processor, and the scheduling algorithm, which determines how to schedule the tasks assigned to each processor at run time. With the partitioning algorithm, most tasks are assigned to one processor (and thereby will only execute on this processor at run time). We call these tasks non-split tasks. The other tasks are called split tasks, since they are split into several subtasks. Each subtask of a split task τ_i is assigned to (and thereby executes on) a different processor, and the sum of the execution times of all subtasks equals C_i. For example, in Figure 1 task τ_i is split into three subtasks τ_i^1, τ_i^2 and τ_i^3, executing on processors P_1, P_2 and P_3, respectively.

The subtasks of a task need to be synchronized to execute correctly. For example, in Figure 1, τ_i^2 should not start execution until τ_i^1 is finished. This equals deferring the actual ready time of τ_i^2 by up to R_i^1 (relative to τ_i's original release time), where R_i^1 is τ_i^1's worst-case response time. One can regard this as shortening the actual relative deadline of τ_i^2 by up to R_i^1. Similarly, the actual ready time of τ_i^3 is deferred by up to R_i^1 + R_i^2, and τ_i^3's actual relative deadline is shortened by up to R_i^1 + R_i^2. We use τ_i^k to denote the k-th subtask of a split task τ_i, and define τ_i^k's synthetic deadline as

  Δ_i^k = T_i - Σ_{l ∈ [1, k-1]} R_i^l    (1)
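The following small sketch (illustrative Python with an assumed data layout of (C, T) pairs; it is not from the paper) evaluates the definitions above, including the synthetic deadline of Equation (1):

    def normalized_utilization(tasks, M):
        """U_M(tau): total utilization of all tasks divided by the number of processors M."""
        return sum(C / T for (C, T) in tasks) / M

    def synthetic_deadline(T, preceding_response_times):
        """Equation (1): Delta_i^k = T_i minus the response times R_i^l of subtasks l = 1..k-1."""
        return T - sum(preceding_response_times)

    # A split task with period 10 whose first two (body) subtasks have response
    # times 2 and 3: its third subtask gets the synthetic deadline 10 - (2 + 3) = 5.
    print(synthetic_deadline(10, [2, 3]))                         # 5
    print(normalized_utilization([(1, 4), (2, 10), (3, 6)], 2))   # (0.25 + 0.2 + 0.5) / 2 = 0.475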

Fig. 1. An Illustration of Task Splitting.

Thus, we represent each subtask τ_i^k by a 3-tuple ⟨C_i^k, T_i, Δ_i^k⟩, in which C_i^k is the execution time of τ_i^k, T_i is the original period and Δ_i^k is the synthetic deadline. For consistency, each non-split task τ_i can be represented by a single subtask τ_i^1 with C_i^1 = C_i and Δ_i^1 = T_i. We use U_i^k = C_i^k / T_i to denote a subtask τ_i^k's utilization. We call the last subtask of τ_i its tail subtask, denoted by τ_i^t, and the other subtasks its body subtasks, as shown in Figure 1. We use τ_i^{b_j} to denote the j-th body subtask.

We use τ(P_q) to denote the set of tasks τ_i assigned to processor P_q, and say P_q is the host processor of τ_i. We use U(P_q) to denote the sum of the utilizations of all tasks in τ(P_q). A task set τ is schedulable under a partitioned scheduling algorithm A if (i) each task (subtask) has been assigned to some processor by A's partitioning algorithm, and (ii) each task (subtask) is guaranteed to meet its deadline under A's scheduling algorithm.

III. PARAMETRIC UTILIZATION BOUNDS

On uni-processors, a Parametric Utilization Bound (PUB for short) Λ(τ) for a task set τ is the result of applying a function Λ(·) to τ's task parameters, such that all the tasks in τ are guaranteed to meet their deadlines on a uni-processor if τ's total utilization satisfies U(τ) ≤ Λ(τ). We can overload this concept for multiprocessor scheduling by using τ's normalized utilization U_M(τ) instead of U(τ). There have been several PUBs derived for RMS on uni-processors. The following are some examples:

- The famous L&L bound, denoted by Θ, is a PUB regarding the number of tasks N: Θ = N(2^{1/N} - 1).
- The harmonic chain bound: HC-Bound(τ) = K(2^{1/K} - 1) [21], where K is the number of harmonic chains in the task set. The 100% bound for harmonic task sets is a special case of the harmonic chain bound with K = 1.
- T-Bound(τ) [23] is a PUB regarding the number of tasks and the task periods: T-Bound(τ) = Σ_{i=1}^{N-1} (T'_{i+1} / T'_i) + 2 · (T'_1 / T'_N) - N, where T'_i is τ_i's scaled period [23].
- R-Bound(τ) [23] is similar to T-Bound(τ), but uses a more abstract parameter r, the ratio between the maximum and minimum scaled periods of the task set: R-Bound(τ) = (N - 1)(r^{1/(N-1)} - 1) + 2/r - 1.
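The harmonic chain bound in particular is easy to evaluate (an illustrative Python sketch, not from the paper; the chain count K is taken as given, since finding a minimal harmonic-chain partition is a separate problem treated in [21]):

    def hc_bound(k: int) -> float:
        """Harmonic chain bound K * (2^(1/K) - 1); K = 1 gives the 100% bound."""
        return k * (2 ** (1.0 / k) - 1)

    def is_harmonic(periods) -> bool:
        """A set of integer periods is harmonic if each period divides every longer one."""
        ps = sorted(periods)
        return all(ps[i + 1] % ps[i] == 0 for i in range(len(ps) - 1))

    print(hc_bound(1), hc_bound(2), hc_bound(3))              # 1.0, ~0.828, ~0.779
    print(is_harmonic([4, 8, 16]), is_harmonic([4, 6, 12]))   # True, False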
We observe that all the above PUBs have the following property: for any τ' obtained by decreasing the execution times of some tasks of τ, the bound Λ(τ) is still a valid utilization bound to guarantee the schedulability of τ'. We call a PUB holding this property a deflatable parametric utilization bound (D-PUB for short). [Footnote 2: There is a subtle difference between the deflatable property and the (self-)sustainable property [9], [8]. The deflatable property does not require the original task set τ to satisfy U(τ) ≤ Λ(τ). U(τ) is typically larger than 100% since τ will be scheduled on M processors. Λ(τ) is merely a value obtained by applying the function Λ(·) to τ's parameters, and will be applied to each individual processor.] We use the following lemma to precisely describe this property:

Lemma 1. Let Λ(τ) be a D-PUB derived from the task set τ. We decrease the execution times of some tasks in τ to get a new task set τ'. If τ' satisfies U(τ') ≤ Λ(τ), then it is guaranteed to be schedulable by RMS on a uni-processor.

The deflatable property is very common: actually, all the PUBs we are aware of are deflatable, including the ones listed above and the non-closed-form bounds in [12]. The deflatable property is of great relevance in partitioned multiprocessor scheduling, since a task set τ will be partitioned into several subsets and each subset is executed on a processor individually. Further, due to the task splitting, a task could be divided into several subtasks, each of which holds a portion of the execution demand of the original task. So the deflatable property is clearly required to generalize a utilization bound to multiprocessors.

However, the deflatable property by itself is not sufficient for the generalization of a PUB Λ(τ) to multiprocessors. For example, suppose the harmonic task set τ in Figure 2-(a) is partitioned as in Figure 2-(b), where τ_2 is split into τ_2^1 and τ_2^2. To correctly execute τ_2, τ_2^1 and τ_2^2 need to be synchronized such that τ_2^2 never starts execution before its predecessor τ_2^1 is finished. This can be viewed as shortening τ_2^2's relative deadline by a certain amount of time from τ_2's original deadline, as shown in Figure 2-(c). In this case, τ_2^2 does not comply with the L&L task model (which requires the relative deadline to equal the period), so none of the parametric utilization bounds for the L&L task model are applicable to processor P_2.

Fig. 2. Partitioning a harmonic task set results in a non-harmonic task set on some processor.

In [16], this problem is solved by representing τ_2^2's period by its relative deadline, as shown in Figure 2-(d). This transforms the task set {τ_1, τ_2^2} into an L&L task set, with which we can apply the L&L bound. However, this solution does not in general work for other parametric utilization bounds: in our example, we still want to apply the 100% bound, which is specific to harmonic task sets. But if we use τ_2^2's deadline of 6 to represent its period, the task set {τ_1, τ_2^2} is not harmonic, so the 100% bound is not applicable. This problem will be solved by our new algorithms and novel proof techniques in the following sections.

IV. THE ALGORITHM FOR LIGHT TASKS

In the following we introduce the first algorithm, RM-TS/light, which achieves Λ(τ) (any D-PUB derived from τ's parameters) if τ is light in the sense of an upper bound on each task's individual utilization, as follows.

Definition 1. A task τ_i is a light task if U_i ≤ Θ/(1+Θ), where Θ denotes the L&L bound. Otherwise, τ_i is a heavy task. A task set τ is a light task set if all tasks in τ are light.

Θ/(1+Θ) is about 40.9% as the number of tasks in τ grows to infinity. For example, we can instantiate this result with the 100% utilization bound for harmonic task sets: let τ be any harmonic task set in which each task's individual utilization is no larger than 40.9%. Then τ is schedulable by our algorithm RM-TS/light on M processors if its normalized total utilization U_M(τ) is no larger than 100%.

A. Algorithm Description

The partitioning algorithm of RM-TS/light is quite simple. We describe it briefly as follows:
1) Tasks are assigned in increasing priority order. We always select the processor on which the total utilization of the tasks that have been assigned so far is minimal among all processors.
2) A task (subtask) can be entirely assigned to the current processor if all tasks (including the one to be assigned) on this processor can meet their deadlines under RMS.
3) When a task (subtask) cannot be assigned entirely to the current processor, we split it into two parts. [Footnote 3: In general a task may be split into more than two subtasks. Here we mean that at each step the currently selected task (subtask) is split into two parts.] The first part is assigned to the current processor. The splitting is done such that the portion of the first part is as big as possible while guaranteeing that no task on this processor misses its deadline under RMS; the second part is left for assignment in the next step.

Note that the difference between RM-TS/light and the algorithm in [16] is that RM-TS/light uses exact response time analysis, instead of the utilization threshold, to determine whether a (sub)task can fit on a processor without causing a deadline miss.

Algorithms 1 and 2 describe the partitioning algorithm of RM-TS/light in pseudo-code. At the beginning, tasks are sorted (and will therefore be assigned) in increasing priority order, and all processors are marked as non-full, which means they can still accept more tasks. At each step, we pick the next task in order (the one with the lowest priority), select the processor with the minimal total utilization of tasks assigned so far, and invoke the routine Assign(τ_i^k, P_q) to do the task assignment. Assign(τ_i^k, P_q) first verifies that after assigning the task, all tasks on that processor would still be schedulable under RMS. This is done by applying exact schedulability analysis: calculating the response time R_j^h of each (sub)task τ_j^h on P_q after assigning this new task τ_i^k, and comparing R_j^h to its (synthetic) deadline Δ_j^h. If the response time does not exceed the synthetic deadline for any of the tasks on P_q, we can conclude that τ_i^k can safely be assigned to P_q without causing any deadline miss.
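The schedulability check that Assign relies on can be sketched with the standard response-time fixed-point iteration (a minimal illustration, not the paper's implementation; the (C, T, Δ) tuple layout is an assumption of this sketch):

    import math

    def response_time(C, higher_prio, deadline):
        """Smallest fixed point of R = C + sum(ceil(R/T_j) * C_j) over the
        higher-priority (sub)tasks on the same processor, or None if it
        exceeds the given (synthetic) deadline."""
        R = C
        while True:
            R_next = C + sum(math.ceil(R / T_j) * C_j for (C_j, T_j) in higher_prio)
            if R_next > deadline:
                return None          # deadline miss
            if R_next == R:
                return R             # converged
            R = R_next

    def processor_schedulable(subtasks):
        """subtasks: list of (C, T, delta) on one processor, sorted by decreasing
        priority (i.e., increasing period). Every (sub)task must meet its
        synthetic deadline delta under RMS."""
        for k, (C, T, delta) in enumerate(subtasks):
            if response_time(C, [(c, t) for (c, t, _) in subtasks[:k]], delta) is None:
                return False
        return True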
Note that a subtask's synthetic deadline Δ_j^k may be different from its period T_j. After presenting how the overall partitioning algorithm works, we will show how to calculate Δ_j^k easily.

Algorithm 1 The partitioning algorithm of RM-TS/light.
1: Order tasks τ_N^1, ..., τ_1^1 by increasing priorities
2: Mark all processors as non-full
3: while there exists a non-full processor and an unassigned task do
4:   Pick the next unassigned task τ_i^k
5:   Pick the non-full processor P_q with minimal U(P_q)
6:   Assign(τ_i^k, P_q)
7: end while
8: If there is an unassigned task, the algorithm fails, otherwise it succeeds.

Algorithm 2 The Assign(τ_i^k, P_q) routine.
1: if τ(P_q) with τ_i^k is still schedulable then
2:   Add τ_i^k to τ(P_q)
3: else
4:   Split τ_i^k via (τ_i^k, τ_i^{k+1}) := MaxSplit(τ_i^k, P_q)
5:   Add τ_i^k to τ(P_q)
6:   Mark P_q as full
7:   τ_i^{k+1} is the next task to assign
8: end if

If τ_i^k cannot be entirely assigned to the currently selected processor P_q, it will be split into two parts using routine MaxSplit(τ_i^k, P_q): the first part, which makes maximum use of the selected processor, and a remaining part of that task, which will be subject to assignment in the next iteration. The desired property here is that we want the first part to be as big as possible such that, after assigning it to P_q, all tasks on that processor will still be able to meet their deadlines. In order to state the effect of MaxSplit(τ_i^k, P_q) formally, we introduce the concept of a bottleneck:

Definition 2. A bottleneck of processor P_q is a (sub)task that is assigned to P_q and will become unschedulable if we increase the execution time of the task with the highest priority on P_q by an arbitrarily small positive number.

Note that there may be more than one bottleneck on a processor. Further, since RM-TS/light assigns tasks in increasing priority order, MaxSplit always operates on the task that has the highest priority on the processor in question. So we can state:

Definition 3. MaxSplit(τ_i^k, P_q) is a function that splits τ_i^k into two subtasks τ_i^k and τ_i^{k+1} such that:
1) τ_i^k can now be assigned to P_q without making any task in τ(P_q) unschedulable.
2) After assigning τ_i^k, P_q has a bottleneck.

MaxSplit can be implemented by, for example, performing a binary search over [0, C_i^k] to find the maximal portion of τ_i^k with which all tasks on P_q can meet their deadlines. A more efficient implementation of MaxSplit was presented in [22], in which one only needs to check a (small) number of possible values in [0, C_i^k]. The complexity of this improved implementation is still pseudo-polynomial, but in practice it is very efficient.
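A straightforward (if not the most efficient) realization of MaxSplit is the binary search just mentioned, reusing the processor_schedulable check from the RTA sketch above; the data layout and tolerance below are assumptions of this illustration, and [22] describes a faster exact variant:

    def max_split(C_total, T, assigned, delta_of_new, eps=1e-6):
        """Sketch of MaxSplit: binary-search over [0, C_total] for the largest
        budget C1 such that adding a subtask (C1, T, delta_of_new) to the
        processor keeps every (sub)task in `assigned` (list of (C, T, delta))
        schedulable; the remainder C_total - C1 is left for the next processor."""
        lo, hi = 0.0, C_total
        while hi - lo > eps:
            mid = (lo + hi) / 2
            trial = sorted(assigned + [(mid, T, delta_of_new)], key=lambda s: s[1])
            if processor_schedulable(trial):
                lo = mid
            else:
                hi = mid
        return lo, C_total - lo      # (portion assigned here, remaining portion)

The binary search is valid because schedulability is monotone in the assigned portion: a smaller first part causes less interference on the tasks already on the processor.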
The while loop in RM-TS/light terminates as soon as all processors are full or all tasks have been assigned. If the loop terminates for the first reason and there are still unassigned tasks left, the algorithm reports a failure of the partitioning, otherwise a success.

Calculating Synthetic Deadlines: Now we show how to calculate each (sub)task τ_i^k's synthetic deadline Δ_i^k, which was left open in the above presentation. If τ_i^k is a non-split task, its synthetic deadline trivially equals its period T_i. We consider the case that τ_i^k is a split subtask. Since tasks are assigned in increasing order of priorities, and a processor is full after a body subtask is assigned to it, we have the following lemma:

Lemma 2. A body subtask has the highest priority on its host processor.

A consequence is that the response time of each body subtask equals its execution time, and one can replace R_i^l by C_i^l in (1) to calculate the synthetic deadline of a subtask. Especially, we are interested in the synthetic deadlines of tail subtasks (we don't need to worry about a body subtask's synthetic deadline, since it has the highest priority on its host processor and is schedulable anyway). The calculation is stated in the following lemma.

Lemma 3. A tail subtask τ_i^t's synthetic deadline Δ_i^t is calculated by

  Δ_i^t = T_i - C_i^body

where C_i^body is the sum of the execution times of τ_i's body subtasks.

Scheduling at Run Time: At runtime, the tasks are scheduled according to the RMS priority order on each processor locally, i.e., with their original priorities. The subtasks of a split task respect their precedence relations, i.e., a split subtask τ_i^k is ready for execution when its preceding subtask τ_i^{k-1} on some other processor has finished.

From the presented partitioning and scheduling algorithm of RM-TS/light, it is clear that successful partitioning implies schedulability (remember that for split tasks, the synchronization delays have been counted into the synthetic deadlines, which are the ones used in the response time analysis to determine whether a task is schedulable). We state this in the following lemma:

Lemma 4. Any task set that has been successfully partitioned by RM-TS/light is schedulable.

B. Utilization Bound

We will now prove that RM-TS/light has the utilization bound Λ(τ) for light task sets, i.e., if a light task set τ is not successfully partitioned by RM-TS/light, then the sum of the assigned utilizations of all processors is at least M · Λ(τ). [Footnote 4: By this, the normalized utilization of τ strictly exceeds Λ(τ), since there are (sub)tasks not assigned to any of the processors after a failed partitioning.] In order to show this, we assume that the assigned utilization on some processor is strictly less than Λ(τ). We prove that this implies there is no bottleneck on that processor. This is a contradiction, because each processor with which MaxSplit has been used must have a bottleneck. We also know that MaxSplit was used for all processors, since the partitioning failed.

In the following, we assume P_q to be a processor with an assigned utilization of U(P_q) < Λ(τ). A task on P_q is either a non-split task, a body subtask or a tail subtask. The main part of the proof consists of showing that P_q cannot have a bottleneck of any type. As the first step, we show this for non-split tasks and body subtasks (Lemma 5), after which we deal with the more difficult case of tail subtasks (Lemma 7).

Lemma 5. Suppose task set τ is not schedulable by RM-TS/light, and after the partitioning phase it holds for a processor P_q that

  U(P_q) < Λ(τ)    (2)

Then a bottleneck of P_q is neither a non-split task nor a body subtask.

Proof: By Lemma 2 we know that a body subtask has the highest priority on P_q, so it can never be a bottleneck. For the case of non-split tasks, we will show that Condition (2) is sufficient for their deadlines to be met. The key observation is that although some split tasks on this processor may have a shorter deadline than period, this does not change the scheduling behavior of RMS, so Λ(τ) is still sufficient to guarantee the schedulability of a non-split task. For a more precise proof, we use Γ to denote the set of tasks on P_q, and construct a new task set Γ' corresponding to Γ such that each non-split task τ_i in Γ has a counterpart in Γ' that is exactly the same as τ_i, and each split subtask in Γ has a counterpart in Γ' with its deadline changed to equal its period. It is easy to see that Γ' can be obtained by decreasing some tasks' execution times in the original task set τ (a task in τ but not in Γ' can be considered as the case that we decrease its execution time to 0).

By Lemma 1 and Condition (2), the deflatable utilization bound Λ(τ) guarantees the schedulability of Γ'. Thus, if the execution time of the highest-priority task on P_q is increased by an arbitrarily small amount ε such that the total utilization still does not exceed Λ(τ), Γ' will still be schedulable. Recall that the only difference between Γ and Γ' is the subtasks' deadlines, and since the scheduling behavior of RMS does not depend on task deadlines (remember that at this moment we only want to guarantee the schedulability of non-split tasks), we can conclude that each non-split task in Γ is also schedulable, which is still true after adding ε to the highest-priority task on P_q.

In the following we prove that in a light task set, a bottleneck on a processor with utilization lower than Λ(τ) is not a tail subtask either. The proof goes in two steps: we first derive in Lemma 6 a general condition guaranteeing that a tail subtask cannot be a bottleneck; then we conclude in Lemma 7 that a bottleneck on a processor with utilization lower than Λ(τ) is not a tail subtask, by showing that the condition in Lemma 6 holds for each of these tail subtasks.

We use the following notation: let τ_i be a task split into B body subtasks τ_i^{b_1}, ..., τ_i^{b_B}, assigned to processors P_i^{b_1}, ..., P_i^{b_B} respectively, and a tail subtask τ_i^t assigned to processor P_i^t. The utilization of the tail subtask τ_i^t is U_i^t = C_i^t / T_i, and the utilization of a body subtask τ_i^{b_j} is U_i^{b_j} = C_i^{b_j} / T_i. We use U_i^body to denote the total utilization of all of τ_i's body subtasks: U_i^body = Σ_{j ∈ [1,B]} U_i^{b_j} = U_i - U_i^t.

For the tail subtask τ_i^t, let X_i^t denote the total utilization of all (sub)tasks assigned to P_i^t with lower priority than τ_i^t, and Y_i^t the total utilization of all (sub)tasks assigned to P_i^t with higher priority than τ_i^t. For each body subtask τ_i^{b_j}, let X_i^{b_j} denote the total utilization of all (sub)tasks assigned to P_i^{b_j} with lower priority than τ_i^{b_j}. (We do not need Y_i^{b_j}, since by Lemma 2 we know no task on P_i^{b_j} has higher priority than τ_i.)

We start with the general condition identifying non-bottleneck tail subtasks.

Lemma 6. Suppose a tail subtask τ_i^t is assigned to processor P_i^t, and Θ is the L&L bound. If

  Y_i^t + U_i^t < Θ · (1 - U_i^body)    (3)

then τ_i^t is not a bottleneck of processor P_i^t.

Proof: The lemma is proved by showing that τ_i^t is still schedulable after increasing the utilization of the task with the highest priority on P_i^t by a small number ε such that (Y_i^t + ε) + U_i^t < Θ · (1 - U_i^body) (note that one can always find such an ε). By the definitions of U_i^body and Δ_i^t, this equals

  ((Y_i^t + ε) + U_i^t) · T_i / Δ_i^t < Θ    (4)

The key of the proof is to show that Condition (4) still guarantees that τ_i^t can meet its deadline. Note that one cannot directly apply the L&L bound to the task set Γ consisting of τ_i^t and the tasks contributing to Y_i^t, since τ_i^t's deadline is shorter than its period, i.e., Γ does not comply with the L&L task model. In our proof, this problem is solved by the period shrinking technique [16]: we transform Γ into an L&L task set Γ' by reducing some of the task periods, and prove that the total utilization of Γ' is bounded by the LHS of (4), and thereby bounded by Θ. On the other hand, the construction of Γ' guarantees that the schedulability of Γ' implies the schedulability of τ_i^t. See [16] for details about the period shrinking technique.

Note that in Condition (3) of Lemma 6, the L&L bound Θ is involved. This is because in the proof we need to use the L&L bound, rather than the higher parametric bound Λ(τ), to guarantee the schedulability of the constructed task set Γ' in which some task periods are decreased.
For example, suppose the original task set is harmonic; the constructed set Γ' may not be harmonic, since some of the task periods are shortened to Δ_i^t, which is not necessarily harmonic with the other periods. So the 100% bound for harmonic task sets does not apply to Γ'. However, Θ is still applicable, since it only depends on, and is monotonically decreasing with respect to, the task number.

Having this lemma, we now show that a tail subtask τ_i^t cannot be a bottleneck either, if its host processor's utilization is less than Λ(τ), by proving Condition (3) for τ_i^t.

Lemma 7. Let τ be a light task set unschedulable by RM-TS/light, and let τ_i be a split task whose tail subtask τ_i^t is assigned to processor P_i^t. If

  U(P_i^t) < Λ(τ)    (5)

then τ_i^t is not a bottleneck of P_i^t.

Proof: The proof is by contradiction. We assume the lemma does not hold for one or more tasks, and let τ_i be the lowest-priority one among these tasks, i.e., τ_i^t is a bottleneck of its host processor P_i^t, and all tail subtasks with lower priorities are either not a bottleneck or on a processor with assigned utilization at least Λ(τ).

Recall that {τ_i^{b_j}}_{j ∈ [1,B]} are the B body subtasks of τ_i, and P_i^t and {P_i^{b_j}}_{j ∈ [1,B]} are the processors hosting the corresponding tail and body subtasks. Since a body subtask has the highest priority on its host processor (Lemma 2) and tasks are assigned in increasing priority order, all tail subtasks on the processors {P_i^{b_j}}_{j ∈ [1,B]} have lower priorities than τ_i.

We will first show that all processors {P_i^{b_j}}_{j ∈ [1,B]} have an individual assigned utilization of at least Λ(τ). We do this by contradiction: assume there is a P_i^{b_j} with U(P_i^{b_j}) < Λ(τ). Since tasks are assigned in increasing priority order, we know any tail subtask on P_i^{b_j} has lower priority than τ_i. And since τ_i is the lowest-priority task violating the lemma and U(P_i^{b_j}) < Λ(τ), we know any tail subtask on P_i^{b_j} is not a bottleneck. At the same time, U(P_i^{b_j}) < Λ(τ) also implies that the non-split tasks and body subtasks on P_i^{b_j} are not bottlenecks either (by Lemma 5). So we can conclude that there is no bottleneck on P_i^{b_j}, which contradicts the fact that there is at least one bottleneck on each processor.

So the assumption of P_i^{b_j}'s assigned utilization being lower than Λ(τ) must be false, by which we can conclude that all processors hosting τ_i's body subtasks have assigned utilization at least Λ(τ). Thus we have:

  Σ_{j ∈ [1,B]} (U_i^{b_j} + X_i^{b_j}) ≥ B · Λ(τ)    (6)

where the j-th summand equals U(P_i^{b_j}). Further, the assumption from Condition (5) can be rewritten as:

  X_i^t + Y_i^t + U_i^t < Λ(τ)    (7)

We combine (6) and (7) into:

  X_i^t + Y_i^t + U_i^t < (1/B) · Σ_{j ∈ [1,B]} (U_i^{b_j} + X_i^{b_j})

Since the partitioning algorithm selects at each step the processor on which the so-far assigned utilization is minimal, we have ∀ j ∈ [1,B]: X_i^{b_j} ≤ X_i^t. Thus, the inequality can be relaxed to:

  Y_i^t + U_i^t < (1/B) · Σ_{j ∈ [1,B]} U_i^{b_j}

We also have B ≥ 1 and U_i^body = Σ_{j ∈ [1,B]} U_i^{b_j}, so:

  Y_i^t + U_i^t < U_i^body

Now, in order to get to Condition (3), which implies that τ_i^t is not a bottleneck (Lemma 6), we need to show that the RHS of this inequality is bounded by the RHS of Condition (3), i.e., that:

  U_i^body ≤ Θ · (1 - U_i^body)

It is easy to see that this is equivalent to the following, which holds since τ_i is by assumption a light task:

  U_i^body ≤ Θ/(1+Θ)

By now we have proved Condition (3) for τ_i^t, and by Lemma 6 we know τ_i^t is not a bottleneck on P_i^t, which contradicts our assumption.

We are now ready to present RM-TS/light's utilization bound.

Theorem 8. Λ(τ) is a utilization bound of RM-TS/light for light task sets, i.e., any light task set τ with U_M(τ) ≤ Λ(τ) is schedulable by RM-TS/light.

Proof: Assume a light task set τ with U_M(τ) ≤ Λ(τ) is not schedulable by RM-TS/light, i.e., there are tasks not assigned to any of the processors after the partitioning procedure with τ. By this we know the sum of the assigned utilizations of all processors after the partitioning is strictly less than M · Λ(τ), so there is at least one processor P_q with a utilization strictly less than Λ(τ). By Lemma 5 we know the bottleneck of this processor is neither a non-split task nor a body subtask, and by Lemma 7 we know the bottleneck is not a tail subtask either, so there is no bottleneck on this processor. This contradicts the property of the partitioning algorithm that all processors to which no more tasks can be assigned must have a bottleneck.

V. THE ALGORITHM FOR ANY TASK SET

In this section, we introduce RM-TS, which removes the restriction to light task sets in RM-TS/light. We will show that RM-TS can achieve a D-PUB Λ(τ) for any task set τ, if Λ(τ) does not exceed 2Θ/(1+Θ). In other words, if one can derive a D-PUB Λ'(τ) from τ's parameters under uni-processor RMS, RM-TS can achieve the utilization bound Λ(τ) = min(Λ'(τ), 2Θ/(1+Θ)). Note that 2Θ/(1+Θ) is decreasing with respect to N, and it is around 81.8% when N goes to infinity. For example, we can instantiate our result with the harmonic chain bound K(2^{1/K} - 1):
- K = 3: Since 3(2^{1/3} - 1) ≈ 77.9% < 81.8%, any task set τ in which there are at most 3 harmonic chains is schedulable by our algorithm RM-TS on M processors if its normalized utilization U_M(τ) is no larger than 77.9%.
- K = 2: Since 2(2^{1/2} - 1) ≈ 82.8% > 81.8%, we know 81.8% can be used as the utilization bound in this case: any task set τ in which there are at most 2 harmonic chains is schedulable by our algorithm RM-TS on M processors if its normalized utilization U_M(τ) is no larger than 81.8%.

So we can see that, despite an upper bound on Λ(τ), RM-TS still provides significant room for higher utilization bounds.
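The two instantiations above can be reproduced in a few lines (an illustrative sketch; here Θ is taken at its limit value ln 2, whereas the precise bound uses Θ = N(2^{1/N} - 1) for the actual task count N):

    import math

    THETA = math.log(2)                       # L&L bound as N goes to infinity
    CAP = 2 * THETA / (1 + THETA)             # 2*Theta/(1+Theta) ~ 0.818

    def rm_ts_bound(k_harmonic_chains: int) -> float:
        # Lambda(tau) = min(Lambda'(tau), 2*Theta/(1+Theta)), with Lambda' the HC bound
        hc = k_harmonic_chains * (2 ** (1.0 / k_harmonic_chains) - 1)
        return min(hc, CAP)

    print(rm_ts_bound(3))   # ~0.779  (harmonic chain bound is below the cap)
    print(rm_ts_bound(2))   # ~0.818  (capped by 2*Theta/(1+Theta))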
For simplicity of presentation, we assume each task's utilization is bounded by Λ(τ). Note that this assumption does not invalidate the utilization bound of our algorithm for task sets in which some individual task's utilization is above Λ(τ). [Footnote 5: One can let tasks with a utilization of more than Λ(τ) execute exclusively on a dedicated processor each. If we can prove that the utilization bound of all the other tasks on all the other processors is Λ(τ), then the utilization bound of the overall system is also at least Λ(τ).]

RM-TS adds a pre-assignment mechanism to handle the heavy tasks. In the pre-assignment, we first identify the heavy tasks whose tail subtasks would have low priority if they were split, and pre-assign these tasks to one processor each, which avoids the split. The identification is checked by a simple test condition, called the pre-assign condition. Those heavy tasks that do not satisfy this condition will be assigned (and possibly split) later, together with the light tasks. Note that the number of tasks that need to be pre-assigned is at most the number of processors. This will become clear in the algorithm description.

We introduce some notation. If a heavy task τ_i is pre-assigned to a processor P_q, we call τ_i a pre-assigned task and P_q a pre-assigned processor; otherwise, τ_i is a normal task and P_q a normal processor.

A. Algorithm Description

The partitioning algorithm of RM-TS contains three phases:

1) We first pre-assign the heavy tasks that satisfy the pre-assign condition to one processor each, in decreasing priority order.
2) We do task partitioning with the remaining (i.e., normal) tasks and remaining (i.e., normal) processors, similar to RM-TS/light, until all the normal processors are full.
3) The remaining tasks are assigned to the pre-assigned processors in increasing priority order; the assignment selects the processor hosting the lowest-priority pre-assigned task, assigns as many tasks as possible until it is full, and then selects the next processor.

The pseudo-code of RM-TS is given in Algorithms 3 and 4. At the beginning of the algorithm, all the processors are marked as normal and non-full. In the first phase, we visit all the tasks in decreasing priority order, and for each heavy task we determine whether we should pre-assign it or not, by checking the pre-assign condition:

  Σ_{j>i} U_j ≤ (|P(τ_i)| - 1) · Λ(τ)    (8)

where |P(τ_i)| is the number of processors marked as normal at the moment we are checking τ_i. If this condition is satisfied, we pre-assign this heavy task to the currently selected processor, which is the one with the minimal index among all normal processors, and mark this processor as pre-assigned. Otherwise, we do not pre-assign this heavy task, and leave it to the following phases. The intuition of the pre-assign condition (8) is: we pre-assign a heavy task τ_i if the total utilization of lower-priority tasks is relatively small, since otherwise its tail subtask may end up with a low priority on the corresponding processor. Note that, no matter how many heavy tasks there are in the system, the number of pre-assigned tasks is at most the number of processors: after |P(τ_i)| reaches 0, the pre-assign condition never holds, and no more heavy tasks will be pre-assigned.

In the second phase we assign the remaining tasks to normal processors only. Note that the remaining tasks are either light tasks or heavy tasks that do not satisfy the pre-assign condition. The assignment policy in this phase is the same as for RM-TS/light: we sort tasks in increasing priority order, and at each step select the normal processor P_q with the minimal assigned utilization. Then we do the task assignment: we either add τ_i^k to τ(P_q) if τ_i^k can be entirely assigned to P_q, or split τ_i^k and assign a maximized portion of it to P_q otherwise.

In the third phase we continue to assign the remaining tasks to pre-assigned processors. There is an important difference between the second phase and the third phase: in the second phase tasks are assigned by a worst-fit strategy, i.e., the utilization of all processors is increased evenly, while in the third phase tasks are assigned by a first-fit strategy. More precisely, we select the pre-assigned processor which hosts the lowest-priority pre-assigned task among all non-full processors. We assign as much workload as possible to it, until it is full, and then move to the next processor. This strategy is one of the key points to facilitate the induction-based proof of the utilization bound in the next subsection.

Algorithm 3 The partitioning algorithm of RM-TS.
1: Mark all processors as normal and non-full
// Phase 1: Pre-assignment
2: Sort all tasks in τ in decreasing priority order
3: for each task in τ do
4:   Pick next task τ_i
5:   if DeterminePreAssign(τ_i) then
6:     Pick the normal processor P_q with the minimal index
7:     Add τ_i to τ(P_q)
8:     Mark P_q as pre-assigned
9:   end if
10: end for
// Phase 2: Assign remaining tasks to normal processors
11: Sort all unassigned tasks in increasing priority order
12: while there is a non-full normal processor and an unassigned task do
13:   Pick next unassigned task τ_i^k
14:   Pick the non-full normal processor P_q with minimal U(P_q)
15:   Assign(τ_i^k, P_q)
16: end while
// Phase 3: Assign remaining tasks to pre-assigned processors
// Remaining tasks are still in increasing priority order
17: while there is a non-full pre-assigned processor and an unassigned task do
18:   Pick next unassigned task τ_i^k
19:   Pick the non-full pre-assigned processor P_q with the largest index
20:   Assign(τ_i^k, P_q)
21: end while
22: If there is an unassigned task, the algorithm fails, otherwise it succeeds.

Algorithm 4 The DeterminePreAssign(τ_i) routine.
1: P(τ_i) := the set of normal processors at this moment
2: if τ_i is heavy then
3:   if Σ_{j>i} U_j ≤ (|P(τ_i)| - 1) · Λ(τ) then
4:     return true
5:   end if
6: end if
7: return false

After these three phases, the partitioning fails if there still are unassigned tasks left, otherwise it is successful. At runtime, the tasks assigned to each processor are scheduled by RMS with their original priorities, and the subtasks of a split task need to respect their precedence relations, which is the same as in RM-TS/light.
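The pre-assign test of Algorithm 4 is easy to state in code (an illustrative sketch with assumed data structures: utilizations is indexed by priority, so indices j > i are the lower-priority tasks, and normal_count stands for |P(τ_i)| at the moment of the check):

    def is_heavy(u_i: float, theta: float) -> bool:
        # Definition 1: heavy iff U_i > Theta / (1 + Theta)
        return u_i > theta / (1 + theta)

    def determine_pre_assign(i, utilizations, normal_count, lam, theta):
        """Condition (8): pre-assign task i iff it is heavy and the total
        utilization of all lower-priority tasks fits into the remaining
        normal processors, i.e. sum_{j>i} U_j <= (|P(tau_i)| - 1) * Lambda."""
        if not is_heavy(utilizations[i], theta):
            return False
        lower_prio_util = sum(utilizations[i + 1:])
        return lower_prio_util <= (normal_count - 1) * lam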

Note that, when Assign calculates the synthetic deadlines and verifies whether the tasks assigned to a processor are schedulable, it assumes that any body subtask has the highest priority on its host processor, which has been proved true for RM-TS/light in Lemma 2. It is easy to see that this assumption also holds for the second phase of RM-TS (the task assignment on normal processors), in which tasks are assigned in exactly the same way as in RM-TS/light. But it is not clear at this moment whether this assumption also holds for the third phase, since there are pre-assigned tasks already assigned to the pre-assigned processors in the first phase, and there is a risk that a pre-assigned task might have higher priority than the body subtask on that processor. However, as will be shown in the proof of Lemma 14, a body subtask on a pre-assigned processor has the highest priority on its host processor, thus the routine Assign indeed performs a correct schedulability analysis for task assignment and splitting, by which we know any task set successfully partitioned by RM-TS is guaranteed to meet all deadlines at run time.

B. Utilization Bound

The proof of the utilization bound Λ(τ) for RM-TS follows a similar pattern as the proof for RM-TS/light, by assuming a task set τ that cannot be completely assigned. The main difficulty is that we now have to deal with heavy tasks as well. Recall that the approach in Section IV was to show an individual utilization of at least Λ(τ) on each single processor after an overflowed partitioning phase. However, for RM-TS, we will not do that directly. Instead, we will show the appropriate bound for sets of processors.

We first introduce some additional notation. Let's assume that K ≥ 0 heavy tasks are pre-assigned in the first phase of RM-TS. Then P is partitioned into the set of pre-assigned processors

  P_P := {P_1, ..., P_K}

and the set of normal processors

  P_N := {P_{K+1}, ..., P_M}.

We also use

  P^{≥q} := {P_q, ..., P_M}

to denote the set of processors with index at least q.

We want to show that, after a failed partitioning procedure of τ, the total utilization sum of all processors is at least M · Λ(τ). We do this by proving the property

  Σ_{P_j ∈ P^{≥q}} U(P_j) ≥ |P^{≥q}| · Λ(τ)

by induction, starting from the base case of the normal processors (Lemma 10) and using the inductive hypothesis with q = m + 1 to derive this property for q = m, for each m ≤ K. When q = 1, it implies the expected bound M · Λ(τ) for all the M processors.

1) Base Case: The proof strategy for the base case is: we assume that the total assigned utilization of the normal processors is below the expected bound, by which we can derive the absence of bottlenecks on some processors in P_N. This contradicts the fact that there is at least one bottleneck on each processor after a failed partitioning procedure.

First, Lemma 5 still holds for normal processors under RM-TS, i.e., a bottleneck on a normal processor with assigned utilization lower than Λ(τ) is neither a non-split task nor a body subtask. This is because the partitioning procedure of RM-TS on normal processors is exactly the same as RM-TS/light, and one can reuse the reasoning for Lemma 5 here. In the following, we focus on the difficult case of tail subtasks.

Lemma 9. Suppose there are remaining tasks after the second phase of RM-TS. Let τ_i^t be a tail subtask assigned to P_i^t. If both of the following conditions are satisfied

  Σ_{P_q ∈ P_N} U(P_q) < |P_N| · Λ(τ)    (9)

  U(P_i^t) < Λ(τ)    (10)

then τ_i^t is not a bottleneck on P_i^t.

Proof: We prove by contradiction: we assume the lemma does not hold for one or more tasks, and let τ_i be the lowest-priority one among these tasks.
Similar to the proof of its counterpart in RM-TS/light (Lemma 7), we will first show that all processors hosting τ_i's body subtasks have assigned utilization at least Λ(τ). We do this by contradiction. We assume U(P_i^{b_j}) < Λ(τ), and by Condition (9) we know the tail subtasks on P_i^{b_j} are not bottlenecks (the tail subtasks on P_i^{b_j} all satisfy this lemma, since they all have lower priorities than τ_i, and by assumption τ_i is the lowest-priority task that does not satisfy this lemma). By Lemma 5 (which still holds for normal processors, as discussed above), we know a bottleneck of P_i^{b_j} is neither a non-split task nor a body subtask. So we can conclude that there is no bottleneck on P_i^{b_j}, which is a contradiction. Therefore, we have proved that all processors hosting τ_i's body subtasks have assigned utilization at least Λ(τ). This result will be used later in this proof.

In the following we prove that τ_i^t is not a bottleneck, by deriving Condition (3) and applying Lemma 6 to τ_i^t. τ_i is either light or heavy. For the case that τ_i is light, the proof is exactly the same as for Lemma 7, since the second phase of RM-TS works in exactly the same way as RM-TS/light. Note that to prove the light task case, only Condition (9) is needed (the same as in Lemma 7). In the following we consider the case that τ_i is heavy. We distinguish two cases.

Case 1: U_i^body ≥ (Λ(τ) - Θ)/(1 - Θ). Since τ_i is a heavy task but not pre-assigned, it failed the pre-assign condition, satisfying the negation of that condition:

  Σ_{j>i} U_j > (|P(τ_i)| - 1) · Λ(τ)    (11)

We split the utilization sum of all tasks with lower priority than τ_i into two parts: U_α, the part contributed by tasks assigned to normal processors, and U_β, the part contributed by pre-assigned tasks. By the construction of the partitioning algorithm, the U_β part resides on the pre-assigned processors in P(τ_i) \ P_N. We further know that each pre-assigned processor hosts one pre-assigned task, and each task has a utilization of at most Λ(τ) (our assumption stated at the beginning of Section V). Thus, we have:

  U_β ≤ (|P(τ_i)| - |P_N|) · Λ(τ)    (12)

By replacing Σ_{j>i} U_j by U_α + U_β in (11) and applying (12), we get:

  U_α > (|P_N| - 1) · Λ(τ)    (13)

The assigned utilization on the processors in P_N consists of three parts: (i) the utilization of tasks with lower priority than τ_i, (ii) the utilization of τ_i, and (iii) the utilization of tasks with higher priority than τ_i. We know that part (i) is U_α, part (ii) is U_i, and part (iii) is at least Y_i^t. So we have

  U_α + U_i + Y_i^t ≤ Σ_{P_q ∈ P_N} U(P_q)    (14)

By Condition (9), (13) and (14) we get

  U_i + Y_i^t < Λ(τ),  i.e.,  Y_i^t + U_i^t < Λ(τ) - U_i^body

In order to use this to derive Condition (3) of Lemma 6, which indicates that τ_i^t is not a bottleneck, we need to prove

  Λ(τ) - U_i^body ≤ Θ · (1 - U_i^body),  i.e.,  U_i^body ≥ (Λ(τ) - Θ)/(1 - Θ)   (since Θ < 1)

which is obviously true by the precondition of this case.

Case 2: U_i^body < (Λ(τ) - Θ)/(1 - Θ). First, Condition (10) can be rewritten as

  X_i^t + Y_i^t + U_i^t < Λ(τ)    (15)

Since all processors hosting τ_i's body subtasks have an assigned utilization of at least Λ(τ) (proved above), we have

  Σ_{j ∈ [1,B]} X_i^{b_j} + U_i^body ≥ B · Λ(τ)

Since at each step of the second phase, RM-TS always selects the processor with the minimal assigned utilization to assign the current (sub)task, we have X_i^t ≥ X_i^{b_j} for each X_i^{b_j}. Therefore we have

  B · X_i^t + U_i^body ≥ B · Λ(τ),  hence  X_i^t ≥ Λ(τ) - U_i^body    (since B ≥ 1)

Combining this with (15) we get

  Y_i^t + U_i^t < U_i^body

Now, to prove Condition (3) of Lemma 6, which indicates that τ_i^t is not a bottleneck, we only need to show

  U_i^body ≤ Θ · (1 - U_i^body),  i.e.,  U_i^body ≤ Θ/(1+Θ)

Due to the precondition of this case, U_i^body < (Λ(τ) - Θ)/(1 - Θ), we only need to prove

  (Λ(τ) - Θ)/(1 - Θ) ≤ Θ/(1+Θ),  i.e.,  Λ(τ) ≤ 2Θ/(1+Θ)

which is true since Λ(τ) is assumed to be at most 2Θ/(1+Θ) in RM-TS. In summary, we know τ_i^t is not a bottleneck.

By the above reasoning, we can establish the base case:

Lemma 10. Suppose there are remaining tasks after the second phase of RM-TS (so there exists at least one bottleneck on each normal processor). We have:

  Σ_{P_q ∈ P_N} U(P_q) ≥ |P_N| · Λ(τ)

2) Inductive Step: We start with a useful property concerning the pre-assigned tasks' local priorities.

Lemma 11. Suppose P_m is a pre-assigned processor. If

  Σ_{P_q ∈ P^{≥m+1}} U(P_q) ≥ |P^{≥m+1}| · Λ(τ)    (16)

then the pre-assigned task on P_m has the lowest priority among all tasks assigned to P_m.

Proof: Let τ_i be the pre-assigned task on P_m. Since τ_i is pre-assigned, we know that it satisfies the pre-assign condition:

  Σ_{j>i} U_j ≤ (|P(τ_i)| - 1) · Λ(τ) = |P^{≥m+1}| · Λ(τ)

(when τ_i was pre-assigned to P_m, exactly the processors P_1, ..., P_{m-1} had already been marked as pre-assigned, so P(τ_i) = P^{≥m}). Using this with (16) we have:

  Σ_{P_q ∈ P^{≥m+1}} U(P_q) ≥ Σ_{j>i} U_j    (17)

which means the total capacity of the processors with larger indices is enough to accommodate all lower-priority tasks. By the partitioning algorithm, we know that no task, except τ_i which has been pre-assigned already, will be assigned to P_m before all processors with larger indices are full. So no task with priority lower than τ_i will be assigned to P_m.

Now we start the main proof of the inductive step.

Lemma 12. We use RM-TS to partition task set τ. Suppose there are remaining tasks after processor P_m is full (so there exists at least one bottleneck on P_m). If

  Σ_{P_q ∈ P^{≥m+1}} U(P_q) ≥ |P^{≥m+1}| · Λ(τ)    (18)

then we have

  Σ_{P_q ∈ P^{≥m}} U(P_q) ≥ |P^{≥m}| · Λ(τ)

Proof: We prove by contradiction. Assume

  Σ_{P_q ∈ P^{≥m}} U(P_q) < |P^{≥m}| · Λ(τ)    (19)

With assumption (18) this implies the bound on P_m's utilization:

  U(P_m) < Λ(τ)    (20)

As before, with (20) we want to prove that a bottleneck on P_m is neither a non-split task, a body subtask nor a tail subtask, which forms a contradiction and completes the proof. In the following we consider each type individually.

We first consider non-split tasks. Again, Λ(τ) is sufficient to guarantee the schedulability of non-split tasks, although the relative deadlines of split subtasks on this processor may change. Thus, (20) implies that a non-split task cannot be a bottleneck of P_m.

Then we consider body subtasks. By Lemma 11 we know the pre-assigned task has the lowest priority on P_m. We also know that all normal tasks on P_m have lower priority than the body subtask, since in the third phase of RM-TS tasks are assigned in increasing priority order. Therefore, we can conclude that the body subtask has the highest priority on P_m, and cannot be a bottleneck.

Finally, we consider tail subtasks. Let τ_i^t be a tail subtask assigned to P_m. We distinguish the following two cases:

Case 1: U_i^body < Θ/(1+Θ). The inductive hypothesis (18) together with Lemma 11 guarantees that the pre-assigned task has the lowest priority on P_m, so X_i^t contains at least the utilization of this pre-assigned task, which is heavy. So we have:

  X_i^t > Θ/(1+Θ)    (21)

We can rewrite (20) as X_i^t + Y_i^t + U_i^t < Λ(τ) and apply (21) to get:

  Y_i^t + U_i^t < Λ(τ) - Θ/(1+Θ)    (22)

Recall that Λ(τ) is restricted by an upper bound in RM-TS: Λ(τ) ≤ 2Θ/(1+Θ), which implies

  Λ(τ) - Θ/(1+Θ) ≤ Θ/(1+Θ) = Θ · (1 - Θ/(1+Θ))

By applying U_i^body < Θ/(1+Θ) to the above we have

  Λ(τ) - Θ/(1+Θ) < Θ · (1 - U_i^body)

And by (22) we have Y_i^t + U_i^t < Θ · (1 - U_i^body). By Lemma 6 we know τ_i^t is not a bottleneck.

Case 2: U_i^body ≥ Θ/(1+Θ). Since τ_i is a heavy task but not pre-assigned, it failed the pre-assign condition, satisfying the negation of that condition:

  Σ_{j>i} U_j > (|P(τ_i)| - 1) · Λ(τ)    (23)

We split the utilization sum of all tasks with lower priority than τ_i into two parts: U_α, the part contributed by tasks assigned to processors in P^{≥m}, and U_β, the part contributed by pre-assigned tasks on P \ P^{≥m}. By the construction of the partitioning algorithm, the U_β part resides on processors in P(τ_i) \ P^{≥m}. We further know that each pre-assigned processor hosts one pre-assigned task, and each task has a utilization of at most Λ(τ) (our assumption stated at the beginning of Section V). Thus, we have:

  U_β ≤ (|P(τ_i)| - |P^{≥m}|) · Λ(τ)    (24)

By replacing Σ_{j>i} U_j by U_α + U_β in (23) and applying (24), we get:

  U_α > (|P^{≥m}| - 1) · Λ(τ)    (25)

The assigned utilization on the processors in P^{≥m} consists of three parts: (i) the utilization of tasks with lower priority than τ_i, (ii) the utilization of τ_i, and (iii) the utilization of tasks with higher priority than τ_i. We know that part (i) is at least U_α, part (ii) is U_i, and part (iii) is at least Y_i^t. So we have

  U_α + U_i + Y_i^t ≤ Σ_{P_q ∈ P^{≥m}} U(P_q)    (26)

By (19), (25) and (26) we have:

  Y_i^t + U_i < Λ(τ)
  hence  Y_i^t + U_i^t < Λ(τ) - U_i^body
  hence  Y_i^t + U_i^t < 2Θ/(1+Θ) - U_i^body    (since Λ(τ) ≤ 2Θ/(1+Θ))

By the precondition of this case, U_i^body ≥ Θ/(1+Θ), we have

  2Θ/(1+Θ) - U_i^body ≤ Θ · (1 - U_i^body)

Applying this to the above we get Y_i^t + U_i^t < Θ · (1 - U_i^body). By Lemma 6 we know τ_i^t is not a bottleneck.

In summary, we have shown that in both cases the tail subtask τ_i^t is not a bottleneck of P_m. So we can conclude that there is no bottleneck on P_m, which results in a contradiction and establishes the proof.


More information

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix Lectures - Week 4 Matrx norms, Condtonng, Vector Spaces, Lnear Independence, Spannng sets and Bass, Null space and Range of a Matrx Matrx Norms Now we turn to assocatng a number to each matrx. We could

More information

Partitioned Mixed-Criticality Scheduling on Multiprocessor Platforms

Partitioned Mixed-Criticality Scheduling on Multiprocessor Platforms Parttoned Mxed-Crtcalty Schedulng on Multprocessor Platforms Chuanca Gu 1, Nan Guan 1,2, Qngxu Deng 1 and Wang Y 1,2 1 Northeastern Unversty, Chna 2 Uppsala Unversty, Sweden Abstract Schedulng mxed-crtcalty

More information

Difference Equations

Difference Equations Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1

More information

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification E395 - Pattern Recognton Solutons to Introducton to Pattern Recognton, Chapter : Bayesan pattern classfcaton Preface Ths document s a soluton manual for selected exercses from Introducton to Pattern Recognton

More information

MMA and GCMMA two methods for nonlinear optimization

MMA and GCMMA two methods for nonlinear optimization MMA and GCMMA two methods for nonlnear optmzaton Krster Svanberg Optmzaton and Systems Theory, KTH, Stockholm, Sweden. krlle@math.kth.se Ths note descrbes the algorthms used n the author s 2007 mplementatons

More information

Outline. Communication. Bellman Ford Algorithm. Bellman Ford Example. Bellman Ford Shortest Path [1]

Outline. Communication. Bellman Ford Algorithm. Bellman Ford Example. Bellman Ford Shortest Path [1] DYNAMIC SHORTEST PATH SEARCH AND SYNCHRONIZED TASK SWITCHING Jay Wagenpfel, Adran Trachte 2 Outlne Shortest Communcaton Path Searchng Bellmann Ford algorthm Algorthm for dynamc case Modfcatons to our algorthm

More information

Improved Worst-Case Response-Time Calculations by Upper-Bound Conditions

Improved Worst-Case Response-Time Calculations by Upper-Bound Conditions Improved Worst-Case Response-Tme Calculatons by Upper-Bound Condtons Vctor Pollex, Steffen Kollmann, Karsten Albers and Frank Slomka Ulm Unversty Insttute of Embedded Systems/Real-Tme Systems {frstname.lastname}@un-ulm.de

More information

Case A. P k = Ni ( 2L i k 1 ) + (# big cells) 10d 2 P k.

Case A. P k = Ni ( 2L i k 1 ) + (# big cells) 10d 2 P k. THE CELLULAR METHOD In ths lecture, we ntroduce the cellular method as an approach to ncdence geometry theorems lke the Szemeréd-Trotter theorem. The method was ntroduced n the paper Combnatoral complexty

More information

Notes on Frequency Estimation in Data Streams

Notes on Frequency Estimation in Data Streams Notes on Frequency Estmaton n Data Streams In (one of) the data streamng model(s), the data s a sequence of arrvals a 1, a 2,..., a m of the form a j = (, v) where s the dentty of the tem and belongs to

More information

Common loop optimizations. Example to improve locality. Why Dependence Analysis. Data Dependence in Loops. Goal is to find best schedule:

Common loop optimizations. Example to improve locality. Why Dependence Analysis. Data Dependence in Loops. Goal is to find best schedule: 15-745 Lecture 6 Data Dependence n Loops Copyrght Seth Goldsten, 2008 Based on sldes from Allen&Kennedy Lecture 6 15-745 2005-8 1 Common loop optmzatons Hostng of loop-nvarant computatons pre-compute before

More information

2.3 Nilpotent endomorphisms

2.3 Nilpotent endomorphisms s a block dagonal matrx, wth A Mat dm U (C) In fact, we can assume that B = B 1 B k, wth B an ordered bass of U, and that A = [f U ] B, where f U : U U s the restrcton of f to U 40 23 Nlpotent endomorphsms

More information

Two Methods to Release a New Real-time Task

Two Methods to Release a New Real-time Task Two Methods to Release a New Real-tme Task Abstract Guangmng Qan 1, Xanghua Chen 2 College of Mathematcs and Computer Scence Hunan Normal Unversty Changsha, 410081, Chna qqyy@hunnu.edu.cn Gang Yao 3 Sebel

More information

Lecture 4. Instructor: Haipeng Luo

Lecture 4. Instructor: Haipeng Luo Lecture 4 Instructor: Hapeng Luo In the followng lectures, we focus on the expert problem and study more adaptve algorthms. Although Hedge s proven to be worst-case optmal, one may wonder how well t would

More information

A combinatorial problem associated with nonograms

A combinatorial problem associated with nonograms A combnatoral problem assocated wth nonograms Jessca Benton Ron Snow Nolan Wallach March 21, 2005 1 Introducton. Ths work was motvated by a queston posed by the second named author to the frst named author

More information

Supplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso

Supplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso Supplement: Proofs and Techncal Detals for The Soluton Path of the Generalzed Lasso Ryan J. Tbshran Jonathan Taylor In ths document we gve supplementary detals to the paper The Soluton Path of the Generalzed

More information

Appendix B: Resampling Algorithms

Appendix B: Resampling Algorithms 407 Appendx B: Resamplng Algorthms A common problem of all partcle flters s the degeneracy of weghts, whch conssts of the unbounded ncrease of the varance of the mportance weghts ω [ ] of the partcles

More information

= z 20 z n. (k 20) + 4 z k = 4

= z 20 z n. (k 20) + 4 z k = 4 Problem Set #7 solutons 7.2.. (a Fnd the coeffcent of z k n (z + z 5 + z 6 + z 7 + 5, k 20. We use the known seres expanson ( n+l ( z l l z n below: (z + z 5 + z 6 + z 7 + 5 (z 5 ( + z + z 2 + z + 5 5

More information

Calculation of time complexity (3%)

Calculation of time complexity (3%) Problem 1. (30%) Calculaton of tme complexty (3%) Gven n ctes, usng exhaust search to see every result takes O(n!). Calculaton of tme needed to solve the problem (2%) 40 ctes:40! dfferent tours 40 add

More information

The Order Relation and Trace Inequalities for. Hermitian Operators

The Order Relation and Trace Inequalities for. Hermitian Operators Internatonal Mathematcal Forum, Vol 3, 08, no, 507-57 HIKARI Ltd, wwwm-hkarcom https://doorg/0988/mf088055 The Order Relaton and Trace Inequaltes for Hermtan Operators Y Huang School of Informaton Scence

More information

Volume 18 Figure 1. Notation 1. Notation 2. Observation 1. Remark 1. Remark 2. Remark 3. Remark 4. Remark 5. Remark 6. Theorem A [2]. Theorem B [2].

Volume 18 Figure 1. Notation 1. Notation 2. Observation 1. Remark 1. Remark 2. Remark 3. Remark 4. Remark 5. Remark 6. Theorem A [2]. Theorem B [2]. Bulletn of Mathematcal Scences and Applcatons Submtted: 016-04-07 ISSN: 78-9634, Vol. 18, pp 1-10 Revsed: 016-09-08 do:10.1805/www.scpress.com/bmsa.18.1 Accepted: 016-10-13 017 ScPress Ltd., Swtzerland

More information

(1 ) (1 ) 0 (1 ) (1 ) 0

(1 ) (1 ) 0 (1 ) (1 ) 0 Appendx A Appendx A contans proofs for resubmsson "Contractng Informaton Securty n the Presence of Double oral Hazard" Proof of Lemma 1: Assume that, to the contrary, BS efforts are achevable under a blateral

More information

CHAPTER 17 Amortized Analysis

CHAPTER 17 Amortized Analysis CHAPTER 7 Amortzed Analyss In an amortzed analyss, the tme requred to perform a sequence of data structure operatons s averaged over all the operatons performed. It can be used to show that the average

More information

Errors for Linear Systems

Errors for Linear Systems Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch

More information

Graph Reconstruction by Permutations

Graph Reconstruction by Permutations Graph Reconstructon by Permutatons Perre Ille and Wllam Kocay* Insttut de Mathémathques de Lumny CNRS UMR 6206 163 avenue de Lumny, Case 907 13288 Marselle Cedex 9, France e-mal: lle@ml.unv-mrs.fr Computer

More information

Welfare Properties of General Equilibrium. What can be said about optimality properties of resource allocation implied by general equilibrium?

Welfare Properties of General Equilibrium. What can be said about optimality properties of resource allocation implied by general equilibrium? APPLIED WELFARE ECONOMICS AND POLICY ANALYSIS Welfare Propertes of General Equlbrum What can be sad about optmalty propertes of resource allocaton mpled by general equlbrum? Any crteron used to compare

More information

Maximizing the number of nonnegative subsets

Maximizing the number of nonnegative subsets Maxmzng the number of nonnegatve subsets Noga Alon Hao Huang December 1, 213 Abstract Gven a set of n real numbers, f the sum of elements of every subset of sze larger than k s negatve, what s the maxmum

More information

Kernel Methods and SVMs Extension

Kernel Methods and SVMs Extension Kernel Methods and SVMs Extenson The purpose of ths document s to revew materal covered n Machne Learnng 1 Supervsed Learnng regardng support vector machnes (SVMs). Ths document also provdes a general

More information

CS : Algorithms and Uncertainty Lecture 17 Date: October 26, 2016

CS : Algorithms and Uncertainty Lecture 17 Date: October 26, 2016 CS 29-128: Algorthms and Uncertanty Lecture 17 Date: October 26, 2016 Instructor: Nkhl Bansal Scrbe: Mchael Denns 1 Introducton In ths lecture we wll be lookng nto the secretary problem, and an nterestng

More information

5 The Rational Canonical Form

5 The Rational Canonical Form 5 The Ratonal Canoncal Form Here p s a monc rreducble factor of the mnmum polynomal m T and s not necessarly of degree one Let F p denote the feld constructed earler n the course, consstng of all matrces

More information

Winter 2008 CS567 Stochastic Linear/Integer Programming Guest Lecturer: Xu, Huan

Winter 2008 CS567 Stochastic Linear/Integer Programming Guest Lecturer: Xu, Huan Wnter 2008 CS567 Stochastc Lnear/Integer Programmng Guest Lecturer: Xu, Huan Class 2: More Modelng Examples 1 Capacty Expanson Capacty expanson models optmal choces of the tmng and levels of nvestments

More information

A new construction of 3-separable matrices via an improved decoding of Macula s construction

A new construction of 3-separable matrices via an improved decoding of Macula s construction Dscrete Optmzaton 5 008 700 704 Contents lsts avalable at ScenceDrect Dscrete Optmzaton journal homepage: wwwelsevercom/locate/dsopt A new constructon of 3-separable matrces va an mproved decodng of Macula

More information

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016 U.C. Berkeley CS94: Spectral Methods and Expanders Handout 8 Luca Trevsan February 7, 06 Lecture 8: Spectral Algorthms Wrap-up In whch we talk about even more generalzatons of Cheeger s nequaltes, and

More information

find (x): given element x, return the canonical element of the set containing x;

find (x): given element x, return the canonical element of the set containing x; COS 43 Sprng, 009 Dsjont Set Unon Problem: Mantan a collecton of dsjont sets. Two operatons: fnd the set contanng a gven element; unte two sets nto one (destructvely). Approach: Canoncal element method:

More information

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS Avalable onlne at http://sck.org J. Math. Comput. Sc. 3 (3), No., 6-3 ISSN: 97-537 COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

More information

ECE559VV Project Report

ECE559VV Project Report ECE559VV Project Report (Supplementary Notes Loc Xuan Bu I. MAX SUM-RATE SCHEDULING: THE UPLINK CASE We have seen (n the presentaton that, for downlnk (broadcast channels, the strategy maxmzng the sum-rate

More information

Complete subgraphs in multipartite graphs

Complete subgraphs in multipartite graphs Complete subgraphs n multpartte graphs FLORIAN PFENDER Unverstät Rostock, Insttut für Mathematk D-18057 Rostock, Germany Floran.Pfender@un-rostock.de Abstract Turán s Theorem states that every graph G

More information

Grover s Algorithm + Quantum Zeno Effect + Vaidman

Grover s Algorithm + Quantum Zeno Effect + Vaidman Grover s Algorthm + Quantum Zeno Effect + Vadman CS 294-2 Bomb 10/12/04 Fall 2004 Lecture 11 Grover s algorthm Recall that Grover s algorthm for searchng over a space of sze wors as follows: consder the

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

Department of Statistics University of Toronto STA305H1S / 1004 HS Design and Analysis of Experiments Term Test - Winter Solution

Department of Statistics University of Toronto STA305H1S / 1004 HS Design and Analysis of Experiments Term Test - Winter Solution Department of Statstcs Unversty of Toronto STA35HS / HS Desgn and Analyss of Experments Term Test - Wnter - Soluton February, Last Name: Frst Name: Student Number: Instructons: Tme: hours. Ads: a non-programmable

More information

Remarks on the Properties of a Quasi-Fibonacci-like Polynomial Sequence

Remarks on the Properties of a Quasi-Fibonacci-like Polynomial Sequence Remarks on the Propertes of a Quas-Fbonacc-lke Polynomal Sequence Brce Merwne LIU Brooklyn Ilan Wenschelbaum Wesleyan Unversty Abstract Consder the Quas-Fbonacc-lke Polynomal Sequence gven by F 0 = 1,

More information

4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA

4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA 4 Analyss of Varance (ANOVA) 5 ANOVA 51 Introducton ANOVA ANOVA s a way to estmate and test the means of multple populatons We wll start wth one-way ANOVA If the populatons ncluded n the study are selected

More information

BOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS

BOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS BOUNDEDNESS OF THE IESZ TANSFOM WITH MATIX A WEIGHTS Introducton Let L = L ( n, be the functon space wth norm (ˆ f L = f(x C dx d < For a d d matrx valued functon W : wth W (x postve sem-defnte for all

More information

Lecture Space-Bounded Derandomization

Lecture Space-Bounded Derandomization Notes on Complexty Theory Last updated: October, 2008 Jonathan Katz Lecture Space-Bounded Derandomzaton 1 Space-Bounded Derandomzaton We now dscuss derandomzaton of space-bounded algorthms. Here non-trval

More information

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE Analytcal soluton s usually not possble when exctaton vares arbtrarly wth tme or f the system s nonlnear. Such problems can be solved by numercal tmesteppng

More information

On the correction of the h-index for career length

On the correction of the h-index for career length 1 On the correcton of the h-ndex for career length by L. Egghe Unverstet Hasselt (UHasselt), Campus Depenbeek, Agoralaan, B-3590 Depenbeek, Belgum 1 and Unverstet Antwerpen (UA), IBW, Stadscampus, Venusstraat

More information

Chapter Newton s Method

Chapter Newton s Method Chapter 9. Newton s Method After readng ths chapter, you should be able to:. Understand how Newton s method s dfferent from the Golden Secton Search method. Understand how Newton s method works 3. Solve

More information

Finding Primitive Roots Pseudo-Deterministically

Finding Primitive Roots Pseudo-Deterministically Electronc Colloquum on Computatonal Complexty, Report No 207 (205) Fndng Prmtve Roots Pseudo-Determnstcally Ofer Grossman December 22, 205 Abstract Pseudo-determnstc algorthms are randomzed search algorthms

More information

COS 521: Advanced Algorithms Game Theory and Linear Programming

COS 521: Advanced Algorithms Game Theory and Linear Programming COS 521: Advanced Algorthms Game Theory and Lnear Programmng Moses Charkar February 27, 2013 In these notes, we ntroduce some basc concepts n game theory and lnear programmng (LP). We show a connecton

More information

HMMT February 2016 February 20, 2016

HMMT February 2016 February 20, 2016 HMMT February 016 February 0, 016 Combnatorcs 1. For postve ntegers n, let S n be the set of ntegers x such that n dstnct lnes, no three concurrent, can dvde a plane nto x regons (for example, S = {3,

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16 STAT 39: MATHEMATICAL COMPUTATIONS I FALL 218 LECTURE 16 1 why teratve methods f we have a lnear system Ax = b where A s very, very large but s ether sparse or structured (eg, banded, Toepltz, banded plus

More information

Chapter 13: Multiple Regression

Chapter 13: Multiple Regression Chapter 13: Multple Regresson 13.1 Developng the multple-regresson Model The general model can be descrbed as: It smplfes for two ndependent varables: The sample ft parameter b 0, b 1, and b are used to

More information

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results.

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results. Neural Networks : Dervaton compled by Alvn Wan from Professor Jtendra Malk s lecture Ths type of computaton s called deep learnng and s the most popular method for many problems, such as computer vson

More information

First day August 1, Problems and Solutions

First day August 1, Problems and Solutions FOURTH INTERNATIONAL COMPETITION FOR UNIVERSITY STUDENTS IN MATHEMATICS July 30 August 4, 997, Plovdv, BULGARIA Frst day August, 997 Problems and Solutons Problem. Let {ε n } n= be a sequence of postve

More information

AN EXTENDIBLE APPROACH FOR ANALYSING FIXED PRIORITY HARD REAL-TIME TASKS

AN EXTENDIBLE APPROACH FOR ANALYSING FIXED PRIORITY HARD REAL-TIME TASKS AN EXENDIBLE APPROACH FOR ANALYSING FIXED PRIORIY HARD REAL-IME ASKS K. W. ndell 1 Department of Computer Scence, Unversty of York, England YO1 5DD ABSRAC As the real-tme computng ndustry moves away from

More information

Simultaneous Optimization of Berth Allocation, Quay Crane Assignment and Quay Crane Scheduling Problems in Container Terminals

Simultaneous Optimization of Berth Allocation, Quay Crane Assignment and Quay Crane Scheduling Problems in Container Terminals Smultaneous Optmzaton of Berth Allocaton, Quay Crane Assgnment and Quay Crane Schedulng Problems n Contaner Termnals Necat Aras, Yavuz Türkoğulları, Z. Caner Taşkın, Kuban Altınel Abstract In ths work,

More information

Report on Image warping

Report on Image warping Report on Image warpng Xuan Ne, Dec. 20, 2004 Ths document summarzed the algorthms of our mage warpng soluton for further study, and there s a detaled descrpton about the mplementaton of these algorthms.

More information

Edge Isoperimetric Inequalities

Edge Isoperimetric Inequalities November 7, 2005 Ross M. Rchardson Edge Isopermetrc Inequaltes 1 Four Questons Recall that n the last lecture we looked at the problem of sopermetrc nequaltes n the hypercube, Q n. Our noton of boundary

More information

and problem sheet 2

and problem sheet 2 -8 and 5-5 problem sheet Solutons to the followng seven exercses and optonal bonus problem are to be submtted through gradescope by :0PM on Wednesday th September 08. There are also some practce problems,

More information

Copyright 2017 by Taylor Enterprises, Inc., All Rights Reserved. Adjusted Control Limits for P Charts. Dr. Wayne A. Taylor

Copyright 2017 by Taylor Enterprises, Inc., All Rights Reserved. Adjusted Control Limits for P Charts. Dr. Wayne A. Taylor Taylor Enterprses, Inc. Control Lmts for P Charts Copyrght 2017 by Taylor Enterprses, Inc., All Rghts Reserved. Control Lmts for P Charts Dr. Wayne A. Taylor Abstract: P charts are used for count data

More information

Solutions to the 71st William Lowell Putnam Mathematical Competition Saturday, December 4, 2010

Solutions to the 71st William Lowell Putnam Mathematical Competition Saturday, December 4, 2010 Solutons to the 7st Wllam Lowell Putnam Mathematcal Competton Saturday, December 4, 2 Kran Kedlaya and Lenny Ng A The largest such k s n+ 2 n 2. For n even, ths value s acheved by the partton {,n},{2,n

More information

Parallel Real-Time Scheduling of DAGs

Parallel Real-Time Scheduling of DAGs Washngton Unversty n St. Lous Washngton Unversty Open Scholarshp All Computer Scence and Engneerng Research Computer Scence and Engneerng Report Number: WUCSE-013-5 013 Parallel Real-Tme Schedulng of DAGs

More information

O-line Temporary Tasks Assignment. Abstract. In this paper we consider the temporary tasks assignment

O-line Temporary Tasks Assignment. Abstract. In this paper we consider the temporary tasks assignment O-lne Temporary Tasks Assgnment Yoss Azar and Oded Regev Dept. of Computer Scence, Tel-Avv Unversty, Tel-Avv, 69978, Israel. azar@math.tau.ac.l??? Dept. of Computer Scence, Tel-Avv Unversty, Tel-Avv, 69978,

More information

Estimation: Part 2. Chapter GREG estimation

Estimation: Part 2. Chapter GREG estimation Chapter 9 Estmaton: Part 2 9. GREG estmaton In Chapter 8, we have seen that the regresson estmator s an effcent estmator when there s a lnear relatonshp between y and x. In ths chapter, we generalzed the

More information

Economics 101. Lecture 4 - Equilibrium and Efficiency

Economics 101. Lecture 4 - Equilibrium and Efficiency Economcs 0 Lecture 4 - Equlbrum and Effcency Intro As dscussed n the prevous lecture, we wll now move from an envronment where we looed at consumers mang decsons n solaton to analyzng economes full of

More information

arxiv: v1 [math.co] 1 Mar 2014

arxiv: v1 [math.co] 1 Mar 2014 Unon-ntersectng set systems Gyula O.H. Katona and Dánel T. Nagy March 4, 014 arxv:1403.0088v1 [math.co] 1 Mar 014 Abstract Three ntersecton theorems are proved. Frst, we determne the sze of the largest

More information

Text S1: Detailed proofs for The time scale of evolutionary innovation

Text S1: Detailed proofs for The time scale of evolutionary innovation Text S: Detaled proofs for The tme scale of evolutonary nnovaton Krshnendu Chatterjee Andreas Pavloganns Ben Adlam Martn A. Nowak. Overvew and Organzaton We wll present detaled proofs of all our results.

More information

A 2D Bounded Linear Program (H,c) 2D Linear Programming

A 2D Bounded Linear Program (H,c) 2D Linear Programming A 2D Bounded Lnear Program (H,c) h 3 v h 8 h 5 c h 4 h h 6 h 7 h 2 2D Lnear Programmng C s a polygonal regon, the ntersecton of n halfplanes. (H, c) s nfeasble, as C s empty. Feasble regon C s unbounded

More information

Temperature. Chapter Heat Engine

Temperature. Chapter Heat Engine Chapter 3 Temperature In prevous chapters of these notes we ntroduced the Prncple of Maxmum ntropy as a technque for estmatng probablty dstrbutons consstent wth constrants. In Chapter 9 we dscussed the

More information

Chapter - 2. Distribution System Power Flow Analysis

Chapter - 2. Distribution System Power Flow Analysis Chapter - 2 Dstrbuton System Power Flow Analyss CHAPTER - 2 Radal Dstrbuton System Load Flow 2.1 Introducton Load flow s an mportant tool [66] for analyzng electrcal power system network performance. Load

More information

The Geometry of Logit and Probit

The Geometry of Logit and Probit The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.

More information

Learning Theory: Lecture Notes

Learning Theory: Lecture Notes Learnng Theory: Lecture Notes Lecturer: Kamalka Chaudhur Scrbe: Qush Wang October 27, 2012 1 The Agnostc PAC Model Recall that one of the constrants of the PAC model s that the data dstrbuton has to be

More information

Convergence of random processes

Convergence of random processes DS-GA 12 Lecture notes 6 Fall 216 Convergence of random processes 1 Introducton In these notes we study convergence of dscrete random processes. Ths allows to characterze phenomena such as the law of large

More information

Psychology 282 Lecture #24 Outline Regression Diagnostics: Outliers

Psychology 282 Lecture #24 Outline Regression Diagnostics: Outliers Psychology 282 Lecture #24 Outlne Regresson Dagnostcs: Outlers In an earler lecture we studed the statstcal assumptons underlyng the regresson model, ncludng the followng ponts: Formal statement of assumptons.

More information

Min Cut, Fast Cut, Polynomial Identities

Min Cut, Fast Cut, Polynomial Identities Randomzed Algorthms, Summer 016 Mn Cut, Fast Cut, Polynomal Identtes Instructor: Thomas Kesselhem and Kurt Mehlhorn 1 Mn Cuts n Graphs Lecture (5 pages) Throughout ths secton, G = (V, E) s a mult-graph.

More information