A framework for achieving inter-application isolation in multiprogrammed, hard real-time environments
Giuseppe Lipari, John Carpenter, Sanjoy Baruah


Abstract. A framework for scheduling a number of different real-time applications on a single shared preemptable processor is proposed. This framework enforces complete isolation among the different applications, such that the behavior of each application is very similar to its behavior if it had been executing on a slower dedicated processor. A scheduling algorithm that implements this framework is presented and proved correct.

Keywords: Hard real-time systems; Preemptive scheduling; Earliest deadline first; Inter-application isolation.

1. Introduction

When several real-time applications are multiprogrammed on a single computer system, the underlying scheduling policy must provide each application with an appropriate quality of service. In most computing environments, this necessitates the enforcement of isolation between applications: the scheduler must ensure that an errant application cannot cause an unacceptable degradation in the performance of other, well-behaved, applications. An effective conceptual framework for modeling such systems is to associate a server with each application, and have a global scheduler resolve contention for shared resources among the servers. Each server is characterized by parameters which specify its performance expectations. A feasibility/admission-control test determines whether a set of servers can be scheduled on the available resources by the global scheduler such that each server receives its expected level of service (the level it would receive on a slower, dedicated system). If so, the global scheduler allocates resources at run-time in such a manner that each server's performance expectations are met. Each server is still responsible for scheduling the competing jobs generated by its application; the global scheduler only makes the sharing of resources among different servers transparent to any particular server.

(Supported in part by the National Science Foundation (Grant Nos. CCR , CCR , and CCR ).)
In this paper we present Algorithm PShED (Processor Sharing with Earliest Deadlines First), a global scheduling algorithm that provides guaranteed service and inter-application isolation in preemptive uniprocessor systems. Much previous research dedicated to achieving these goals (see, e.g., [13, 14, 16, 6, 17, 8, 4, 1, 10]) has assumed that the jobs to be handled by a particular server are processed in a first-come first-served (FCFS) manner amongst themselves. Because few real-time applications (in particular, hard real-time applications) satisfy such a restriction, the guarantees that can be made by these global schedulers are severely limited. The scheduling framework we propose does not correlate the arrival time of a job with its deadline; service guarantees can then be made to an application regardless of how its individual server schedules its jobs.

System model. In this paper, we consider a system of N applications A_1, A_2, ..., A_N, each with a corresponding server S_1, S_2, ..., S_N. Each server S_i is characterized by a single parameter: a processor share U_i, denoting the fraction of the total processor capacity devoted to application A_i. We restrict our attention to systems in which all servers execute on a single shared processor. Without loss of generality, this processor is assumed to have unit capacity, requiring that the sum of the processor shares of all the servers be no more than one (i.e., Σ_{i=1}^{N} U_i ≤ 1). Algorithm PShED gives application A_i the appearance that its jobs are executing on a dedicated "virtual" processor of capacity U_i. If this application has hard deadlines which would all be met when scheduled on such a dedicated slower processor, then Algorithm PShED will also guarantee to meet all deadlines of this application, regardless of the behaviors of other applications being scheduled with it.

The goal of having each application behave as though executing on a dedicated processor of capacity U_i can be achieved trivially in a processor-sharing schedule, obtained by partitioning the timeline into infinitesimally small intervals and assigning a share U_i of the processor during each such interval to server S_i. This strategy is not useful in practice because job preemptions require execution time. While preemptions are allowed in our model, their costs are accounted for in such a way that Algorithm PShED produces a valid schedule that is practical at run-time (see Section 5).

Significance of this research. The framework we present is designed to permit several different real-time (and non-real-time) applications to coexist on a single processor, while isolating each application from the others. Hence each application can be designed and implemented in isolation, with no assumptions made about the run-time behaviors of other applications that may execute concurrently with it. Our long-term research goal is to extend this framework to permit the sharing of resources other than the processor, and to incorporate resource-sharing strategies that permit critical sections and nonpreemptable resources to be shared in a transparent and guaranteed manner, without each application having to make too many assumptions about the behaviors of the other applications. We expect such a framework to be useful in general-purpose computing environments such as desktop machines, in which real-time (including soft and firm real-time) and non-real-time applications coexist and contend for processors and other resources. More importantly, we are interested in providing a general framework for the development of real-time applications that permits each application to be developed in isolation, with its resource requirements completely characterized by a few parameters (in the research reported here, which does not consider the sharing of resources other than the processor, this would be a single parameter: the processor share expected by the application).
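Because each application is characterized by the single parameter U_i, admitting a new application to a node reduces to checking that the processor shares, including the new one, still sum to at most the unit capacity. A minimal Python sketch of this check (the function name is ours, not the paper's):

```python
# Minimal sketch of the admission test implied by the system model:
# a new server fits only if total processor share stays within capacity.

def can_admit(current_shares, new_share, capacity=1.0):
    """Return True if a server with processor share `new_share` fits
    alongside servers whose shares are listed in `current_shares`."""
    return sum(current_shares) + new_share <= capacity

# A node already running servers with shares 0.5 and 0.25 can accept a
# transaction server needing share 0.25, but not one needing 0.3.
```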
We envision dynamic, distributed environments in which such applications may migrate between different processing nodes at run-time for reasons of fault-tolerance, load-balancing, and efficiency, with simple admission tests (in the current paper's framework, "is there sufficient processor capacity available to accommodate the new application?") that determine whether an application is permitted to execute on a particular processing node. Note that we are not expecting each application to be periodic or sporadic, or indeed to even execute "forever." It is quite possible, for instance, that an individual application represents a one-time transaction that has hard real-time constraints, and that can be modeled as a finite number of real-time jobs. Provided such a transaction can be shown to successfully meet all deadlines on a dedicated processor of a particular capacity U, we may model its resource requirements in our framework by associating it with a server of processor share U. Whenever this transaction is to be executed at run-time, this approach would require us to find a processor with sufficient spare capacity to accommodate this server. Upon finding such a processor, a server of processor share U is added to the set of servers being served on this processor for the duration of the transaction, and Algorithm PShED's performance guarantee ensures that the transaction will indeed meet its deadline.

Organization of this report. The remainder of this report is organized as follows. Section 2 defines Algorithm PShED and formally proves that the desired properties of guaranteed service and isolation among applications are achieved. For simplicity of argument, Section 2 assumes Algorithm PShED can correctly compute a rigorously defined notion of a server's budget. Section 3 then describes how Algorithm PShED explicitly computes these budgets, while Section 4 offers a formal proof that these computations give the budgets as defined in Section 2. Section 5 discusses implementation issues concerning context-switch costs, and optimizations when servers idle.
Section 6 briefly describes other research on providing guaranteed service to several applications that share a processing platform, while Section 7 concludes with a brief summary of the major points we have attempted to make here.

2. Algorithm PShED: Overview

Algorithm PShED must enforce inter-application isolation: it must not permit any application A_i to consume more than a fraction U_i of the shared processor if this would impact the performance of other applications. In order to do so, Algorithm PShED computes budgets for each server; these budgets keep track of the execution history of each server S_i. To arbitrate among the various servers and determine which server should have access to the processor at any instant, Algorithm PShED associates a server deadline D_i with each server S_i. We discuss both the server budgets and the server deadlines in more detail below.

2.1 Server deadline

Algorithm PShED associates a server deadline D_i with S_i. Informally speaking, the value that server S_i assigns D_i at any instant is a measure of the urgency with which S_i desires the processor: the smaller the value assigned to D_i, the greater the urgency (if S_i has no active jobs awaiting execution and hence does not desire the processor, it should set D_i equal to ∞). From the perspective of Algorithm PShED, the current value of D_i is a measure of the priority that Algorithm PShED accords server S_i at that instant. Algorithm PShED will be performing earliest-deadline-first (EDF) scheduling among all eligible servers based on their D_i values.

Algorithm PShED holds each server S_i responsible for updating the value of D_i as necessary. For instance, if S_i schedules its associated application's jobs in EDF order, then the value of D_i at each instant t_o should be set equal to the deadline parameter of the earliest-deadline job of server S_i that has not completed execution by time t_o. If there are no such jobs, then server S_i sets D_i equal to ∞. Since S_i is scheduling its jobs according to the EDF discipline, this would imply that D_i be updated in one of three circumstances. (i) The job with deadline D_i completes execution. (ii) A job with an earlier deadline arrives at server S_i. (iii) The job with deadline D_i has not completed, but the budget computations of Algorithm PShED indicate that S_i has used up its processor share up to instant D_i (we explain below in Section 3 how this is done). Case (iii) only occurs when there is an error within server S_i's application. What happens in this case depends upon the semantics of this application; options include postponing the deadline, or aborting this job. In either case, D_i should be set equal to the deadline of the new earliest-deadline active job of S_i.

Although Algorithm PShED uses EDF, an individual server S_i may schedule its application's jobs with any algorithm that produces reasonably consistent schedules.
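For an EDF-local server, the update rules above all reduce to keeping D_i equal to the earliest deadline among its incomplete jobs, with ∞ as the idle value. A small Python sketch (the representation of jobs is ours, for illustration only):

```python
import math

# Sketch of how an EDF-local server maintains its server deadline D_i:
# the earliest deadline among jobs with remaining execution, else infinity.

def server_deadline(jobs):
    """`jobs` is a list of (deadline, remaining_execution) pairs; the
    server deadline is the smallest deadline with remaining work."""
    pending = [d for (d, rem) in jobs if rem > 0]
    return min(pending) if pending else math.inf
```

When a job completes (case i) or an earlier-deadline job arrives (case ii), re-evaluating this function yields the new D_i.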
For the purposes of this paper we require that each server use a fully preemptive local scheduling algorithm that totally orders all jobs by priority (i.e., for any two jobs the scheduling algorithm assigns distinct priorities, based upon the jobs' parameters: arrival times, execution requirements, and deadlines), and never executes a job of lower priority while a higher-priority job is active. Examples of appropriate algorithms are EDF and any preemptive fixed-priority scheme (for the recurring task model). Examples of algorithms we do not consider in this paper include Least Laxity First [15] and non-preemptive schemes. If all deadlines of S_i are met while using such an algorithm on a dedicated processor of capacity U_i, then Algorithm PShED will guarantee to meet all deadlines of S_i. No matter which scheduling discipline server S_i is using, S_i should always set its D_i value equal to the earliest deadline of all ready jobs which have not completed execution. If S_i's internal scheduler is not EDF, then at some instant t_0, S_i may choose to schedule some job other than the one whose deadline is the current D_i value.

Algorithm PShED must ensure that an errant server S_i which fails to update its D_i parameter accurately does not impact the performance of other servers in the system. The performance of the errant server S_i itself might degrade, however. As we will see below, this is achieved by the server-budget computations performed by Algorithm PShED.

2.2 Server budget

At each instant t_o, and for all values d ≥ t_o that D_i has taken thus far, Algorithm PShED computes a budget bdgt_i(d, t_o) which specifies exactly how much more execution server S_i is to be permitted with D_i set to values ≤ d. This value is determined by the "tightest" constraint of the following kind. Let t_s denote a time instant ≤ t_o such that the value of D_i just prior to t_s is > d, and D_i is assigned a value ≤ d at time instant t_s. Let exec_i(t_s, d, t_o) denote the amount of execution that Algorithm PShED has permitted server S_i with D_i ≤ d, during the interval [t_s, t_o).
Clearly, (U_i (d − t_s) − exec_i(t_s, d, t_o)) is an upper bound on the remaining amount of execution that S_i is permitted with D_i set to values ≤ d. If S_i were permitted to execute for more than this amount with D_i ≤ d, then S_i would be executing for more than its permitted fraction of the processor over the interval [t_s, d).

Some further definitions. Let Φ_i(d, t_o) denote the set of all time instants t_s ≤ t_o such that the value of D_i just prior to t_s is > d, and D_i is assigned a value ≤ d at time instant t_s. Let slack_i(d, t_o) be defined as follows:

    slack_i(d, t_o)  =def  min_{t_s ∈ Φ_i(d, t_o)} { U_i (d − t_s) − exec_i(t_s, d, t_o) }        (1)

[Figure 1. The scenario described in Example 1.]

Thus slack_i(d, t_o) is an upper bound on how much more S_i can safely execute at or after time t_o with deadline d without using more than its share of the processor. As we move toward a formal definition of bdgt_i(d, t_o), we see that it must have the property that

    bdgt_i(d, t_o) ≤ slack_i(d, t_o).        (2)

Since at any time t_o Algorithm PShED will permit server S_i to contend for the processor with D_i set equal to d only if bdgt_i(d, t_o) > 0, the following lemma clearly holds for any definition of bdgt_i(d, t_o) satisfying Condition 2.

Lemma 1. In systems scheduled using Algorithm PShED,

    (∀i)(∀d)(∀t_o)  slack_i(d, t_o) ≥ 0.        (3)

By requiring that bdgt_i(d, t_o) ≤ slack_i(d, t_o), Algorithm PShED prohibits S_i from executing for more than its fraction U_i of the shared processor, and thus isolates the remaining applications from A_i. In addition to isolating other applications from A_i, we would also like to guarantee a certain level of performance for application A_i. That is, we would like to permit S_i the maximum amount of execution possible without interfering with the executions of other applications. Equivalently, we would like to set bdgt_i(d, t_o) to be as large as possible, while still ensuring isolation. We may be tempted to try setting bdgt_i(d, t_o) equal to slack_i(d, t_o), i.e., the "≤" of Condition 2 could be replaced by an equality. The following example, however, illustrates that this would be incorrect.

Example 1. Consider the server S_i with U_i = 0.5. Suppose that D_i is initially ∞ and that S_i sets D_i to value 20 at time instant zero. Algorithm PShED schedules S_i over the interval [0, 6). At t = 6, S_i sets D_i to 16, and S_i is scheduled over the interval [7, 9). From the definition, we see that Φ_i(16, 10) = {6} while Φ_i(20, 10) = {0}. Furthermore, exec_i(6, 16, 10) = 2 while exec_i(0, 20, 10) = 8. Therefore, slack_i(16, 10) = (½(16 − 6) − 2) = 3, and slack_i(20, 10) = (½(20 − 0) − 8) = 2.

At time instant t_o = 10, Algorithm PShED would compute bdgt_i(16, 10) = 2, even though slack_i(16, 10) = ½(16 − 6) − 2 = 3. Notice that this is what one would intuitively expect.
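Equation 1 can be transcribed almost literally into code. The sketch below (function and helper names are ours) replays Example 1 and recovers the two slack values computed above:

```python
# Direct transcription of Equation 1: slack_i(d, t_o) minimizes, over the
# instants t_s in Phi_i(d, t_o), the unused share U_i*(d - t_s) minus the
# execution exec_i(t_s, d, t_o) already granted with deadline <= d.

def slack(U, phi, exec_amount, d, t_o):
    """phi(d, t_o) -> instants t_s in Phi_i(d, t_o);
    exec_amount(t_s, d, t_o) -> execution granted with D_i <= d over [t_s, t_o)."""
    return min(U * (d - t_s) - exec_amount(t_s, d, t_o) for t_s in phi(d, t_o))

# Replaying Example 1: U_i = 0.5, D_i set to 20 at time 0 and to 16 at
# time 6, with S_i executed over [0, 6) and [7, 9).
U = 0.5
phi = lambda d, t_o: {16: [6], 20: [0]}[d]
exec_amount = lambda t_s, d, t_o: {(6, 16): 2, (0, 20): 8}[(t_s, d)]
```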
If S_i were executing on a dedicated processor half as fast as the shared one, it could have executed for ten time units over the interval [0, 20). Since Algorithm PShED has permitted it to execute for eight time units with deadlines ≤ 20, it can execute for just two more time units with deadline 20, and consequently with deadline 16. As Example 1 illustrates, the budget bdgt_i(d, t_o) depends not just upon the amount of processor consumed by S_i with deadline set ≤ d, but is also bounded from above by the amount of processor consumed by S_i with deadline set > d. Thus the budget values computed by Algorithm PShED are as follows:

    bdgt_i(d, t_o)  =def  min_{d' ≥ d} { slack_i(d', t_o) }        (4)

Let d̂ be the smallest value > d which D_i has been assigned prior to t_o. Equation 5 below immediately follows:

    bdgt_i(d, t_o) = min { slack_i(d, t_o), bdgt_i(d̂, t_o) }        (5)

Consider d_1, d_2 with d_1 < d_2. While we cannot draw conclusions regarding the relative values of slack_i(d_1, t_o) and slack_i(d_2, t_o), the following property immediately follows from repeated applications of Condition 5.

Lemma 2. For all i and all t_o,

    d_1 < d_2  ⇒  bdgt_i(d_1, t_o) ≤ bdgt_i(d_2, t_o).

Algorithm PShED can now be described. At each instant t_o, Algorithm PShED assigns the processor to a server S_i satisfying the following two conditions:

1. bdgt_i(D_i, t_o) > 0;
2. D_i = min { D_j | bdgt_j(D_j, t_o) > 0 }.

That is, the processor is assigned to a server that has the earliest server deadline of all the servers which have a positive budget for their current server deadlines. (We note that Algorithm PShED does not explicitly compute bdgt_i(D_i, t_o) for all i from the definition in Equation 4. Rather than computing the slack_i, Φ_i, and exec_i quantities at each instant, Algorithm PShED maintains data structures that allow the efficient computation of these quantities as needed. These data structures are defined and proven appropriate in Sections 3 and 4.)
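Equation 4 and the two dispatch conditions can likewise be sketched. Using the slack values from Example 1, taking the minimum over all deadlines d' ≥ d reproduces the cap bdgt_i(16, 10) = 2 discussed above (all names here are ours):

```python
# Equation 4: the budget at deadline d is the minimum slack over every
# deadline d' >= d that D_i has taken so far.

def budget(slacks, d):
    """`slacks` maps each deadline value D_i has taken to slack_i(d', t_o)."""
    return min(s for dp, s in slacks.items() if dp >= d)

slacks_at_10 = {16: 3.0, 20: 2.0}   # slack values from Example 1

# Dispatch rule: among servers with a positive budget for their current
# server deadline, pick the one with the earliest deadline (EDF).
def pick_server(servers):
    """`servers` maps name -> (D_i, budget); returns the chosen name."""
    eligible = [(D, name) for name, (D, b) in servers.items() if b > 0]
    return min(eligible)[1] if eligible else None
```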

2.3 Correctness of Algorithm PShED

The hard real-time guarantee property of Algorithm PShED is stated in the following theorem.

Theorem 1. Consider a system of N servers S_1, S_2, ..., S_N with processor shares U_1, U_2, ..., U_N respectively, such that (U_1 + U_2 + ... + U_N) ≤ 1. If all jobs of S_i make their deadlines when scheduled on a dedicated slower processor of computing capacity U_i, then all jobs of S_i will make their deadlines when scheduled using Algorithm PShED.

Proof: We argue the contrapositive; i.e., if (a job of) server S_i misses a deadline at time instant d using Algorithm PShED, then (some job of) S_i would have missed a deadline at or prior to time instant d if S_i were scheduled on a dedicated slower processor of computing capacity U_i. Assume that server S_i misses a deadline, and let d denote the first instant at which this happens. This can happen in one of two ways. (i) The budget bdgt_i(d, t_o) becomes equal to zero at some instant t_0 at or prior to d, preventing S_i from contending for the processor by setting its deadline equal to d. (ii) The budget bdgt_i(d, d) > 0, in which case the job missed its deadline at d despite S_i being able to contend for the processor with D_i set equal to d.

§1. Let us consider the first possibility: bdgt_i(d, t_o) = 0. By Equation 4, this implies that there exists d' ≥ d such that slack_i(d', t_o) = 0. Recall (Equation 1) that

    slack_i(d', t_o)  =def  min_{t_s ∈ Φ_i(d', t_o)} { U_i (d' − t_s) − exec_i(t_s, d', t_o) }

and let t*_s denote the value of t_s ∈ Φ_i(d', t_o) corresponding to the minimum of the right-hand side of this expression. Over the interval [t*_s, d'), server S_i has already executed as much as it would have on a slower dedicated processor of computing capacity U_i times the computing capacity of the shared processor, while jobs with deadlines ≤ d were ready. Application A_i therefore must miss a deadline during this interval on the slower processor as well.

§2. Consider now the case where bdgt_i(d, d) > 0. Recall that d is the earliest instant at which S_i misses a deadline. Therefore at instant d, D_i is set equal to d.
Recall that Algorithm PShED schedules according to the deadline parameters D_k of the servers S_k, k = 1, 2, ..., N. Let t̂ < d be the last time instant prior to d when the processor was either idle, or was allocated to a server S_k with D_k > d. At that instant, all the D_k's must have been > d (or blocked from execution because their budget was exhausted). Over the interval (t̂, d], the processor has been assigned exclusively to servers with deadline ≤ d. For each k = 1, 2, ..., N, let b_k denote the earliest instant ≥ t̂ at which D_k becomes ≤ d. This implies b_k ∈ Φ_k(d, d), since b_k is a time instant prior to d at which D_k's value was changed from being > d to being ≤ d. By Lemma 1, slack_k(d, d) ≥ 0 for each server S_k. From the definition of slack (Equation 1), it follows that

    U_k (d − b_k) − exec_k(b_k, d, d) ≥ 0.

Equivalently exec_k(b_k, d, d) ≤ U_k (d − b_k), which implies (since b_k ≥ t̂)

    exec_k(b_k, d, d) ≤ U_k (d − t̂).        (6)

By our choice of t̂, the processor is never idled over the interval (t̂, d]. Therefore exec_i(b_i, d, d), the amount of execution that Algorithm PShED has permitted server S_i during the interval [b_i, d) with D_i ≤ d, is given by

    exec_i(b_i, d, d) = (d − t̂) − Σ_{k=1, k≠i}^{N} exec_k(b_k, d, d)
                      ≥ (d − t̂) − Σ_{k=1, k≠i}^{N} U_k (d − t̂)
                      = (d − t̂) (1 − Σ_{k=1, k≠i}^{N} U_k)
                      ≥ (d − t̂) U_i.        (7)

Since b_i ∈ Φ_i(d, d), it follows from the definition of slack (Equation 1) that

    slack_i(d, d) ≤ U_i (d − b_i) − exec_i(b_i, d, d)
    ⇒ slack_i(d, d) ≤ U_i (d − t̂) − exec_i(b_i, d, d)
    ⇒ slack_i(d, d) ≤ 0.

By the definition of bdgt (Equation 4), bdgt_i(d, d) ≤ slack_i(d, d); thus, the above inequality contradicts the assumption that bdgt_i(d, d) > 0.

3. Algorithm PShED: computing the budgets

As we have seen in Section 2.2, Algorithm PShED makes scheduling decisions based on the budgets corresponding to the current server deadlines. The crucial

factor determining the run-time complexity of Algorithm PShED is the efficiency with which these budgets are computed. Algorithm PShED maintains a residual list R_i associated with each server S_i, from which budgets bdgt_i(d, t_o) are easily and efficiently computed. A residual list is a set of 3-tuples (d, ρ, β), where d and ρ are nonnegative real numbers and β is one of {val, bnd}. At any instant t_o, R_i contains a 3-tuple (d, ρ, β) for each value d ≥ t_o that D_i has been assigned thus far, which is interpreted as follows:

    if β = val then bdgt_i(d, t_o) = ρ;
    if β = bnd then bdgt_i(d, t_o) = min{ ρ, (d − t_o) U_i }.

That is, ρ is either the value of bdgt_i(d, t_o), or an upper bound on this value. Algorithm PShED maintains its list of residuals R_i in sorted order, sorted by the first coordinate of each 3-tuple. For the remainder of this section, we will let ⟨(d_1, ρ_1, β_1), (d_2, ρ_2, β_2), ..., (d_ℓ, ρ_ℓ, β_ℓ), ...⟩ denote the sorted residual list R_i at the current time instant, with d_1 < d_2 < ⋯. The list of residuals R_i is updated when jobs of S_i are executed, or when server S_i changes the value of its server deadline D_i.

3.1 When D_i's value is changed

Suppose that server S_i changes the value of its server deadline at instant t_o. Let D_old denote the value of D_i prior to the change, and D_new the value afterwards. We assume that there is a 3-tuple (t_o, 0, val) in the first position of R_i, and a 3-tuple (∞, ∞, val) in the last position of R_i. This leaves two cases.

D_new already in R_i. Suppose that there is a 3-tuple (d_j = D_new, ρ_j, β_j) already in R_i, at the j'th location.

    If (β_j = val), then no change is necessary;
    else (β_j = bnd), in which case the assignment β_j ← val is performed and
        ρ_j ← min{ ρ_j, (d_j − t_o) U_i }.        (8)

D_new not in R_i. Suppose that the 3-tuple (d_j = D_new, ρ_j, β_j) would occupy the j'th position in the sorted list R_i. Then β_j ← val, and ρ_j is computed as follows.

    If (β_{j−1} = val), then
        ρ_j ← min{ ρ_{j−1} + (d_j − d_{j−1}) U_i, ρ_{j+1} }        (9)
    else (β_{j−1} = bnd), and
        ρ_j ← min{ ρ_{j−1} + (d_j − d_{j−1}) U_i, (d_j − t_o) U_i, ρ_{j+1} }.        (10)

Stack of deadlines. Algorithm PShED maintains a stack of deadlines SoD_i with each server S_i. At each instant, the top of SoD_i contains the current value of D_i.
SoD_i also contains some past values of D_i, in increasing order. When D_i changes value, SoD_i is modified as follows:

    if D_new < D_old, then D_new is pushed onto SoD_i;
    else (i.e., D_new > D_old), values from the top of SoD_i are repeatedly popped, until the value on top of SoD_i is ≥ D_new. The β field of the 3-tuple corresponding to each popped deadline is set equal to bnd (i.e., for each deadline d̂ popped, the 3-tuple (d_ℓ = d̂, ρ_ℓ, β_ℓ) is identified and β_ℓ ← bnd). If the value now on top of SoD_i is not equal to D_new, then D_new is pushed onto SoD_i.

The following lemma follows from the manner in which the stack of deadlines SoD_i is used by Algorithm PShED.

Lemma 3. At any instant t_o, if there is a 3-tuple (d_j, ρ_j, β_j) in R_i such that d_j < D_i at t_o, then β_j = bnd. That is, all ρ_j's stored in R_i at time instant t_o, for deadlines < the current server deadline, are bounds rather than exact values. (Note that this lemma is not asserting the converse; i.e., it is quite possible that β_j = bnd even if d_j > D_i.)

Proof of Lemma 3: If (d_j, ρ_j, β_j) exists in R_i at instant t_0, and d_j < D_i, then at some moment prior to t_0, S_i must have set its D_i value to be d_j (otherwise the residual (d_j, ρ_j, β_j) would not be in R_i). Call the most recent such moment (where D_i was set to d_j) t_{−1}. At t_{−1}, (d_j, ρ_j, β_j) must have been on the stack, because it would have been added to the stack when D_i was assigned the value d_j. Since at time t_0 > t_{−1} we know D_i > d_j, at some moment over (t_{−1}, t_0] the value of D_i was increased from being equal to d_j to being greater than d_j. At that moment, (d_j, ρ_j, β_j) would have been popped off of the stack and β_j would be set to bnd. Since t_{−1} was the last instant at which D_i was set to d_j, the value of β_j in the residual has been unaffected since, and thus at t_0, β_j = bnd.
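The push/pop discipline above can be sketched in a few lines. The data layout (a Python list as the stack, with the top at the end, and a dictionary of β tags) is ours, for illustration:

```python
import math

# Sketch of the stack-of-deadlines bookkeeping: on a deadline change, a
# smaller D_new is pushed; a larger D_new pops entries, and each popped
# deadline's residual tag is downgraded from an exact value to a bound.

def change_deadline(sod, tags, d_new):
    """`sod` is the stack of deadline values (top = last element);
    `tags` maps a deadline to its 'val' or 'bnd' tag in the residual list."""
    d_old = sod[-1] if sod else math.inf
    if d_new < d_old:
        sod.append(d_new)
    else:
        while sod and sod[-1] < d_new:
            tags[sod.pop()] = 'bnd'     # popped deadlines become bounds
        if not sod or sod[-1] != d_new:
            sod.append(d_new)
```

Replaying the start of Example 2: after D_i = 20 is changed to 40, the entry for 20 is popped and its residual becomes a bound.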

3.2 Upon execution

Recall that Algorithm PShED performs earliest-deadline-first scheduling among the eligible servers, with each server's deadline parameter D_i playing the role of its deadline. Algorithm PShED considers a server eligible only if the server's budget associated with its current deadline D_i is not exhausted. Algorithm PShED monitors the budget of each server S_i via the residual list R_i associated with server S_i. More formally, at any instant t_0, each server S_i will have a residual of the form (D_i, ρ, val) in the residual list R_i maintained by Algorithm PShED. Algorithm PShED then will assign the processor to server S_i only if

1. ρ > 0 in the residual (D_i, ρ, val), and
2. for all other servers S_j, either D_j ≥ D_i or ρ_j = 0 in the residual (D_j, ρ_j, val).

Suppose that D_i has not changed during the interval [t_o − Δ, t_o), and that Algorithm PShED has assigned the processor to S_i during this entire interval. Then bdgt_i(d, t_o) is equal to (bdgt_i(d, t_o − Δ) − Δ) for all d ≥ D_i, and consequently ρ_ℓ should be decremented by Δ for all d_ℓ ≥ D_i. Additionally, Algorithm PShED maintains R_i such that Lemma 2 is always satisfied: if decrementing the residual corresponding to D_i causes this residual to become smaller than the residual corresponding to a deadline < D_i, then the residual corresponding to the deadline < D_i is also decremented to conform to Lemma 2 (such decrementing of a residual corresponding to a deadline d_j < D_i occurs when the value of bdgt_i(d_j, t_o) is equal to slack_i(d', t_o) for some d' ≥ D_i). Thus if the residual list R_i prior to the execution of S_i for Δ time units is

    ⟨(d_1, ρ_1, β_1), (d_2, ρ_2, β_2), ..., (d_ℓ, ρ_ℓ, β_ℓ), ...⟩

then the residual list after the execution is

    ⟨(d_1, ρ'_1, β_1), (d_2, ρ'_2, β_2), ..., (d_ℓ, ρ'_ℓ, β_ℓ), ...⟩

where

    ρ'_ℓ = ρ_ℓ − Δ                if d_ℓ ≥ D_i
    ρ'_ℓ = min(ρ_ℓ, ρ'_{ℓ+1})     if d_ℓ < D_i.        (11)

4. Proof of correctness

Theorem 1 shows that Algorithm PShED meets its performance guarantees, assuming that it can accurately compute the budgets. To complete the proof of correctness of Algorithm PShED, we now show that the method described in Section 3 accurately computes budgets.
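The Equation 11 update of Section 3.2 amounts to a decrement pass over residuals at or beyond D_i, followed by a right-to-left pass that restores Lemma 2's monotonicity. A Python sketch (the list-of-lists representation is ours):

```python
# Sketch of the Equation 11 update: after the server runs for `delta` time
# units with current deadline D, every residual with d >= D loses delta,
# and residuals at smaller deadlines are clipped so that budgets remain
# nondecreasing in the deadline (Lemma 2).

def charge_execution(residuals, D, delta):
    """`residuals` is a list of [d, rho, tag], sorted by d ascending."""
    for entry in residuals:
        if entry[0] >= D:
            entry[1] -= delta
    # right-to-left: rho'_l = min(rho_l, rho'_{l+1}) for d_l < D
    for k in range(len(residuals) - 2, -1, -1):
        if residuals[k][0] < D:
            residuals[k][1] = min(residuals[k][1], residuals[k + 1][1])
```

For instance, with residuals at deadlines 16 and 20 holding 3 and 2 units, one unit of execution at deadline 20 also drags the deadline-16 residual down to 1, mirroring the coupling seen in Example 1.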
Theorem 2. At any instant t_o, the values of the (d_j, ρ_j, β_j) tuples stored in R_i satisfy the property that

    bdgt_i(d_j, t_o) = ρ_j                                if β_j = val
    bdgt_i(d_j, t_o) = min{ ρ_j, (d_j − t_o) U_i }        if β_j = bnd

Proof: The events of significance during a run of Algorithm PShED are: (i) Algorithm PShED changes its scheduling decision, i.e., either the processor transits from an idle state to executing some server, or it completes executing a server; and (ii) some server changes the value of its server deadline. The proof is by induction on the instants at which these events of significance occur, in the order in which these events occur. If two servers change their D_i's simultaneously, they are considered serially in arbitrary order.

Base case: Each R_i is initially empty. The first 3-tuple is inserted into R_i at the instant that D_i is first assigned a value d < ∞. Say this occurs at time instant t_o; the 3-tuple (d, (d − t_o) U_i, val) is inserted into R_i. By Equation 4, clearly bdgt_i(d, t_o) is exactly (d − t_o) U_i.

Induction step: Our inductive hypothesis is that each 3-tuple in R_i was "correct" at time instant t̃ when an event of significance occurred, i.e., each 3-tuple in R_i has the interpretation stated in the body of Theorem 2. Suppose that the next event of significance occurs at time instant t_o > t̃.

§1: If server S_i executed over the interval [t̃, t_o = t̃ + Δ), then the update of R_i at time instant t_o is correct, since

    for each d_j ≥ D_i, ρ_j is decremented by the amount Δ, reflecting the fact that the remaining (bound on) budget corresponding to d_j has been decremented by the amount executed;
    for each d_j < D_i, ρ_j is decremented as necessary to maintain monotonicity, in accordance with Lemma 2.

§2: If some other server S_j executed over the interval [t̃, t_o = t̃ + Δ), then the optimality of EDF [12, 5] ensures that R_i remains correct at time instant t_o, i.e., S_i will get to execute for ρ_j time units prior to d_j.

§3: Suppose that the event of significance that occurs at time instant t_o is that some server S_i changes the value of its server deadline from D_old to D_new. In that case, the value of the residual ρ_j corresponding to server deadline d_j = D_new is computed according to one of Equations 8, 9, or 10, and β_j is set equal to val (thus indicating that bdgt_i(d_j, t_o) is exactly equal to the computed value of ρ_j). To show that this computation is correct, i.e., that bdgt_i(d_j, t_o) is indeed exactly equal to ρ_j as computed, we must show that the right-hand sides of Equations 8, 9, and 10 do indeed compute bdgt_i(d_j, t_o). To do so, we assume the inductive hypothesis, i.e., that all 3-tuples stored in R_i are indeed correct prior to this update, and then consider each of the three equations separately, proving that the value of ρ_j computed in each case equals bdgt_i(d_j, t_o).

Equation 8: Since β_j = bnd, the value of bdgt_i(d_j, t_o) is, according to the inductive hypothesis, equal to min{ ρ_j, (d_j − t_o) U_i }.

Equation 9: Since β_{j−1} = val in this case, bdgt_i(d_{j−1}, t_o) = ρ_{j−1} by the inductive hypothesis. By Equation 5 this implies that ρ_{j−1} = min{ slack_i(d_{j−1}, t_o), bdgt_i(d_{j+1}, t_o) }. From the definition of slack and the fact that D_i has thus far taken on no values between d_{j−1} and d_j, it follows that slack_i(d_j, t_o) = slack_i(d_{j−1}, t_o) + (d_j − d_{j−1}) U_i; it therefore follows that bdgt_i(d_j, t_o) = min( ρ_{j−1} + (d_j − d_{j−1}) U_i, ρ_{j+1} ).

Equation 10: Since β_{j−1} = bnd in this case, bdgt_i(d_{j−1}, t_o) = min{ ρ_{j−1}, (d_{j−1} − t_o) U_i } by the inductive hypothesis. By the same argument as above, slack_i(d_j, t_o) = slack_i(d_{j−1}, t_o) + (d_j − d_{j−1}) U_i; therefore, bdgt_i(d_j, t_o) = min( min( ρ_{j−1}, (d_{j−1} − t_o) U_i ) + (d_j − d_{j−1}) U_i, ρ_{j+1} ). By algebraic simplification, the right-hand side of this expression reduces to min( ρ_{j−1} + (d_j − d_{j−1}) U_i, (d_{j−1} − t_o) U_i + (d_j − d_{j−1}) U_i, ρ_{j+1} ), which equals min( ρ_{j−1} + (d_j − d_{j−1}) U_i, (d_j − t_o) U_i, ρ_{j+1} ).

Suppose D_new < D_old.
It follows from the optimality of EDF scheduling [12, 5] that if the residual (the "ρ") corresponding to D_old was exactly equal to the budget (i.e., the corresponding "β" equals val), then this residual remains exactly equal to the budget even after the server-deadline change. If D_new > D_old, on the other hand, then it does not follow from the optimality of EDF that the residuals ρ_j in R_i corresponding to deadlines d_j that lie between D_old and D_new, which were exactly equal to bdgt_i(d_j, t̃) prior to the deadline change, remain exactly equal to budget guarantees bdgt_i(d_j, t_o) at instant t_o. These residuals become upper bounds on the available budgets. This change in the semantics of ρ_j, from perhaps being an exact value to being an upper bound, is recorded by Algorithm PShED when deadlines popped off the stack of deadlines SoD_i have their corresponding β-values set to bnd. This is illustrated by the following example.

[Figure 2. Scenarios (a) and (b) described in Example 2.]

Example 2. Consider first server S_i with its processor-share parameter U_i equal to 0.5. Suppose that D_i is initially ∞ and that S_i sets D_i to value 20 at time instant zero. The 3-tuple (d_j, ρ_j, β_j) = (20, 10, val) would be inserted in R_i. Suppose that S_i changes D_i to value 40 at time instant 5, prior to getting scheduled at all by Algorithm PShED. Algorithm PShED no longer guarantees S_i 10 units of execution by deadline 20. For instance, suppose that S_i were to then set D_i back to 20 at time instant 10. Despite the presence of the 3-tuple (20, 10, β_j) in R_i, bdgt_i(20, 10) is certainly not 10, but is ½(20 − 10) = 5.

Consider next the same server, and suppose again that D_i is initially ∞ and that S_i sets D_i to value 20 at time instant zero. As is the case above, the 3-tuple (20, 10, val) is inserted in R_i. Suppose that Algorithm PShED executes S_i during [0, 5), at which instant the 3-tuple is (20, 5, val). Suppose that S_i changes D_i to 40 at time instant 5, and then sets it back to 20 at time instant 6.
In this scenario, bdgt_i(20, 10) is not ½(20 − 6) = 7; rather, it is bounded from above by the value 5 stored as the second component of the 3-tuple (20, 5, β_j). In this second scenario, the value of ρ_j represents an upper bound on the budget of S_i for execution by deadline d_j = 20.

5. Implementation issues

An idled processor. As described in Section 3, the lists of residuals maintained by Algorithm PShED keep track of the execution history of each server. This information is used to ensure that no one server is able to compromise the performance of others by consuming more than its reserved share of the processor. Under certain well-defined circumstances, however, it is possible to permit a server to safely consume more than its reserved share of the processor. Suppose, for instance, that no server has any active jobs awaiting execution at some time instant t, because all jobs that arrived prior to t have either already completed execution or missed their deadlines. Algorithm PShED will detect this when all server deadlines are set to ∞. It can be shown that the future performance of all the servers is not compromised if the entire history stored thus far is discarded: Algorithm PShED can reinitialize the lists of residuals R_1, R_2, ..., R_N (see footnote 1). Similarly, Algorithm PShED can discard any residual (d, ρ, β) for any d no greater than the current instant. These two optimizations increase run-time efficiency by reducing the residual-list size.

Accounting for preemption costs. The correctness of Algorithm PShED implies that when a server sets its deadline to a certain value, it is guaranteed to receive its share of the processor by that deadline. Once the budget associated with the current server deadline equals zero, it must increase the value of its server deadline to receive more processing time. A natural question to ask is: why would each server not always initially set its server deadline to be arbitrarily close to the current instant, and always increase it by an arbitrarily small amount when the associated budget becomes equal to zero? This strategy would allow it to obtain exactly its share of the processor over arbitrarily small intervals, and hence experience arbitrarily accurate emulation of its behavior on a dedicated server (see footnote 2). To answer this question, we need to look at the issue of job preemptions.
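The idle-processor optimization above reduces to a simple check: when every server deadline is infinite, no stored history constrains future budgets, so all residual lists may be discarded. A Python sketch (the representation is ours):

```python
import math

# Sketch of the idle-processor optimization: if all server deadlines are
# infinite, the stored execution history can safely be discarded and the
# residual lists reinitialized.

def maybe_reset(server_deadlines, residual_lists):
    """`server_deadlines` maps server name -> current D_i;
    `residual_lists` maps server name -> its residual list."""
    if all(D == math.inf for D in server_deadlines.values()):
        for r in residual_lists.values():
            r.clear()
        return True
    return False
```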
It has been shown [15] that if a set of jobs is scheduled using EDF, then the total number of context switches due to preemptions is bounded from above by twice the number of jobs. The standard way in which these preemption costs are incorporated into the schedule is by increasing the execution requirement of each job by two context-switch times, and making each such job responsible for switching context twice: first, when it preempts another job to seize control of the processor for the first time, and again when it completes execution and returns control of the processor to the job with the next earliest deadline. This accounts for all context switches in the system. In the framework of Algorithm PShED, this effect is achieved by "charging" each server two context-switch times whenever the server deadline parameter is changed. Hence, a strategy of emulating processor sharing results in excessive charges for context switches, causing the server to waste too much of its assigned execution time on context-switch charges.

Implementing residual lists. A naive implementation of the residual list, which maintains each list R_i as a linked list of 3-tuples, would result in a computational complexity of Θ(n) for each deadline-change and execution update, where n is the number of tuples in R_i at the time of the operation. With a bit more effort, however, these operations can generally be done in O(log n) time per operation, by storing the residual list in the form of a balanced binary tree: in particular, as a variant of the AVL tree [2] data structure.

¹ In the terminology of feasibility analysis (see, e.g., [3]), the equivalent of a new "busy period" can be assumed to start with the first arrival of a job after instant t, and this busy period can be analyzed independently of what occurred in earlier busy periods.

² It is noteworthy that if such a strategy is employed, then Algorithm PShED reduces to the standard processor-sharing algorithm: each server S_i gets the processor for U_i · Δt time units over each interval [t, t + Δt).
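A residual list keyed by deadline can be sketched as follows. This is a hypothetical structure for illustration only: Python's standard library has no self-balancing search tree, so a sorted list with `bisect` is used here, giving O(log n) search but O(n) insertion; the AVL-tree variant suggested above would make updates O(log n) as well.

```python
import bisect

class ResidualList:
    """Sketch of one server's residual list, keyed by deadline."""

    def __init__(self):
        self.deadlines = []   # deadlines, kept sorted
        self.residuals = []   # residuals[i] pairs with deadlines[i]

    def update(self, d, r):
        """Record residual r for deadline d, replacing any stored value."""
        i = bisect.bisect_left(self.deadlines, d)
        if i < len(self.deadlines) and self.deadlines[i] == d:
            self.residuals[i] = r
        else:
            self.deadlines.insert(i, d)
            self.residuals.insert(i, r)

    def bound_by(self, d):
        """Tightest stored residual among deadlines <= d (None if no entry)."""
        i = bisect.bisect_right(self.deadlines, d)
        return min(self.residuals[:i], default=None)

    def reinitialize(self):
        """Idle-processor optimization: discard all stored history."""
        self.deadlines.clear()
        self.residuals.clear()
```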
Space constraints prevent us from describing this any further in this paper; however, we are currently working on a more complete writeup containing these implementation details.

6. Comparison to other work

A great deal of research has been conducted on achieving guaranteed service and inter-application isolation in uniprocessor multi-application environments (see, e.g., [16, 6, 17, 1, 7, 18, 13, 8, 4, 10]). The PShED approach differs from most of these approaches in one very significant manner: in all of the above research, it has been assumed that the jobs to be serviced by each server are processed in a first-come first-served (FCFS) manner. (In our notation and terminology, this would require that d_j ≤ d_{j+1}, where d_j denotes the deadline of the j'th job that arrives at server S_i.) In most of this earlier work the jobs do not have hard deadline parameters assigned a priori, but are to be scheduled to complete as soon as possible within the context of their own server. Placing such a FCFS requirement on the jobs generated by each application is a serious limitation: there are indeed very few real-time applications (in particular, hard real-time applications) that will satisfy such a restriction. The PShED framework places no such restriction on the jobs generated by each application: the arrival time of a job is not correlated with its deadline.

The PShED approach builds directly upon the Bandwidth Sharing Server (BSS) [11] of Lipari and Buttazzo (see also [9]). Residual lists as a means of capturing the history of servers were introduced in [11], as ordered pairs (rather than 3-tuples, as in the current paper) of data. The major difference between the results described here and the ones in [11, 9] lies in the context: while BSS was designed to facilitate isolation between cooperative servers in a processor-sharing environment, Algorithm PShED makes fewer requirements that the servers cooperate with each other (e.g., by being honest in the manner in which deadline parameters are updated, or in reporting arrival times of jobs). Algorithm PShED extends the work in [11, 9] by (i) placing fewer restrictions on the scheduling framework, (ii) providing a precise formulation and formal proof of the kind of hard real-time guarantee that can be made by the scheduling framework, and (iii) adding several optimizations to the framework design which permit a more efficient implementation, and provide a "cleaner" demarcation of responsibilities between the applications and the global scheduling algorithm.

7. Conclusions

We have proposed a global scheduling algorithm for use in preemptive uniprocessor systems in which several different real-time applications can execute simultaneously such that each is assured performance guarantees. Each application has the illusion of executing on a dedicated processor, and is isolated from any effects of other misbehaving applications. Unlike all previous approaches to achieving such behavior, which require that jobs of each individual application be processed in first-come first-served order, our algorithm permits each server to schedule its application's jobs however it chooses.
We have formally proven that an application which is feasible on a slower processor in isolation remains feasible when scheduled together with other applications using our algorithm, regardless of whether the other applications are "well-behaved" or not.

References

[1] Luca Abeni and Giorgio Buttazzo. Integrating multimedia applications in hard real-time systems. In Proceedings of the Real-Time Systems Symposium, pages 3-13, Madrid, Spain, December 1998. IEEE Computer Society Press.
[2] G. M. Adelson-Velskii and E. M. Landis. An algorithm for the organization of information. Soviet Math Doklady, 3:1259-1263, 1962.
[3] Giorgio C. Buttazzo. Hard Real-Time Computing Systems: Predictable Scheduling Algorithms and Applications. Kluwer Academic Publishers, Norwell, MA, USA, 1997.
[4] Z. Deng and J. Liu. Scheduling real-time applications in an Open environment. In Proceedings of the Eighteenth Real-Time Systems Symposium, pages 308-319, San Francisco, CA, December 1997. IEEE Computer Society Press.
[5] M. Dertouzos. Control robotics: the procedural control of physical processors. In Proceedings of the IFIP Congress, pages 807-813, 1974.
[6] T. M. Ghazalie and T. Baker. Aperiodic servers in a deadline scheduling environment. Real-Time Systems: The International Journal of Time-Critical Computing, 9, 1995.
[7] P. Goyal, X. Guo, and H. M. Vin. A hierarchical CPU scheduler for multimedia operating systems. In Proceedings of the Second Symposium on Operating Systems Design and Implementation (OSDI'96), pages 107-122, Seattle, Washington, October 1996.
[8] H. Kaneko, J. Stankovic, S. Sen, and K. Ramamritham. Integrated scheduling of multimedia and hard real-time tasks. In Proceedings of the Real-Time Systems Symposium, pages 206-217, Washington, DC, December 1996.
[9] Giuseppe Lipari and Sanjoy Baruah. Efficient scheduling of real-time multi-task applications in dynamic systems. In Proceedings of the Real-Time Technology and Applications Symposium, pages 166-175, Washington, DC, May-June 2000. IEEE Computer Society Press.
[10] Giuseppe Lipari and Sanjoy Baruah. Greedy reclamation of unused bandwidth in constant-bandwidth servers. In Proceedings of the EuroMicro Conference on Real-Time Systems, pages 193-200, Stockholm, Sweden, June 2000. IEEE Computer Society Press.
[11] Giuseppe Lipari and Giorgio Buttazzo. Scheduling real-time multi-task applications in an open system. In Proceedings of the EuroMicro Conference on Real-Time Systems, York, UK, June 1999. IEEE Computer Society Press.
[12] C. Liu and J. Layland. Scheduling algorithms for multiprogramming in a hard real-time environment. Journal of the ACM, 20(1):46-61, 1973.
[13] C. W. Mercer, S. Savage, and H. Tokuda. Processor capacity reserves for multimedia operating systems. Technical Report CMU-CS, Carnegie Mellon University, 1993.
[14] C. W. Mercer, S. Savage, and H. Tokuda. Processor capacity reserves: operating system support for multimedia applications. In Proceedings of the International Conference on Multimedia Computing and Systems, pages 90-99, Boston, MA, USA, May 15-19, 1994. IEEE Computer Society Press.
[15] A. K. Mok. Fundamental Design Problems of Distributed Systems for The Hard-Real-Time Environment. PhD thesis, Laboratory for Computer Science, Massachusetts Institute of Technology, 1983. Available as Technical Report No. MIT/LCS/TR-297.
[16] Marco Spuri and Giorgio Buttazzo. Efficient aperiodic service under earliest deadline scheduling. In Proceedings of the Real-Time Systems Symposium, San Juan, Puerto Rico, December 1994. IEEE Computer Society Press.
[17] Marco Spuri and Giorgio Buttazzo. Scheduling aperiodic tasks in dynamic priority systems. Real-Time Systems: The International Journal of Time-Critical Computing, 10(2), 1996.
[18] I. Stoica, H. Abdel-Wahab, K. Jeffay, J. Gehrke, G. Plaxton, and S. Baruah. A proportional share resource allocation algorithm for real-time, time-shared systems. In Proceedings of the Real-Time Systems Symposium, pages 288-299, Washington, DC, December 1996.


Feature Selection: Part 1

Feature Selection: Part 1 CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?

More information

Amiri s Supply Chain Model. System Engineering b Department of Mathematics and Statistics c Odette School of Business

Amiri s Supply Chain Model. System Engineering b Department of Mathematics and Statistics c Odette School of Business Amr s Supply Chan Model by S. Ashtab a,, R.J. Caron b E. Selvarajah c a Department of Industral Manufacturng System Engneerng b Department of Mathematcs Statstcs c Odette School of Busness Unversty of

More information

Limited Preemptive Scheduling for Real-Time Systems: a Survey

Limited Preemptive Scheduling for Real-Time Systems: a Survey Lmted Preemptve Schedulng for Real-Tme Systems: a Survey Gorgo C. Buttazzo, Fellow Member, IEEE, Marko Bertogna, Senor Member, IEEE, and Gang Yao Abstract The queston whether preemptve algorthms are better

More information

Foundations of Arithmetic

Foundations of Arithmetic Foundatons of Arthmetc Notaton We shall denote the sum and product of numbers n the usual notaton as a 2 + a 2 + a 3 + + a = a, a 1 a 2 a 3 a = a The notaton a b means a dvdes b,.e. ac = b where c s an

More information

FUZZY GOAL PROGRAMMING VS ORDINARY FUZZY PROGRAMMING APPROACH FOR MULTI OBJECTIVE PROGRAMMING PROBLEM

FUZZY GOAL PROGRAMMING VS ORDINARY FUZZY PROGRAMMING APPROACH FOR MULTI OBJECTIVE PROGRAMMING PROBLEM Internatonal Conference on Ceramcs, Bkaner, Inda Internatonal Journal of Modern Physcs: Conference Seres Vol. 22 (2013) 757 761 World Scentfc Publshng Company DOI: 10.1142/S2010194513010982 FUZZY GOAL

More information

Lecture 3 January 31, 2017

Lecture 3 January 31, 2017 CS 224: Advanced Algorthms Sprng 207 Prof. Jelan Nelson Lecture 3 January 3, 207 Scrbe: Saketh Rama Overvew In the last lecture we covered Y-fast tres and Fuson Trees. In ths lecture we start our dscusson

More information

and problem sheet 2

and problem sheet 2 -8 and 5-5 problem sheet Solutons to the followng seven exercses and optonal bonus problem are to be submtted through gradescope by :0PM on Wednesday th September 08. There are also some practce problems,

More information

Parametric Utilization Bounds for Fixed-Priority Multiprocessor Scheduling

Parametric Utilization Bounds for Fixed-Priority Multiprocessor Scheduling 2012 IEEE 26th Internatonal Parallel and Dstrbuted Processng Symposum Parametrc Utlzaton Bounds for Fxed-Prorty Multprocessor Schedulng Nan Guan 1,2, Martn Stgge 1, Wang Y 1,2 and Ge Yu 2 1 Uppsala Unversty,

More information

Yong Joon Ryang. 1. Introduction Consider the multicommodity transportation problem with convex quadratic cost function. 1 2 (x x0 ) T Q(x x 0 )

Yong Joon Ryang. 1. Introduction Consider the multicommodity transportation problem with convex quadratic cost function. 1 2 (x x0 ) T Q(x x 0 ) Kangweon-Kyungk Math. Jour. 4 1996), No. 1, pp. 7 16 AN ITERATIVE ROW-ACTION METHOD FOR MULTICOMMODITY TRANSPORTATION PROBLEMS Yong Joon Ryang Abstract. The optmzaton problems wth quadratc constrants often

More information

Statistics II Final Exam 26/6/18

Statistics II Final Exam 26/6/18 Statstcs II Fnal Exam 26/6/18 Academc Year 2017/18 Solutons Exam duraton: 2 h 30 mn 1. (3 ponts) A town hall s conductng a study to determne the amount of leftover food produced by the restaurants n the

More information

Equilibrium Analysis of the M/G/1 Queue

Equilibrium Analysis of the M/G/1 Queue Eulbrum nalyss of the M/G/ Queue Copyrght, Sanay K. ose. Mean nalyss usng Resdual Lfe rguments Secton 3.. nalyss usng an Imbedded Marov Chan pproach Secton 3. 3. Method of Supplementary Varables done later!

More information

On the Scheduling of Mixed-Criticality Real-Time Task Sets

On the Scheduling of Mixed-Criticality Real-Time Task Sets On the Schedulng of Mxed-Crtcalty Real-Tme Task Sets Donso de Nz, Karthk Lakshmanan, and Ragunathan (Raj) Rajkumar Carnege Mellon Unversty, Pttsburgh, PA - 15232 Abstract The functonal consoldaton nduced

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.65/15.070J Fall 013 Lecture 1 10/1/013 Martngale Concentraton Inequaltes and Applcatons Content. 1. Exponental concentraton for martngales wth bounded ncrements.

More information

Computational Biology Lecture 8: Substitution matrices Saad Mneimneh

Computational Biology Lecture 8: Substitution matrices Saad Mneimneh Computatonal Bology Lecture 8: Substtuton matrces Saad Mnemneh As we have ntroduced last tme, smple scorng schemes lke + or a match, - or a msmatch and -2 or a gap are not justable bologcally, especally

More information

ANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U)

ANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U) Econ 413 Exam 13 H ANSWERS Settet er nndelt 9 deloppgaver, A,B,C, som alle anbefales å telle lkt for å gøre det ltt lettere å stå. Svar er gtt . Unfortunately, there s a prntng error n the hnt of

More information

Volume 18 Figure 1. Notation 1. Notation 2. Observation 1. Remark 1. Remark 2. Remark 3. Remark 4. Remark 5. Remark 6. Theorem A [2]. Theorem B [2].

Volume 18 Figure 1. Notation 1. Notation 2. Observation 1. Remark 1. Remark 2. Remark 3. Remark 4. Remark 5. Remark 6. Theorem A [2]. Theorem B [2]. Bulletn of Mathematcal Scences and Applcatons Submtted: 016-04-07 ISSN: 78-9634, Vol. 18, pp 1-10 Revsed: 016-09-08 do:10.1805/www.scpress.com/bmsa.18.1 Accepted: 016-10-13 017 ScPress Ltd., Swtzerland

More information

Lecture Randomized Load Balancing strategies and their analysis. Probability concepts include, counting, the union bound, and Chernoff bounds.

Lecture Randomized Load Balancing strategies and their analysis. Probability concepts include, counting, the union bound, and Chernoff bounds. U.C. Berkeley CS273: Parallel and Dstrbuted Theory Lecture 1 Professor Satsh Rao August 26, 2010 Lecturer: Satsh Rao Last revsed September 2, 2010 Lecture 1 1 Course Outlne We wll cover a samplng of the

More information

THE CHINESE REMAINDER THEOREM. We should thank the Chinese for their wonderful remainder theorem. Glenn Stevens

THE CHINESE REMAINDER THEOREM. We should thank the Chinese for their wonderful remainder theorem. Glenn Stevens THE CHINESE REMAINDER THEOREM KEITH CONRAD We should thank the Chnese for ther wonderful remander theorem. Glenn Stevens 1. Introducton The Chnese remander theorem says we can unquely solve any par of

More information

Interactive Bi-Level Multi-Objective Integer. Non-linear Programming Problem

Interactive Bi-Level Multi-Objective Integer. Non-linear Programming Problem Appled Mathematcal Scences Vol 5 0 no 65 3 33 Interactve B-Level Mult-Objectve Integer Non-lnear Programmng Problem O E Emam Department of Informaton Systems aculty of Computer Scence and nformaton Helwan

More information

Polynomial Regression Models

Polynomial Regression Models LINEAR REGRESSION ANALYSIS MODULE XII Lecture - 6 Polynomal Regresson Models Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur Test of sgnfcance To test the sgnfcance

More information

More metrics on cartesian products

More metrics on cartesian products More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of

More information

Calculation of time complexity (3%)

Calculation of time complexity (3%) Problem 1. (30%) Calculaton of tme complexty (3%) Gven n ctes, usng exhaust search to see every result takes O(n!). Calculaton of tme needed to solve the problem (2%) 40 ctes:40! dfferent tours 40 add

More information

2.3 Nilpotent endomorphisms

2.3 Nilpotent endomorphisms s a block dagonal matrx, wth A Mat dm U (C) In fact, we can assume that B = B 1 B k, wth B an ordered bass of U, and that A = [f U ] B, where f U : U U s the restrcton of f to U 40 23 Nlpotent endomorphsms

More information

Chapter 6. Supplemental Text Material

Chapter 6. Supplemental Text Material Chapter 6. Supplemental Text Materal S6-. actor Effect Estmates are Least Squares Estmates We have gven heurstc or ntutve explanatons of how the estmates of the factor effects are obtaned n the textboo.

More information

On the Interval Zoro Symmetric Single-step Procedure for Simultaneous Finding of Polynomial Zeros

On the Interval Zoro Symmetric Single-step Procedure for Simultaneous Finding of Polynomial Zeros Appled Mathematcal Scences, Vol. 5, 2011, no. 75, 3693-3706 On the Interval Zoro Symmetrc Sngle-step Procedure for Smultaneous Fndng of Polynomal Zeros S. F. M. Rusl, M. Mons, M. A. Hassan and W. J. Leong

More information

On the correction of the h-index for career length

On the correction of the h-index for career length 1 On the correcton of the h-ndex for career length by L. Egghe Unverstet Hasselt (UHasselt), Campus Depenbeek, Agoralaan, B-3590 Depenbeek, Belgum 1 and Unverstet Antwerpen (UA), IBW, Stadscampus, Venusstraat

More information

Speeding up Computation of Scalar Multiplication in Elliptic Curve Cryptosystem

Speeding up Computation of Scalar Multiplication in Elliptic Curve Cryptosystem H.K. Pathak et. al. / (IJCSE) Internatonal Journal on Computer Scence and Engneerng Speedng up Computaton of Scalar Multplcaton n Ellptc Curve Cryptosystem H. K. Pathak Manju Sangh S.o.S n Computer scence

More information

Equilibrium with Complete Markets. Instructor: Dmytro Hryshko

Equilibrium with Complete Markets. Instructor: Dmytro Hryshko Equlbrum wth Complete Markets Instructor: Dmytro Hryshko 1 / 33 Readngs Ljungqvst and Sargent. Recursve Macroeconomc Theory. MIT Press. Chapter 8. 2 / 33 Equlbrum n pure exchange, nfnte horzon economes,

More information

Lecture 10 Support Vector Machines II

Lecture 10 Support Vector Machines II Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed

More information

Snce h( q^; q) = hq ~ and h( p^ ; p) = hp, one can wrte ~ h hq hp = hq ~hp ~ (7) the uncertanty relaton for an arbtrary state. The states that mnmze t

Snce h( q^; q) = hq ~ and h( p^ ; p) = hp, one can wrte ~ h hq hp = hq ~hp ~ (7) the uncertanty relaton for an arbtrary state. The states that mnmze t 8.5: Many-body phenomena n condensed matter and atomc physcs Last moded: September, 003 Lecture. Squeezed States In ths lecture we shall contnue the dscusson of coherent states, focusng on ther propertes

More information