The Characterization of Data-Accumulating Algorithms

Stefan D. Bruda and Selim G. Akl
Department of Computing and Information Science
Queen's University
Kingston, Ontario, K7L 3N6 Canada
{bruda,akl}@cs.queensu.ca

(This research was supported by the Natural Sciences and Engineering Research Council of Canada.)

Abstract

A data-accumulating algorithm (d-algorithm for short) works on an input considered as a virtually endless stream. The computation terminates when all the currently arrived data have been processed before another datum arrives. In this paper, the class of d-algorithms is characterized. It is shown that this class is identical to the class of on-line algorithms. The parallel implementation of d-algorithms is then investigated. It is found that, in general, the speedup achieved through parallelism can be made arbitrarily large for almost any such algorithm. On the other hand, we prove that for d-algorithms whose static counterparts manifest only unitary speedup, no improvement is possible through parallel implementation.

1 Introduction

The most cited result with respect to the limits of parallel computation states that the decrease in the running time of a parallel algorithm that solves some problem is at most proportional to the increase in the number of processors [3, 12]. The first observation that contradicts this result was based on parallel search algorithms, which have been found to exhibit superunitary behaviour on particular shapes of the search space [7]. Later, additional examples of such algorithms were found [1, 2], this time manifesting superunitary behaviour in all instances of the solved problem. Finally, another approach led to a new paradigm where superunitary behaviour is manifested, namely the data-accumulating paradigm.

In the data-accumulating paradigm, introduced in [9], the input is considered as a virtually endless stream. An algorithm pertaining to this paradigm, called a data-accumulating algorithm or d-algorithm for short, terminates when all the currently arrived data have been processed before another datum arrives. This paradigm is studied further in [10] and [4], where complexity-related properties
are derived for both the parallel and sequential cases.

In this paper we characterize the class of d-algorithms. First, we show that it is precisely the same as the well-known class of on-line algorithms. This result basically shows that a d-algorithm is an on-line algorithm for which the termination time is imposed by some real-time restriction. The identity between d-algorithms and on-line algorithms leads to an interesting discussion on the notion of optimality of d-algorithms. This discussion is outlined in the last section.

In the second part of the paper we study how a parallel implementation affects the performance of d-algorithms. We find that, as long as the speedup in the static case (1) is larger than 1, the speedup of the d-algorithm can be made arbitrarily large. On the other hand, when the static case manifests a unitary speedup, then the parallel d-algorithm will keep this property.

(1) The static version of a d-algorithm A solves the same problem as A, but the whole input is available at the beginning of the computation, as explained in the next section.

In the following, a proposition is a result proved elsewhere. The word iff stands for the phrase if and only if.

2 The data-accumulating paradigm

We present here the necessary preliminaries concerning the data-accumulating paradigm, conforming to [10] (see also Definition 3.1 from Section 3). A standard algorithm, working on a non-varying set of data, is referred to as a static algorithm. For a d-algorithm, the computation terminates when all the currently arrived data have been treated. The size of the set of processed data is denoted by N.

Consider a given problem Π. Let the best known static algorithm for Π be A_0. Then, a d-algorithm A for Π working on a varying set of data of size N is optimal iff its running time T(N) is asymptotically equal to the time T_0(N), where T_0(N) is the time required by A_0 working on the N data as if they were available at time 0. When referring to the parallel case, we add the subscript p.

We will denote the arrival law by f(n, t). The amount of data processed by a d-algorithm will then be given by the implicit equation N = f(n, T(N)).

Note the difference between the running time and the (time) complexity in the data-accumulating paradigm. Since N itself is a function of time, the actual running time is obtained by solving an implicit equation of the form t = T(N). The first form of the running time (that is, as a function of N) is referred to as the time complexity (or just complexity for short) of the d-algorithm in discussion, while the second form (the solution of the implicit equation) is referred to as the running time and is denoted by t. Note that, in the static case, the running time and the time complexity as defined here are identical.

The form proposed in [10] for the data arrival law is

    f(n, t) = n + k n^γ t^β,    (1)

where k, γ, and β are positive constants. In what follows, when we refer to a particular form of the data arrival law, we use the above expression. It is shown in [10] that the termination time of a d-algorithm of complexity O(N) is finite for any β < 1.

We consider in Section 4 problems that are solvable in polynomial time. This implies that a d-algorithm has a time complexity of T(N) = cN^α, for some positive constants c and α. The number of processors used by a parallel algorithm is denoted by P.

The size of the whole input data set will be denoted by N_ω. Since the input data set is virtually endless in the data-accumulating paradigm, we will consider N_ω
to be either large enough or tending to infinity.

3 Characterizing d-algorithms

Definition 3.1 An algorithm A is a d-algorithm if:

(1) A works on a set of data which is not entirely available at the beginning of the computation. Data come while the computation is in progress, and A terminates when all the currently arrived data have been processed before another datum arrives; and

(2) for any input data set, there is at least one data arrival law f such that, for any value of n, A terminates in finite time, where f has the following properties: (i) f is strictly increasing with respect to t, and (ii) f(n, C(n)) > n, where C(n) is the complexity of A.

Moreover, A immediately terminates if the initial data set is null (n = 0).

The first condition is the definition implicitly given in [10]. The second condition means that A stops for some increasing data arrival law, such that at least one new datum arrives before A finishes the processing of the initial set of n data. If this condition is not stated, then any algorithm may be considered a d-algorithm.

We use the following notations. We denote by D_i the i-th datum in the input stream. The ordering is naturally defined as follows: D_j is examined before D_i is examined for the first time iff i > j. We say that an algorithm A is able to terminate at point k if, before visiting any D_{k'}, k' > k, it has built a solution identical to the solution returned by A when working on the input set D_1, ..., D_k. If the algorithm A is able to terminate at some point, that point will be denoted by N_j, j ∈ {1, 2, ...}. Note that N (the amount of data processed by a d-algorithm) is also a termination point.

3.1 A Turing machine model

Definition 3.2 A Turing machine M which models an algorithm that is able to terminate at some point other than N_ω
is the tuple (K, Σ, δ, h_0), K being the (finite) set of states, Σ the (finite) tape alphabet, δ the transition function, and h_0 the initial state. The machine M has two tapes, as in [5]: the first tape is the (read-only) input tape, and the second one is the working tape. In addition, M is deterministic, except that it has to model the ability to terminate at some point. For this purpose, we allow a designated state h_0 to have two output transitions, as follows: δ(h_0, x) = (h, x), and δ(h_0, x) = (q, z), where h denotes the halting state. With the above exception, δ is deterministic. Moreover, no other state is allowed to go directly to h. That is, the halting state h is replaced by an optional halting one (namely, h_0). Note that the optional halting state h_0 is also the initial state.

The definition above models a d-algorithm. More precisely, the algorithm A corresponding to such a machine M can terminate before the whole input is considered, namely, when M enters the state h_0. Once in h_0, M's choice of halting or continuing to work models the ability of A to terminate eventually, when it is able to output a solution for the currently arrived data and there is no arrived but yet unprocessed datum. Since a d-algorithm should immediately terminate on an empty initial input, we impose h_0 as the initial state.

Lemma 3.1 A Turing machine M as in Definition 3.2, working on any sufficiently large input data set N_ω, is able to terminate at some point N_1 < N_ω, N_1 being constant with respect to N_ω, iff it is able to terminate at two finite points N_1 and N_2 strictly smaller than N_ω and constant with respect to N_ω.

Proof. The only if part is immediate. We provide a proof for the if part. When M halts at the point N_1 it must have reached the special state h_0. Obviously, this happened after some constant number of steps (since both K and Σ are of constant size, and the number of tape cells visited is N_1, which is constant as well). Therefore, we have a cycle, from h_0 (the initial state) back to h_0, after a number of steps bounded by some constant τ. Assume now that M chooses not to halt at the point N_1 and instead goes to another state q. But the state h_0 is accessible from q (otherwise, M would not halt at all) and, since M already reached h_0 for an arbitrary input, it will reach it again, after a number of steps bounded by τ and after visiting a constant number σ of new tape cells, because M is deterministic. □

Theorem 3.2 A Turing machine M as in Definition 3.2, working on any input data set of size N_ω, where N_ω tends to infinity, is able to terminate at some finite point N_1 iff it is able to terminate at all of the points in a countably infinite set S ⊆ {1, 2, ..., N_ω}, where S has the following properties: (i) the least element of S is upper bounded by a finite constant, and (ii) the distance between any two consecutive elements in S is upper bounded by a constant σ.

Proof. By induction on the size of S, using Lemma 3.1. □

Given a constant σ, one can compact a Turing machine's tape by simply considering Σ^σ ∪ {#}, where # is the blank symbol, as the tape alphabet instead of Σ, then folding each sequence of σ non-blank tape cells into one cell, and finally modifying the δ function accordingly (see, for example, the proof given in [8] of the fact that a k-tape Turing machine can be simulated by a one-tape Turing machine). We have thus the following corollary:

Corollary 3.3 A Turing machine M as in Definition 3.2, working on any input data set of size N_ω, where N_ω tends to infinity, is able to terminate at some finite point N_1 iff it is able to terminate at all of the points in the set {1, 2, ..., N_ω}. □

3.2 On-line algorithms

Basically, an on-line algorithm processes each input datum D_k without looking ahead at any datum D_{k'}, k' > k. By contrast, an algorithm that needs to know all the input in advance is called an off-line algorithm. One can already identify a strong similarity between on-line algorithms and d-algorithms. In this section we formally show that these two classes are in fact identical.

An on-line algorithm is defined in [11] as an algorithm that cannot look ahead at its input. A similar definition in terms of Turing machines can be found in [5]. Finally, an on-line algorithm A is defined in [6] as having the property that A can determine the result of N input data without knowing N in advance, such that it is possible to run the algorithm until the end of the input data, or to run it until a certain condition is met. We assume here the latter definition, since the definition given in [11] leaves the way of reporting the result unclarified. However, if the definition in [11] is completed in a natural way (that is, an on-line algorithm A is able to report a (partial) solution after processing each datum), we reach the definition given in [6].

With the above paragraphs in mind, Corollary 3.3 leads to the following result, where D and O denote the class of d-algorithms and the class of on-line algorithms, respectively.

Theorem 3.4 D = O.

Proof. Clearly, Corollary 3.3 proves the inclusion D ⊆ O. It also proves O ⊆ D, except that the second point of Definition 3.1 is not accounted for. Therefore, in order to complete the proof, we have to show that, for any on-line algorithm A and any size n of the initial data set, there is a data arrival law f such that, when working on a data-accumulating input set, A terminates in finite time and considers at least n + 1 data.

Let the complexity of A be C(n). For any positive integer n, denote by t_1 a lower bound on C(n), and let t_2 be an upper bound on C(n + 1), for any possible input data sets of size n and n + 1, respectively. But then one can build a function f(n, t), strictly increasing with respect to t, such that f(n, 0) = n, f(n, t_1) = n + 1.1, and f(n, t_2) = n + 1.5. □

4 On the parallel speedup

The main measure used
for evaluating a parallel algorithm is the speedup, defined as follows. Given some problem Π, the speedup provided by an algorithm that uses p_1 processors over an algorithm that uses 1 processor with respect to Π is the ratio S(1, p_1) = t(1)/t(p_1), p_1 > 0, where t(x) is the running time of the best x-processor algorithm that solves Π.

We start by quoting the main result from [10] concerning parallel d-algorithms.

Proposition 4.1 For a problem admitting an optimal sequential d-algorithm obeying the relation t = c(n + k n^γ t^β)^α and an optimal parallel d-algorithm obeying the relation t_p = c(n + k n^γ t_p^β)^α / P, we have:

1. For α = β = γ = 1, t/t_p = P(1 − (c/P)kn)/(1 − ckn).

2. For c/P < c, t/(P t_p) → N_ω for n → 1/(kc)^{1/γ}, where αβ = 1, and P = π(n + k n^γ t_p^β)^δ, with some constants π > 0 and δ ≥ 0.

3. For all values of α, β, and γ, t/(P t_p) > c.

Let us first take a look at how the implicit equation for the parallel running time has been derived. Generally, t_p = T'_p(n + k n^γ t_p^β). Only work-optimal parallel algorithms (2) are considered in [10]. In this case, a static parallel algorithm requires time T'_p(N) = O(N^α / P), and the implicit equation for the running time of a parallel d-algorithm follows immediately. However, in the case of a non-work-optimal parallel static algorithm, we have analogously the relation T'_p(N) = O(N^α / S_0(1, P)). Then,

    t_p = (c / S_0(1, P)) (n + k n^γ t_p^β)^α.    (2)

The following extension of Proposition 4.1 is hence immediate.

Theorem 4.2 For a problem admitting a sequential d-algorithm and a parallel d-algorithm such that the speedup for the static case is S_0(1, P) > 1, we have:

1. For α = β = γ = 1, t/t_p = S_0(1, P)(1 − (c/S_0(1, P))kn)/(1 − ckn).

2. For c/S_0(1, P) < c, t/(S_0(1, P) t_p) → N_ω for n → 1/(kc)^{1/γ}, where αβ = 1.

3. For all values of α, β, and γ, t/(S_0(1, P) t_p) > c.

Corollary 4.3 For a problem admitting a sequential d-algorithm and a parallel P-processor d-algorithm, P = π(n + k n^γ t_p^β)^δ, such that the speedup for the static case is S_0(1, P) = π_1(n + k n^γ t_p^β)^{δ_1}, with S_0(1, P) > 1 for any strictly positive values of n and t_p, we have, for c/S_0(1, P) < c: t/(P t_p) → N_ω for n → 1/(kc)^{1/γ}, where αβ = 1, and π_1 > 0, δ_1 ≥ 0.

Proof. Conforming to formula (2), we have t_p = (c/π_1)(n + k n^γ t_p^β)^{α − δ_1}. But, since αβ = 1 and δ_1 ≥ 0, the solution t_p of the above equation is finite for any finite value of n [10]. Note that, in our case, n → 1/(kc)^{1/γ}. Then, both P and S_0(1, P) are finite at the point t_p, since they are polynomials in n and t_p. But we have by Theorem 4.2 that t/(S_0(1, P) t_p) → N_ω
and, obviously, t/(P t_p) = (t/(S_0(1, P) t_p)) · (S_0(1, P)/P). But then t/(P t_p) equals an infinite quantity multiplied by a finite quantity, and therefore it is infinite, as desired. □

(2) A parallel algorithm is said to be work-optimal if the product of its worst-case running time and the number of processors it uses is of the same order as the worst-case running time of the best known sequential algorithm solving the same problem. Usually, such parallel algorithms are called simply optimal [1]. However, we will keep the terminology from [10], because we already used the qualifier optimal for d-algorithms.

Note that the result of Corollary 4.3 does not apply only to work-optimal algorithms, as the result in Proposition 4.1 does. Indeed, the case δ_1 < δ is covered as well, for any small δ_1. On the other hand, it is not an accident that we specified S_0(1, P) > 1 in Theorem 4.2 and Corollary 4.3:

Theorem 4.4 For any problem admitting a sequential d-algorithm and a parallel d-algorithm such that the speedup for the static case is S_0(1, P) = 1, and for any data arrival law such that either β ≤ 1, or β > 1 and n^{γ−1} ≥ 1/(2kc(β − 1)), the speedup of a parallel d-algorithm is S(1, P) = 1.

Proof. When S_0(1, P) = 1, equation (2) becomes t_p = c(n + k n^γ t_p^β)^α. Also, recall that the implicit equation for the running time in the sequential case is t = c(n + k n^γ t^β)^α. Thus, the complexity of the static parallel algorithm is precisely the same as the complexity of the sequential algorithm. We have then X(n, t) = X(n, t_p), where the function X is X(n, t) = t^{−1/α}(1 + k n^{γ−1} t^β). Therefore, in order to prove that the speedup is unitary (that is, t = t_p), it is enough to prove that X(n, t) is a one-to-one function of t for any n. For this purpose, we will prove that X(n, t) is a strictly monotonic function, and hence we will complete the proof.

If β ≤ 1, then it is immediate that ∂X/∂t < 0 for any n. On the other hand, if β > 1, then ∂X/∂t(n, t_0) = 0, and ∂X/∂t(n, t) > 0 for any t > t_0, where t_0 = (1/(k n^{γ−1}(β − 1)))^{1/β}. But the algorithm must process at least the initial set of n data and one more datum (conforming to Definition 3.1). Suppose now that t_0 is a possible value for the termination time. Then, t_0 ≥ c(n + 1) as well. This leads to

    n^{γ−1} < 1/(2kc(β − 1)).    (3)

But this clearly contradicts the theorem's hypothesis. But then, for all possible values of the termination time, X(n, t) is monotonic. □

Corollary 4.5 For any problem admitting a sequential d-algorithm and a parallel d-algorithm such that the speedup for the static case is S_0(1, P) = 1, and for any data arrival law such that either β ≤ 1, or γ > 1 and n is large enough, the speedup of a parallel d-algorithm is S(1, P) = 1.

Proof. The situation is analogous to the one in Theorem 4.4. The only difference in the proof is the way in which the falsity of relation (3) is proved: in this case, the relation is immediately false. □

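The behaviour established in this section can be checked numerically. The sketch below is an illustration of ours, not part of the paper: the function name, the fixed-point method, and the sample constants c, k, α, β, γ, and S_0 are all choices made for demonstration. It solves the implicit running-time equation t = (c/S_0)(n + k n^γ t^β)^α by fixed-point iteration (S_0 = 1 gives the sequential case) and compares the resulting speedup t/t_p with the static speedup S_0(1, P).

```python
# Illustrative sketch (not from the paper): solve the implicit
# running-time equation t = (c/s0) * (n + k * n**g * t**b)**a
# for the arrival law f(n, t) = n + k * n**g * t**b.

def running_time(n, c, k, a, b, g, s0=1.0, iters=10000, tol=1e-12):
    """Running time of a d-algorithm of complexity (c/s0) * N**a
    under the arrival law f(n, t) = n + k * n**g * t**b."""
    t = (c / s0) * n ** a          # static time on the initial n data
    for _ in range(iters):
        t_next = (c / s0) * (n + k * n ** g * t ** b) ** a
        if abs(t_next - t) < tol:
            return t_next
        t = t_next
    raise RuntimeError("no finite termination time found")

if __name__ == "__main__":
    # sample constants, chosen so that a*b < 1 (finite termination)
    n, c, k = 100.0, 1.0, 0.01
    a, b, g = 1.0, 0.5, 1.0

    t_seq = running_time(n, c, k, a, b, g, s0=1.0)
    t_par = running_time(n, c, k, a, b, g, s0=4.0)  # static speedup S0 = 4

    # The sequential run lasts longer, so it must absorb more extra
    # data than the parallel run; the observed speedup exceeds S0.
    speedup = t_seq / t_par
    print(f"t = {t_seq:.2f}, t_p = {t_par:.2f}, speedup = {speedup:.2f}")
    assert speedup > 4.0
```

With these constants the iteration converges quickly and the computed speedup comes out slightly above S_0 = 4, in line with the observation that a d-algorithm amplifies any static speedup, while setting s0 = 1.0 for both runs reproduces the unitary case of Theorem 4.4.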
Finally, the most important result in [4] concerning the parallel case is as follows:

Proposition 4.6 For the polynomial data arrival law given by relation (1), let A be any P-processor d-algorithm with time complexity Θ(N^α), α > 1. If A terminates, then its running time is upper bounded by a constant T that does not depend on n but depends on P. □

We can now extend this result:

Theorem 4.7 For the polynomial data arrival law given by relation (1), let A be any P-processor d-algorithm with time complexity Θ(N^α), α > 1. If A terminates, then its running time is upper bounded by a constant T that does not depend on n but depends on S_0(1, P). □

5 Conclusions

Theorem 3.4 is an important result, because it characterizes the class of d-algorithms as being exactly the class of on-line algorithms. When working with d-algorithms, one can take advantage of this result, since on-line algorithms have already been designed for various problems.

As an immediate consequence of Theorem 3.4, it is easier to know whether some problem does not admit an optimal d-algorithm (where the notion of optimality is the one defined in [10] and summarized in Section 2 of this paper): if a given problem admits an off-line algorithm with a complexity asymptotically smaller than the lower bound for the complexity in the on-line case, then one cannot build an optimal d-algorithm. As an example, sorting does not admit an optimal d-algorithm, because the best known algorithm has a complexity of O(n log n) [13], while it is immediate that an on-line sorting algorithm has a complexity of Ω(n²). The same result is obtained in [4], though with a lot more effort.

However, considering Theorem 3.4, the above notion of optimality no longer makes sense since, given some problem, once the lower bound in the on-line case has been established for that problem, a d-algorithm has no chance to beat it. Therefore, we suggest the following definition of optimality: given some problem Π, a d-algorithm solving Π is optimal iff its complexity matches the lower bound for the complexity of on-line algorithms solving Π. Using this definition, it follows that sorting does admit an optimal d-algorithm,
namely the one found in [4], which has a complexity of Θ(N²).

Concerning the parallel case, we found that, when the parallel implementation of a static algorithm offers some (however small) speedup, the d-algorithm based on that static algorithm will efficiently exploit this feature, such that the speedup may grow without bound for that d-algorithm. On the other hand, for those problems that take no advantage at all of a parallel implementation in the static case, a d-algorithm will manifest no speedup. For example, consider the following list scanning problem defined in [10]: given only a starting and an ending point in a linked list, it is required that the list be scanned between those points, some processing being required for each visited node; in the data-accumulating case, new nodes may be inserted in the list while the scanning is in progress [10]. In light of the results in this paper, it is unlikely that a parallel d-algorithm for the list scanning problem would admit any speedup, since a parallel static algorithm for this problem is likely to manifest unitary speedup only, as shown in [1], where the same problem (in the static case) is independently found and analyzed (Exercise 6.13).

References

[1] S. G. Akl. Parallel Computation: Models and Methods. Prentice-Hall, Upper Saddle River, NJ, 1997.

[2] S. G. Akl and L. Fava Lindon. Paradigms admitting superunitary behaviour in parallel computation. Parallel Algorithms and Applications, 11:129-153, 1997.

[3] R. P. Brent. The parallel evaluation of general arithmetic expressions. Journal of the ACM, 21(2):201-206, April 1974.

[4] S. D. Bruda and S. G. Akl. On the data-accumulating paradigm. In Proceedings of the Fourth International Conference on Computer Science and Informatics, pages 150-153, Research Triangle Park, NC, October 1998.

[5] J. Hartmanis, P. M. Lewis II, and R. E. Stearns. Classification of computations by time and memory requirements. In Proceedings of the IFIP Congress 65, pages 31-35, Washington, DC, 1965.

[6] D. Knuth. The Art of Computer Programming, Vol. 2: Seminumerical Algorithms. Addison-Wesley, Reading, MA, 1969.

[7] T. H. Lai and S. Sahni. Anomalies in parallel branch-and-bound algorithms. Communications of the ACM, 27(6):594-602, June 1984.

[8] H. R. Lewis and C. H. Papadimitriou. Elements of the Theory of Computation. Prentice-Hall, Englewood Cliffs, NJ, 1981.

[9] F. Luccio and L. Pagli. The p-shovelers problem (computing with time-varying data). In Proceedings of the 4th IEEE Symposium on Parallel and Distributed Processing, pages 188-193, 1992.

[10] F. Luccio and L. Pagli. Computing with time varying data: Sequential complexity and parallel speed up. Theory of Computing Systems, 31(1):5-26, January/February 1998.

[11] F. P. Preparata and M. I. Shamos. Computational Geometry: An Introduction. Springer-Verlag, New York, NY, 1985.

[12] J. R. Smith. The Design and Analysis of Parallel Algorithms. Oxford University Press, 1993.

[13] J. D. Ullman, A. V. Aho, and J. E. Hopcroft. The Design and Analysis of Computer Algorithms. Addison-Wesley, Reading, MA, 1974.
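As a concrete illustration of the sorting discussion in the conclusions, the following sketch (ours, not part of the paper) shows an on-line sorting strategy: each arriving datum is inserted into an already sorted prefix, so after every datum the algorithm holds a complete solution for the data seen so far, which is exactly the "able to terminate at every point" property of Section 3. Each insertion may shift up to n elements, giving Θ(n²) total work in the worst case, versus the O(n log n) off-line bound.

```python
# Illustrative sketch (not from the paper): an on-line sorting strategy.
# After each arriving datum the algorithm holds a sorted solution for all
# data seen so far, so it is able to terminate at every point in the
# sense of Section 3.  Worst-case total work is Theta(n^2) element moves,
# versus the O(n log n) off-line bound.

import bisect

def online_sort(stream):
    """Consume a stream datum by datum, keeping a sorted prefix."""
    solution = []
    for datum in stream:
        bisect.insort(solution, datum)   # O(n) shift per insertion
        yield list(solution)             # a complete solution at every point

if __name__ == "__main__":
    prefixes = list(online_sort([3, 1, 2]))
    print(prefixes)  # [[3], [1, 3], [1, 2, 3]]
```

Because a valid partial solution exists after every datum, this strategy can be driven by any arrival law satisfying Definition 3.1: the computation simply stops at the first moment all arrived data have been inserted.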