Scheduling divisible loads with return messages on heterogeneous master-worker platforms


1 Scheduling divisible loads with return messages on heterogeneous master-worker platforms
Olivier Beaumont, Loris Marchal, Yves Robert
Laboratoire de l'Informatique du Parallélisme, École Normale Supérieure de Lyon, France
Graal Working Group, June 2005

2 Outline 1 Introduction 2 Framework 3 Case study of some scenarios 4 Simulations 5 Conclusion Loris Marchal (LIP) Divisible load scheduling with return messages 2/ 32

4 Introduction

- One master, holding a large number of identical tasks
- Some workers
- Heterogeneity in computing speed and bandwidth
- Distribute work to the workers
- Gather the results

8 Introduction
Assumption: divisible load

- Important relaxation of the problem
- Master holding N tasks
- Worker P_i will get a fraction α_i N of these tasks
- α_i is rational: tasks are divisible, which makes it possible to derive analytical solutions (tractability)
- In practice, a reasonable assumption when the number of tasks is large

14 Introduction
Background on Divisible Load Scheduling

Without return messages, linear cost model. X units of work are:
- sent to P_i in X c_i time units
- computed by P_i in X w_i time units

Results:
- Bus network: all processors work and finish at the same time; the order does not matter; closed-form formula for the makespan (Bataineh, Hsiung & Robertazzi, 1994)
- Result extended to homogeneous trees: a subtree reduces to a single worker
- Heterogeneous star network: all processors work and finish at the same time; the order matters: largest bandwidth first (whatever the computing power)
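The star result above rests on a simple closed form: with no idle time, the first worker satisfies α_1(c_1 + w_1) = T, and consecutive workers satisfy α_{i+1}(c_{i+1} + w_{i+1}) = α_i w_i. A minimal Python sketch of this computation (the function name and the normalization T = 1 are ours, not from the talk):

```python
def star_throughput(c, w):
    """Throughput of a heterogeneous star WITHOUT return messages
    (linear cost model): all workers finish at the same time T = 1,
    and initial messages go out by non-decreasing c_i (largest
    bandwidth first).  Returns units of load processed per time unit.
    """
    order = sorted(range(len(c)), key=lambda i: c[i])
    alphas = []
    prev = None
    for i in order:
        if prev is None:
            # first worker: alpha_1 * (c_1 + w_1) = T = 1
            a = 1.0 / (c[i] + w[i])
        else:
            # no idle time: alpha_{i+1} * (c_{i+1} + w_{i+1}) = alpha_i * w_i
            a = alphas[-1] * w[prev] / (c[i] + w[i])
        alphas.append(a)
        prev = i
    return sum(alphas)
```

For two identical workers with c = w = 1, this gives α = (1/2, 1/4): the first worker receives during [0, 1/2] and computes during [1/2, 1], the second receives during [1/2, 3/4] and computes during [3/4, 1], for a throughput of 3/4.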

16 Introduction
Background on Divisible Load Scheduling

With an affine cost model, X units of work are:
- sent to P_i in C_i + X c_i time units
- computed by P_i in W_i + X w_i time units

- Not all processors participate in the computation
- Selecting the resources is hard: computing the optimal schedule on a star with the affine cost model is NP-hard (Casanova, Drozdowski, Legrand & Yang, 2005)

Affine cost model + multi-round algorithms:
- UMR, quadratic progression of the chunk size (Casanova & Yang, 2003)
- asymptotically optimal algorithm based on steady-state (Beaumont, Legrand & Robert, 2003)

18 Introduction
Adding return messages

What we need to decide:
- the ordering of the initial data sent to the workers
- the ordering of the return messages
- the quantity of work for each worker

Related work:
- Barlas: fixed communication time, or bus network + affine computation cost; optimal ordering and closed-form formulas
- Drozdowski and Wolniewicz: consider LIFO and FIFO distributions
- Rosenberg et al.: complex (affine) communication model + possibility to slow down computing speed; all FIFO orderings are equivalent and better than any other ordering

20 Outline
1 Introduction
2 Framework: Model; Linear Program for a given scenario; Counter Examples
3 Case study of some scenarios
4 Simulations
5 Conclusion

21 Framework
Model

[Figure: star platform, master P_0 at the center, workers P_1, ..., P_p, link to P_i labeled c_i, d_i, worker P_i with speed parameter w_i]

- One master P_0
- p workers P_1, ..., P_p
- Linear model: X units of work are
  - sent to P_i in X c_i time units
  - computed by P_i in X w_i time units
  - their result is sent back from P_i to the master in X d_i time units
- We sometimes make the assumption d_i = z c_i (e.g. z = 0.8: result files are 80% of the size of the original files)

28 Framework
Model

Standard model in DLS for communications:
- the master can send data to at most one worker at time t
- the master can receive data from at most one worker at time t
- a worker can start computing only once the reception from the master has terminated

No idle time in the operation of each worker?
- reasonable without return messages: send results to the master just after the computation
- but maybe the communication medium is not free at that time
- general scheme: allow idle time

30 Framework
Linear Program for a given scenario

A scenario describes:
- which workers participate
- in which order the communications are performed (sending data and receiving results)

Then we can assume that:
- the master sends data as soon as possible
- the workers start computing as soon as possible
- return communications are performed as late as possible

31 Framework
Linear Program for a given scenario

Initial messages are sent to P_1, P_2, ..., P_n in this order; permutation σ gives the order of the return messages. (Timeline for P_i: receive α_i c_i, compute α_i w_i, idle x_i, send back α_i d_i.)

Consider worker P_i, with makespan T:
- it starts receiving its data at t_recv_i = Σ_{j < i} α_j c_j
- it starts its execution at t_recv_i + α_i c_i
- it terminates its execution at t_term_i = t_recv_i + α_i c_i + α_i w_i
- it starts sending its results back at t_back_i = T − Σ_{j : σ(j) ≥ σ(i)} α_j d_j
- its idle time is x_i = t_back_i − t_term_i ≥ 0

37 Framework
Linear Program for a given scenario

With a fixed T, we get the following linear program:

  Maximize Σ_i α_i, under the constraints
    α_i ≥ 0 and t_back_i − t_term_i ≥ 0 for all i    (1)

It gives the optimal throughput for a given set of resources and a given communication ordering.
- It is not possible to test all configurations.
- Even if we force the order of the return messages to be the same as for the initial messages (FIFO), there is still an exponential number of scenarios.

38 Framework
Counter Examples

Not all processors always participate in the computation. Platform: c_1 = 1, d_1 = 1, w_1 = 1; c_2 = 1, d_2 = 1, w_2 = 1; c_3 = 5, d_3 = 5, w_3 = 5.
- LIFO (best among schedules with 3 processors): throughput ρ = 61/135
- FIFO with 2 processors: optimal, throughput ρ = 1/2

39 Framework
Counter Examples

The optimal schedule might be non-LIFO and non-FIFO. Platform: c_1 = 7, d_1 = 7, w_1 = 6; c_2 = 8, d_2 = 8, w_2 = 5; c_3 = 12, d_3 = 12, w_3 = 5.
- Optimal schedule: ρ = 38/
- Best FIFO schedule: ρ = 47/
- Best LIFO schedule: ρ = 43/

40 Outline
1 Introduction
2 Framework
3 Case study of some scenarios: LIFO strategies; FIFO strategies
4 Simulations
5 Conclusion

41 Case study of some scenarios
LIFO strategies

LIFO = Last In, First Out: the processor that receives its data first is the last one to send back its results; the ordering of the return messages is the reverse of the ordering of the initial messages. (Timeline for each P_i: α_i c_i, then α_i w_i, then x_i, then α_i d_i.)

42 Case study of some scenarios
LIFO strategies

Theorem. In the optimal LIFO solution:
- all processors participate in the execution;
- initial messages must be sent by non-decreasing values of c_i + d_i;
- there is no idle time, i.e. x_i = 0 for all i.

Proof idea: modify the platform, setting c_i ← c_i + d_i and d_i ← 0; each worker's timeline α_i c_i, α_i w_i, x_i, α_i d_i becomes α_i (c_i + d_i), α_i w_i, x_i, and the problem reduces to classical DLS without return messages.
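The reduction in the proof gives a direct way to evaluate the optimal LIFO throughput: replace c_i by c_i + d_i, sort by that quantity, and apply the classical no-idle recurrence of a star without return messages. A sketch (function name and the normalization T = 1 are ours):

```python
def lifo_throughput(c, w, d):
    """Optimal LIFO throughput (makespan T = 1): apply the reduction
    c_i <- c_i + d_i, d_i <- 0, then schedule as classical divisible
    load WITHOUT return messages, serving workers by non-decreasing
    c_i + d_i with no idle time."""
    order = sorted(range(len(c)), key=lambda i: c[i] + d[i])
    busy = 0.0   # communication time already consumed by earlier sends
    rho = 0.0
    for i in order:
        # worker i satisfies: busy + alpha_i * (c_i + d_i + w_i) = T = 1
        a = (1.0 - busy) / (c[i] + d[i] + w[i])
        busy += a * (c[i] + d[i])
        rho += a
    return rho
```

On the three-processor counter-example platform of the previous slides (c = d = w = (1, 1, 5)) this yields α = (T/3, T/9, T/135) and ρ = 61/135, matching the value quoted there.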

43 Case study of some scenarios
FIFO strategies

FIFO = First In, First Out: the ordering of the return messages is the same as the ordering of the initial messages.

We restrict to the case where d_i = z c_i (z < 1).

44 Case study of some scenarios
FIFO strategies

Theorem. In the optimal FIFO solution:
- initial messages must be sent by non-decreasing values of c_i + d_i;
- the set of participating processors consists of the first q processors in this ordering, where q can be determined in linear time;
- there is no idle time, i.e. x_i = 0 for all i.

45 Case study of some scenarios
FIFO strategies

Consider processor P_i in the schedule. Between time 0 and T we find: the previous initial messages (Σ_{j < i} α_j c_j), the message to P_i (α_i c_i), its computation (α_i w_i), its idle time (x_i), its return message (α_i d_i), and the following return messages (Σ_{j > i} α_j d_j). So, for every i:

  Σ_{j < i} α_j c_j + α_i (c_i + w_i) + x_i + Σ_{j ≥ i} α_j d_j = T

In matrix form, A α + x = T 1, where:

  A = [ c_1 + w_1 + d_1   d_2               d_3               ...  d_k
        c_1               c_2 + w_2 + d_2   d_3               ...  d_k
        c_1               c_2               c_3 + w_3 + d_3   ...  d_k
        ...
        c_1               c_2               c_3               ...  c_k + w_k + d_k ]

46 Case study of some scenarios
FIFO strategies

We can write A = L + 1 d^T, with:

  L = [ c_1 + w_1    0            0            ...  0
        c_1 − d_1    c_2 + w_2    0            ...  0
        c_1 − d_1    c_2 − d_2    c_3 + w_3    ...  0
        ...
        c_1 − d_1    c_2 − d_2    c_3 − d_3    ...  c_k + w_k ]

and d = (d_1, d_2, ..., d_k)^T. The matrix 1 d^T has rank one, so we can use the Sherman-Morrison formula to compute the inverse of A:

  A^{−1} = (L + 1 d^T)^{−1} = L^{−1} − (L^{−1} 1 d^T L^{−1}) / (1 + d^T L^{−1} 1)
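Since L is lower triangular, applying L^{−1} is just a forward substitution; and with the right-hand side T 1, the Sherman-Morrison correction collapses to a single scaling. A small sketch of this solve (function names and T = 1 are our conventions):

```python
def fifo_alphas(c, w, d):
    """Solve the FIFO no-idle system A*alpha = T*1 (here T = 1) using
    A = L + 1*d^T and the Sherman-Morrison formula.  L is lower
    triangular with L[i][i] = c_i + w_i and L[i][j] = c_j - d_j for
    j < i, so L^{-1} is applied by forward substitution."""
    n = len(c)

    def forward(b):
        # solve L*x = b by forward substitution
        x = []
        for i in range(n):
            s = b[i] - sum((c[j] - d[j]) * x[j] for j in range(i))
            x.append(s / (c[i] + w[i]))
        return x

    u = forward([1.0] * n)            # u = L^{-1} * 1
    dot = sum(di * ui for di, ui in zip(d, u))
    # Sherman-Morrison with right-hand side 1: alpha = u / (1 + d^T u)
    return [ui / (1.0 + dot) for ui in u]
```

For two identical workers (c = w = d = 1) this returns α = (1/4, 1/4), i.e. throughput 1/2, matching the two-processor FIFO schedule in the counter-example slide.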

47 Case study of some scenarios
FIFO strategies

Using this formula for A^{−1}, we are able to:
- prove that for every processor P_i, either α_i = 0 (the processor does not compute at all) or x_i = 0 (it has no idle time)
- derive an analytical formula for the throughput ρ(T) = Σ_i α_i
- prove that the throughput is better when c_1 ≤ c_2 ≤ ... ≤ c_n
- prove that the throughput is best when only the processors with d_i ≤ 1/ρ_opt participate

48 Case study of some scenarios
FIFO strategies: special cases

Until now, we have supposed that d_i = z c_i, with z < 1.
- If z > 1: symmetric solution (send initial messages by decreasing values of d_i + c_i, select the first q processors in this order)
- If z = 1: the order has no importance in this case

50 Simulations

- It is not possible to compute the optimal schedule in the general case: for 100 processors, (100!)^2 linear programs with 100 unknowns to solve...
- Use the optimal FIFO ordering as a comparison basis
- Also compute the optimal LIFO solution
- Some FIFO heuristics, with all processors:
  - ordered by increasing value of c_i (fastest-communicating worker first)
  - ordered by increasing value of w_i (fastest-computing worker first)
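The two FIFO heuristics can be sketched as follows, under the simplifying assumptions that all workers participate and have no idle time with T = 1 (the real OPT-FIFO baseline also selects the participating subset, which this sketch omits; function names are ours):

```python
def fifo_rho(c, w, d, order):
    """Throughput of a FIFO schedule using ALL workers in the given
    send order, assuming no idle time (T = 1).  Solves the no-idle
    system A*alpha = 1 with A = L + 1*d^T (L lower triangular) via
    forward substitution and the Sherman-Morrison formula."""
    cs = [c[i] for i in order]
    ws = [w[i] for i in order]
    ds = [d[i] for i in order]
    x = []
    for i in range(len(order)):       # x = L^{-1} * 1
        s = 1.0 - sum((cs[j] - ds[j]) * x[j] for j in range(i))
        x.append(s / (cs[i] + ws[i]))
    dot = sum(di * xi for di, xi in zip(ds, x))
    return sum(x) / (1.0 + dot)       # sum of alpha = x / (1 + d^T x)

def compare_heuristics(c, w, z):
    """FIFO-INC-C vs FIFO-INC-W throughput for a platform with d = z*c."""
    d = [z * ci for ci in c]
    inc_c = sorted(range(len(c)), key=lambda i: c[i])   # FIFO-INC-C
    inc_w = sorted(range(len(c)), key=lambda i: w[i])   # FIFO-INC-W
    return fifo_rho(c, w, d, inc_c), fifo_rho(c, w, d, inc_w)
```

Note that with d_i = z c_i, ordering by c_i is the same as ordering by c_i + d_i, so FIFO-INC-C follows the ordering given by the FIFO theorem; on platforms where communication and computation speeds are anti-correlated (e.g. c = (1, 2, 3), w = (3, 2, 1)), it beats FIFO-INC-W.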

51 Simulations

[Plot] 100 processors, z = 80%. Curves: OPT-FIFO, OPT-LIFO, FIFO-INC-C, FIFO-INC-W. x-axis: ratio between average communication and computation costs (w/c); y-axis: throughput relative to the optimal FIFO throughput.

52 Simulations

[Plot] Number of processors actually used by the optimal FIFO schedule, as a function of the ratio between average communication and computation costs (w/c).

53 Simulations

[Plot] z = 1 (same size for initial messages and return messages). Curves: OPT-FIFO, OPT-LIFO, FIFO-INC-C, FIFO-INC-W. x-axis: ratio between average communication and computation costs (w/c); y-axis: relative throughput.

55 Conclusion

- Divisible load scheduling with return messages on star platforms
- A natural extension of classical studies
- Leads to considerable difficulties (quite unexpected in the linear model)
- The complexity of the problem is still open
- Characterization of the optimal FIFO and LIFO solutions

Future work:
- Investigate the general case
- Extend the results to the unidirectional one-port model (the master cannot send and receive at the same time)


More information

Real-time operating systems course. 6 Definitions Non real-time scheduling algorithms Real-time scheduling algorithm

Real-time operating systems course. 6 Definitions Non real-time scheduling algorithms Real-time scheduling algorithm Real-time operating systems course 6 Definitions Non real-time scheduling algorithms Real-time scheduling algorithm Definitions Scheduling Scheduling is the activity of selecting which process/thread should

More information

Mapping tightly-coupled applications on volatile resources

Mapping tightly-coupled applications on volatile resources Mapping tightly-coupled applications on volatile resources Henri Casanova, Fanny Dufossé, Yves Robert, Frédéric Vivien To cite this version: Henri Casanova, Fanny Dufossé, Yves Robert, Frédéric Vivien.

More information

Embedded Systems 14. Overview of embedded systems design

Embedded Systems 14. Overview of embedded systems design Embedded Systems 14-1 - Overview of embedded systems design - 2-1 Point of departure: Scheduling general IT systems In general IT systems, not much is known about the computational processes a priori The

More information

Divisible Job Scheduling in Systems with Limited Memory. Paweł Wolniewicz

Divisible Job Scheduling in Systems with Limited Memory. Paweł Wolniewicz Divisible Job Scheduling in Systems with Limited Memory Paweł Wolniewicz 2003 Table of Contents Table of Contents 1 1 Introduction 3 2 Divisible Job Model Fundamentals 6 2.1 Formulation of the problem.......................

More information

A comparison of sequencing formulations in a constraint generation procedure for avionics scheduling

A comparison of sequencing formulations in a constraint generation procedure for avionics scheduling A comparison of sequencing formulations in a constraint generation procedure for avionics scheduling Department of Mathematics, Linköping University Jessika Boberg LiTH-MAT-EX 2017/18 SE Credits: Level:

More information

Minimizing Mean Flowtime and Makespan on Master-Slave Systems

Minimizing Mean Flowtime and Makespan on Master-Slave Systems Minimizing Mean Flowtime and Makespan on Master-Slave Systems Joseph Y-T. Leung,1 and Hairong Zhao 2 Department of Computer Science New Jersey Institute of Technology Newark, NJ 07102, USA Abstract The

More information

Embedded Systems Design: Optimization Challenges. Paul Pop Embedded Systems Lab (ESLAB) Linköping University, Sweden

Embedded Systems Design: Optimization Challenges. Paul Pop Embedded Systems Lab (ESLAB) Linköping University, Sweden of /4 4 Embedded Systems Design: Optimization Challenges Paul Pop Embedded Systems Lab (ESLAB) Linköping University, Sweden Outline! Embedded systems " Example area: automotive electronics " Embedded systems

More information

Flow Algorithms for Two Pipelined Filtering Problems

Flow Algorithms for Two Pipelined Filtering Problems Flow Algorithms for Two Pipelined Filtering Problems Anne Condon, University of British Columbia Amol Deshpande, University of Maryland Lisa Hellerstein, Polytechnic University, Brooklyn Ning Wu, Polytechnic

More information

CPU SCHEDULING RONG ZHENG

CPU SCHEDULING RONG ZHENG CPU SCHEDULING RONG ZHENG OVERVIEW Why scheduling? Non-preemptive vs Preemptive policies FCFS, SJF, Round robin, multilevel queues with feedback, guaranteed scheduling 2 SHORT-TERM, MID-TERM, LONG- TERM

More information

Module 5: CPU Scheduling

Module 5: CPU Scheduling Module 5: CPU Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Multiple-Processor Scheduling Real-Time Scheduling Algorithm Evaluation 5.1 Basic Concepts Maximum CPU utilization obtained

More information

Semi-online Task Partitioning and Communication between Local and Remote Processors

Semi-online Task Partitioning and Communication between Local and Remote Processors Semi-online Task Partitioning and Communication between Local and Remote Processors Jaya Prakash Champati and Ben Liang Department of Electrical and Computer Engineering, University of Toronto, Canada

More information

Component-based Analysis of Worst-case Temperatures

Component-based Analysis of Worst-case Temperatures Component-based Analysis of Worst-case Temperatures Lothar Thiele, Iuliana Bacivarov, Jian-Jia Chen, Devendra Rai, Hoeseok Yang 1 How can we analyze timing properties in a compositional framework? 2 Modular

More information

Chapter 6: CPU Scheduling

Chapter 6: CPU Scheduling Chapter 6: CPU Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Multiple-Processor Scheduling Real-Time Scheduling Algorithm Evaluation 6.1 Basic Concepts Maximum CPU utilization obtained

More information

Overview: Synchronous Computations

Overview: Synchronous Computations Overview: Synchronous Computations barriers: linear, tree-based and butterfly degrees of synchronization synchronous example 1: Jacobi Iterations serial and parallel code, performance analysis synchronous

More information

Computing the throughput of probabilistic and replicated streaming applications

Computing the throughput of probabilistic and replicated streaming applications Computing the throughput of probabilistic and replicated streaming applications Anne Benoit, Matthieu Gallet, Bruno Gaujal, Yves Robert To cite this version: Anne Benoit, Matthieu Gallet, Bruno Gaujal,

More information

Parallel Performance Theory - 1

Parallel Performance Theory - 1 Parallel Performance Theory - 1 Parallel Computing CIS 410/510 Department of Computer and Information Science Outline q Performance scalability q Analytical performance measures q Amdahl s law and Gustafson-Barsis

More information

A Tighter Analysis of Work Stealing

A Tighter Analysis of Work Stealing A Tighter Analysis of Work Stealing Marc Tchiboukdjian Nicolas Gast Denis Trystram Jean-Louis Roch Julien Bernard Laboratoire d Informatique de Grenoble INRIA Marc Tchiboukdjian A Tighter Analysis of Work

More information

Energy Considerations for Divisible Load Processing

Energy Considerations for Divisible Load Processing Energy Considerations for Divisible Load Processing Maciej Drozdowski Institute of Computing cience, Poznań University of Technology, Piotrowo 2, 60-965 Poznań, Poland Maciej.Drozdowski@cs.put.poznan.pl

More information

Scheduling Tightly-Coupled Applications on Heterogeneous Desktop Grids

Scheduling Tightly-Coupled Applications on Heterogeneous Desktop Grids Scheduling Tightly-Coupled Applications on Heterogeneous Desktop Grids Henri Casanova, Fanny Dufossé, Yves Robert, Frédéric Vivien To cite this version: Henri Casanova, Fanny Dufossé, Yves Robert, Frédéric

More information

Algorithms for Temperature-Aware Task Scheduling in Microprocessor Systems

Algorithms for Temperature-Aware Task Scheduling in Microprocessor Systems Algorithms for Temperature-Aware Task Scheduling in Microprocessor Systems Marek Chrobak 1, Christoph Dürr 2, Mathilde Hurand 2, and Julien Robert 3 1 Department of Computer Science, University of California,

More information

Analytical Modeling of Parallel Systems

Analytical Modeling of Parallel Systems Analytical Modeling of Parallel Systems Chieh-Sen (Jason) Huang Department of Applied Mathematics National Sun Yat-sen University Thank Ananth Grama, Anshul Gupta, George Karypis, and Vipin Kumar for providing

More information

P C max. NP-complete from partition. Example j p j What is the makespan on 2 machines? 3 machines? 4 machines?

P C max. NP-complete from partition. Example j p j What is the makespan on 2 machines? 3 machines? 4 machines? Multiple Machines Model Multiple Available resources people time slots queues networks of computers Now concerned with both allocation to a machine and ordering on that machine. P C max NP-complete from

More information

HYBRID FLOW-SHOP WITH ADJUSTMENT

HYBRID FLOW-SHOP WITH ADJUSTMENT K Y BERNETIKA VOLUM E 47 ( 2011), NUMBER 1, P AGES 50 59 HYBRID FLOW-SHOP WITH ADJUSTMENT Jan Pelikán The subject of this paper is a flow-shop based on a case study aimed at the optimisation of ordering

More information

LPT rule: Whenever a machine becomes free for assignment, assign that job whose. processing time is the largest among those jobs not yet assigned.

LPT rule: Whenever a machine becomes free for assignment, assign that job whose. processing time is the largest among those jobs not yet assigned. LPT rule Whenever a machine becomes free for assignment, assign that job whose processing time is the largest among those jobs not yet assigned. Example m1 m2 m3 J3 Ji J1 J2 J3 J4 J5 J6 6 5 3 3 2 1 3 5

More information

Single machine scheduling with forbidden start times

Single machine scheduling with forbidden start times 4OR manuscript No. (will be inserted by the editor) Single machine scheduling with forbidden start times Jean-Charles Billaut 1 and Francis Sourd 2 1 Laboratoire d Informatique Université François-Rabelais

More information

CHAPTER 16: SCHEDULING

CHAPTER 16: SCHEDULING CHAPTER 16: SCHEDULING Solutions: 1. Job A B C A B C 1 5 8 6 row 1 0 3 1 Worker 2 6 7 9 reduction 2 0 1 3 3 4 5 3 3 1 2 0 column reduction A B C 1 0 2 1 Optimum: 2 0 0 3 Worker 1, Job A 3 1 1 0 2 B 3 C

More information

Analysis of Dynamic Scheduling Strategies for Matrix Multiplication on Heterogeneous Platforms

Analysis of Dynamic Scheduling Strategies for Matrix Multiplication on Heterogeneous Platforms of Dynamic Scheduling Strategies for Matrix Multiplication on Heterogeneous Platforms Olivier Beaumont, Loris Marchal To cite this version: Olivier Beaumont, Loris Marchal of Dynamic Scheduling Strategies

More information

TDDB68 Concurrent programming and operating systems. Lecture: CPU Scheduling II

TDDB68 Concurrent programming and operating systems. Lecture: CPU Scheduling II TDDB68 Concurrent programming and operating systems Lecture: CPU Scheduling II Mikael Asplund, Senior Lecturer Real-time Systems Laboratory Department of Computer and Information Science Copyright Notice:

More information

A Dynamic Programming algorithm for minimizing total cost of duplication in scheduling an outtree with communication delays and duplication

A Dynamic Programming algorithm for minimizing total cost of duplication in scheduling an outtree with communication delays and duplication A Dynamic Programming algorithm for minimizing total cost of duplication in scheduling an outtree with communication delays and duplication Claire Hanen Laboratory LIP6 4, place Jussieu F-75 252 Paris

More information

More on Input Distributions

More on Input Distributions More on Input Distributions Importance of Using the Correct Distribution Replacing a distribution with its mean Arrivals Waiting line Processing order System Service mean interarrival time = 1 minute mean

More information

This means that we can assume each list ) is

This means that we can assume each list ) is This means that we can assume each list ) is of the form ),, ( )with < and Since the sizes of the items are integers, there are at most +1pairs in each list Furthermore, if we let = be the maximum possible

More information

Integer Programming Part II

Integer Programming Part II Be the first in your neighborhood to master this delightful little algorithm. Integer Programming Part II The Branch and Bound Algorithm learn about fathoming, bounding, branching, pruning, and much more!

More information

Parallelization of Multilevel Preconditioners Constructed from Inverse-Based ILUs on Shared-Memory Multiprocessors

Parallelization of Multilevel Preconditioners Constructed from Inverse-Based ILUs on Shared-Memory Multiprocessors Parallelization of Multilevel Preconditioners Constructed from Inverse-Based ILUs on Shared-Memory Multiprocessors J.I. Aliaga 1 M. Bollhöfer 2 A.F. Martín 1 E.S. Quintana-Ortí 1 1 Deparment of Computer

More information

Single Machine Models

Single Machine Models Outline DM87 SCHEDULING, TIMETABLING AND ROUTING Lecture 8 Single Machine Models 1. Dispatching Rules 2. Single Machine Models Marco Chiarandini DM87 Scheduling, Timetabling and Routing 2 Outline Dispatching

More information

Real-Time and Embedded Systems (M) Lecture 5

Real-Time and Embedded Systems (M) Lecture 5 Priority-driven Scheduling of Periodic Tasks (1) Real-Time and Embedded Systems (M) Lecture 5 Lecture Outline Assumptions Fixed-priority algorithms Rate monotonic Deadline monotonic Dynamic-priority algorithms

More information

Real-time Scheduling of Periodic Tasks (1) Advanced Operating Systems Lecture 2

Real-time Scheduling of Periodic Tasks (1) Advanced Operating Systems Lecture 2 Real-time Scheduling of Periodic Tasks (1) Advanced Operating Systems Lecture 2 Lecture Outline Scheduling periodic tasks The rate monotonic algorithm Definition Non-optimality Time-demand analysis...!2

More information

Recoverable Robustness in Scheduling Problems

Recoverable Robustness in Scheduling Problems Master Thesis Computing Science Recoverable Robustness in Scheduling Problems Author: J.M.J. Stoef (3470997) J.M.J.Stoef@uu.nl Supervisors: dr. J.A. Hoogeveen J.A.Hoogeveen@uu.nl dr. ir. J.M. van den Akker

More information

More Approximation Algorithms

More Approximation Algorithms CS 473: Algorithms, Spring 2018 More Approximation Algorithms Lecture 25 April 26, 2018 Most slides are courtesy Prof. Chekuri Ruta (UIUC) CS473 1 Spring 2018 1 / 28 Formal definition of approximation

More information

Revisiting Matrix Product on Master-Worker Platforms

Revisiting Matrix Product on Master-Worker Platforms INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET EN AUTOMATIQUE Revisiting Matrix Product on Master-Worker Platforms Jack Dongarra Jean-François Pineau Yves Robert Zhiao Shi Frédéric Vivien N 6053 December

More information

Parallel Performance Theory

Parallel Performance Theory AMS 250: An Introduction to High Performance Computing Parallel Performance Theory Shawfeng Dong shaw@ucsc.edu (831) 502-7743 Applied Mathematics & Statistics University of California, Santa Cruz Outline

More information

Arnaud Legrand, Olivier Beaumont, Loris Marchal, Yves Robert

Arnaud Legrand, Olivier Beaumont, Loris Marchal, Yves Robert Arnaud Legrand, Olivier Beaumont, Loris Marchal, Yves Robert École Nor 46 Allée d I Télé Téléc Adresse é when using a general gr comes from the decompo are used concurrently to that a concrete scheduli

More information

2/5/07 CSE 30341: Operating Systems Principles

2/5/07 CSE 30341: Operating Systems Principles page 1 Shortest-Job-First (SJR) Scheduling Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time Two schemes: nonpreemptive once

More information

Distributed Systems Byzantine Agreement

Distributed Systems Byzantine Agreement Distributed Systems Byzantine Agreement He Sun School of Informatics University of Edinburgh Outline Finish EIG algorithm for Byzantine agreement. Number-of-processors lower bound for Byzantine agreement.

More information

Multiprocessor Scheduling I: Partitioned Scheduling. LS 12, TU Dortmund

Multiprocessor Scheduling I: Partitioned Scheduling. LS 12, TU Dortmund Multiprocessor Scheduling I: Partitioned Scheduling Prof. Dr. Jian-Jia Chen LS 12, TU Dortmund 22/23, June, 2015 Prof. Dr. Jian-Jia Chen (LS 12, TU Dortmund) 1 / 47 Outline Introduction to Multiprocessor

More information

Algorithms. Outline! Approximation Algorithms. The class APX. The intelligence behind the hardware. ! Based on

Algorithms. Outline! Approximation Algorithms. The class APX. The intelligence behind the hardware. ! Based on 6117CIT - Adv Topics in Computing Sci at Nathan 1 Algorithms The intelligence behind the hardware Outline! Approximation Algorithms The class APX! Some complexity classes, like PTAS and FPTAS! Illustration

More information

Approximation Algorithms for scheduling

Approximation Algorithms for scheduling Approximation Algorithms for scheduling Ahmed Abu Safia I.D.:119936343, McGill University, 2004 (COMP 760) Approximation Algorithms for scheduling Leslie A. Hall The first Chapter of the book entitled

More information

Real-Time Systems. Event-Driven Scheduling

Real-Time Systems. Event-Driven Scheduling Real-Time Systems Event-Driven Scheduling Marcus Völp, Hermann Härtig WS 2013/14 Outline mostly following Jane Liu, Real-Time Systems Principles Scheduling EDF and LST as dynamic scheduling methods Fixed

More information

MPI Implementations for Solving Dot - Product on Heterogeneous Platforms

MPI Implementations for Solving Dot - Product on Heterogeneous Platforms MPI Implementations for Solving Dot - Product on Heterogeneous Platforms Panagiotis D. Michailidis and Konstantinos G. Margaritis Abstract This paper is focused on designing two parallel dot product implementations

More information

arxiv:cs/ v1 [cs.ds] 18 Oct 2004

arxiv:cs/ v1 [cs.ds] 18 Oct 2004 A Note on Scheduling Equal-Length Jobs to Maximize Throughput arxiv:cs/0410046v1 [cs.ds] 18 Oct 2004 Marek Chrobak Christoph Dürr Wojciech Jawor Lukasz Kowalik Maciej Kurowski Abstract We study the problem

More information

Feasibility Pump Heuristics for Column Generation Approaches

Feasibility Pump Heuristics for Column Generation Approaches 1 / 29 Feasibility Pump Heuristics for Column Generation Approaches Ruslan Sadykov 2 Pierre Pesneau 1,2 Francois Vanderbeck 1,2 1 University Bordeaux I 2 INRIA Bordeaux Sud-Ouest SEA 2012 Bordeaux, France,

More information

Revisiting Matrix Product on Master-Worker Platforms

Revisiting Matrix Product on Master-Worker Platforms Laboratoire de l Informatique du Parallélisme École Normale Supérieure de Lyon Unité Mixte de Recherche NRS-INRIA-ENS LYON-UBL n o 5668 Revisiting Matrix Product on Master-Worker Platforms Jack Dongarra,

More information

3.8 Strong valid inequalities

3.8 Strong valid inequalities 3.8 Strong valid inequalities By studying the problem structure, we can derive strong valid inequalities which lead to better approximations of the ideal formulation conv(x ) and hence to tighter bounds.

More information

Logic-based Benders Decomposition

Logic-based Benders Decomposition Logic-based Benders Decomposition A short Introduction Martin Riedler AC Retreat Contents 1 Introduction 2 Motivation 3 Further Notes MR Logic-based Benders Decomposition June 29 July 1 2 / 15 Basic idea

More information

Fundamental Algorithms for System Modeling, Analysis, and Optimization

Fundamental Algorithms for System Modeling, Analysis, and Optimization Fundamental Algorithms for System Modeling, Analysis, and Optimization Edward A. Lee, Jaijeet Roychowdhury, Sanjit A. Seshia UC Berkeley EECS 144/244 Fall 2010 Copyright 2010, E. A. Lee, J. Roydhowdhury,

More information

Off-line and on-line scheduling on heterogeneous master-slave platforms

Off-line and on-line scheduling on heterogeneous master-slave platforms INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET EN AUTOMATIQUE Off-line and on-line scheduling on heterogeneous master-slave platforms Jean-François Pineau Yves Robert Frédéric Vivien N 5634 July 2005

More information

Reclaiming the energy of a schedule: models and algorithms

Reclaiming the energy of a schedule: models and algorithms Author manuscript, published in "Concurrency and Computation: Practice and Experience 25 (2013) 1505-1523" DOI : 10.1002/cpe.2889 CONCURRENCY AND COMPUTATION: PRACTICE AND EXPERIENCE Concurrency Computat.:

More information

Computing the throughput of replicated workflows on heterogeneous platforms

Computing the throughput of replicated workflows on heterogeneous platforms Laboratoire de l Informatique du Parallélisme École Normale Supérieure de Lyon Unité Mixte de Recherche CNRS-INRIA-ENS LYON-UCBL n o 5668 Computing the throughput of replicated workflows on heterogeneous

More information

Energy-Efficient Broadcast Scheduling. Speed-Controlled Transmission Channels

Energy-Efficient Broadcast Scheduling. Speed-Controlled Transmission Channels for Speed-Controlled Transmission Channels Joint work with Christian Gunia from Freiburg University in ISAAC 06. 25.10.07 Outline Problem Definition and Motivation 1 Problem Definition and Motivation 2

More information

TDDI04, K. Arvidsson, IDA, Linköpings universitet CPU Scheduling. Overview: CPU Scheduling. [SGG7] Chapter 5. Basic Concepts.

TDDI04, K. Arvidsson, IDA, Linköpings universitet CPU Scheduling. Overview: CPU Scheduling. [SGG7] Chapter 5. Basic Concepts. TDDI4 Concurrent Programming, Operating Systems, and Real-time Operating Systems CPU Scheduling Overview: CPU Scheduling CPU bursts and I/O bursts Scheduling Criteria Scheduling Algorithms Multiprocessor

More information