
CS4310.01 Introduction to Operating System, Spring 2016, Dr. Zhizhang Shen

Season Finale: Which one is better?

1 Background

In this lab, we will study, and compare, two processor scheduling policies via simulation: FCFS (First-Come-First-Served) and the more sophisticated RR (Round Robin). The former is easy to understand, while the latter is perhaps the most widely used of all the scheduling algorithms. Below is a general, but simplified, model of processor scheduling:

More specifically, with the FCFS algorithm, every process, when it joins the Ready list for the first time, is given a time stamp. The dispatcher simply picks the process with the earliest arrival time stamp. This is an example of a non-preemptive algorithm, in the sense that, once a process is picked, it stays on the processor until it is completed.

On the other hand, with the RR algorithm, given n processes, p_0, p_1, ..., p_{n-1}, the dispatcher assigns the processor to each process for a certain amount of time[1], referred to as the time slice henceforth. When a new process arrives, it is added to the list with its arrival time as a time stamp; and when the time slice of the currently running process is used up, but the process itself is not yet completed, the process is put back into the list with a new time stamp, i.e., the system time when this round is done. The dispatcher then picks the process that has stayed in the list the longest, i.e., the one with the smallest time stamp, as the next process to run. This is certainly an example of a preemptive algorithm, in the sense that a process can be put back into the Ready list before it is completed.

For a description of these two policies, please check Chapter 9, Uniprocessor Scheduling, of the text book, and pages 13 through 17 of the lecture notes. For discussion of a comprehensive example, please check pages 26 through 31 of the lecture notes, as well as Figure 9.5 and Table 9.5 in the book.
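As a concrete illustration of the FCFS rule just described, here is a minimal sketch in Java (the handout allows C or Java; the class and method names here are my own, not required by the assignment). It dispatches processes in arrival-stamp order and, being non-preemptive, runs each one to completion; the service times happen to be those of the worked example in Section 2 below.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class Fcfs {
    /**
     * Simulate FCFS for processes given in arrival-time order.
     * arrival[i] and service[i] describe process p_i; the returned
     * array holds each p_i's turnaround time T_r(i).
     */
    static long[] run(long[] arrival, long[] service) {
        Queue<Integer> ready = new ArrayDeque<>(); // FIFO = earliest time stamp first
        for (int i = 0; i < arrival.length; i++) ready.add(i);

        long[] turnaround = new long[arrival.length];
        long clock = 0;
        while (!ready.isEmpty()) {
            int i = ready.remove();
            if (clock < arrival[i]) clock = arrival[i]; // processor idles until p_i arrives
            clock += service[i];                        // non-preemptive: runs to completion
            turnaround[i] = clock - arrival[i];
        }
        return turnaround;
    }

    public static void main(String[] args) {
        // the five processes of Table 1 in Section 2, all arriving at t = 0
        long[] tr = run(new long[]{0, 0, 0, 0, 0}, new long[]{350, 125, 475, 250, 75});
        for (int i = 0; i < tr.length; i++)
            System.out.printf("T_r(p_%d) = %d%n", i, tr[i]);
    }
}
```

With all arrivals at t = 0, this reproduces the turnaround times 350, 475, 950, 1200 and 1275 computed in Section 2.1.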
The objective of such a scheduling policy is to achieve a desirable compromise among minimal average waiting time, maximum throughput, and processor utilization for the forthcoming processes. Since there is no way to know these quantities beforehand, the study of such policies is often carried out through simulation, with randomly generated data. We want to gain such experience through this project, by studying the impact of such a

[1] This is an approximation, since we also have to consider the cost of scheduling and of switching processes, referred to as the context switching time in later discussion.

policy on the average normalized turnaround time of a collection of processes with randomly generated arrival times and service times. You can use either C or Java to complete this project. It is clear that the appropriate data structure to organize the Ready queue for the RR algorithm is the priority queue (as we discussed in the algorithm class, CS3221, on Heapsort), while that for FCFS is just the usual queue. My lecture notes for the algorithm course are available via the following link: turing.plymouth.edu/~zshen/webfiles/cs3221spring2016.html.

2 Scheduling algorithms

We can characterize a scheduling algorithm with the following quantities. Given a process p_i, i in [0, n):

- by T_s(i), the service time associated with p_i, we mean the amount of processing time it takes p_i to complete;
- by T_r(i), the turnaround time for p_i, we mean the total amount of time that p_i has to stay in the system, whether running, ready, blocked, etc.; and
- by the normalized turnaround time for p_i, we mean T_r(i)/T_s(i).

It is certainly true that, for all i in [0, n), T_r(i)/T_s(i) >= 1, and the closer this ratio is to 1, the better.

For example, we can compare the two algorithms on a given load, as shown in Table 1, assuming that all the processes arrive at t = 0.

Table 1: An example of process load information

  Index (i)   Service time (T_s(i))
  0           350
  1           125
  2           475
  3           250
  4            75

Notice that in the example we went through in class, i.e., Figure 9.5, the five processes arrive at different times.

2.1 For the FCFS algorithm

The following schedule shows the assignment of the processor to the various processes under the FCFS algorithm, assuming all of them arrive at t = 0:

  p_0: 0-350 | p_1: 350-475 | p_2: 475-950 | p_3: 950-1200 | p_4: 1200-1275

It is easy to calculate the turnaround times T_r for these processes as follows: T_r(p_0) = 350.

T_r(p_1) = 475 (= 350 + 125). T_r(p_2) = 950 (= 475 + 475). T_r(p_3) = 1200 (= 950 + 250). T_r(p_4) = 1275 (= 1200 + 75).

For example, process p_1, although arriving at t = 0, has to wait for p_0 to finish at t = 350 before it can start to run, since it is second in the queue. Hence the average turnaround time is the sum of the above turnaround times divided by the number of processes, i.e., 850.

From the above, we can also easily calculate the respective normalized turnaround times, T_r/T_s, as follows: T_N.Trnd(p_0) = T_r(p_0)/T_s(p_0) = 350/350 = 1; T_N.Trnd(p_1) = 475/125 = 3.8; T_N.Trnd(p_2) = 950/475 = 2; T_N.Trnd(p_3) = 1200/250 = 4.8; T_N.Trnd(p_4) = 1275/75 = 17. As a result, the average normalized turnaround time is 5.72.

Moreover, Table 2 shows that this policy favors longer processes, consistent with the observation made in Figure 9.14 in the book.

Table 2: Turnaround time in terms of service time with FCFS

  Index (i)   Service time (T_s(i))   T_N.Trnd(i)
  4            75                     17
  1           125                     3.8
  3           250                     4.8
  0           350                     1
  2           475                     2

2.2 For the RR algorithm

Again, we first give the schedule of the processor among the various processes under the RR algorithm. We initially ignore the context switching time, namely, the overhead the system incurs to switch processes in and out, and we assume a time slice of 50 units. Each 50-unit slot is charged in full, even when the process running in it completes early:

  Round 1 (0-250):      p_0 p_1 p_2 p_3 p_4
  Round 2 (250-500):    p_0 p_1 p_2 p_3 p_4   (p_4 completes at t = 475)
  Round 3 (500-700):    p_0 p_1 p_2 p_3       (p_1 completes at t = 575)
  Round 4 (700-850):    p_0 p_2 p_3
  Round 5 (850-1000):   p_0 p_2 p_3           (p_3 completes at t = 1000)
  Round 6 (1000-1100):  p_0 p_2
  Round 7 (1100-1150):  p_0                   (p_0 completes at t = 1150)
  After   (1150-1325):  p_2                   (p_2 completes at t = 1325)

For example, p_0 starts at t = 0 and stops at t = 50; then p_1 starts and stops at t = 100; ...; p_4 completes at t = 475, so half of its slot is wasted; p_0 starts again at t = 500 and eventually completes at t = 1150, after which p_2 runs alone until t = 1325, which also wraps up the whole show.

Table 3: Turnaround time in terms of service time with RR(50)

  Index (i)   Service time (T_s(i))   T_Trnd(i)   T_N.Trnd(i)
  4            75                      475        6.33
  1           125                      575        4.6
  3           250                     1000        4
  0           350                     1150        3.29
  2           475                     1325        2.79

Following exactly the same approach, we can calculate that the average turnaround time for all the processes is 905 (= 4525/5), and the average normalized turnaround time is 4.20. Thus, compared with the FCFS approach, the RR algorithm, under the above assumptions, leads to about the same average turnaround time, but an average normalized turnaround time about 26.6% lower for this set of data. Moreover, as shown in Table 3, this policy also favors longer processes, but the results are not as far apart as what we saw with the FCFS policy. This observation is also consistent with what is demonstrated in Figure 9.14.

2.2.1 How about the context switching time?

An RR-based dispatcher rotates among processes, so process switching becomes an issue. If we assume that each process switch takes 10 units of time, then the initial segment of the schedule looks like the following: p_0 starts at t = 0 and stops at t = 50; after the system spends 10 units of time switching context, p_1 starts at t = 60 and stops at t = 110; p_2 then kicks off at t = 120 and stops at t = 170; and so on:

  p_0: 0-50 | switch | p_1: 60-110 | switch | p_2: 120-170 | switch | p_3: 180-230 | switch | p_4: 240-290 | switch | p_0: 300-350 | ...

3 What to do in this project?
Besides implementing the FCFS and RR algorithms, you will also compare their performance by studying the impact of the length of the time slice, and of the context switching time, on the average turnaround time and the average waiting time of all the processes, as dispatched by these two policies.
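For the RR side, the dispatcher described in Section 2.2 might be sketched as follows, again in Java with hypothetical names. It assumes the slot model used in the worked example: every slot consumes a full time slice (plus the context switching time, if any) even when the running process completes partway through, a preempted process is re-stamped with the time its round ended, and the Ready list is a priority queue on that stamp. With a slice of 50 and zero switching cost, it reproduces the completion times behind Table 3.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class RoundRobin {
    /**
     * Simulate RR for processes that all arrive at t = 0.
     * service[i] is p_i's service time; returns each p_i's completion time.
     * Model (an assumption taken from the worked example): each slot is
     * charged slice + switchCost in full, even if the process finishes early.
     */
    static long[] run(long[] service, long slice, long switchCost) {
        int n = service.length;
        long[] remaining = service.clone();
        long[] finish = new long[n];
        // smallest time stamp first; the index breaks the ties at t = 0
        PriorityQueue<long[]> ready = new PriorityQueue<>(
                Comparator.<long[]>comparingLong(p -> p[1]).thenComparingLong(p -> p[0]));
        for (int i = 0; i < n; i++) ready.add(new long[]{i, 0});

        long clock = 0;
        while (!ready.isEmpty()) {
            int i = (int) ready.poll()[0];
            if (remaining[i] <= slice) {              // completes inside this slot
                finish[i] = clock + remaining[i];
                remaining[i] = 0;
            } else {                                  // preempted: back on the list
                remaining[i] -= slice;
                ready.add(new long[]{i, clock + slice}); // new time stamp
            }
            clock += slice + switchCost;              // full slot (+ switch) elapses
        }
        return finish;
    }

    public static void main(String[] args) {
        long[] finish = run(new long[]{350, 125, 475, 250, 75}, 50, 0);
        for (int i = 0; i < finish.length; i++)
            System.out.printf("p_%d completes at t = %d%n", i, finish[i]);
    }
}
```

For the project proper, the same routine would be extended to handle non-zero arrival times from the generated load; the slice and switchCost parameters already cover the combinations listed below.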

1. As the input data for the process load information, randomly generate a large collection, e.g., n = 1000, of process loads, similar to that given in Table 1. Each process is characterized by two pieces of information, besides its identifier: arrival time and service time[2]. For example, the first few lines might look as follows:

   1 30 0.783560
   2 54 17.282004
   3 97 32.814522
   4 133 39.986750
   ...

This means that the first process arrives at time 30 and requests 0.783560 seconds, i.e., about 784 ms, of CPU time, etc. Notice that the above lines are sorted by arrival time; you may delete entries with duplicated arrival times, if there are any.

2. Find the average turnaround time and, more importantly, the average normalized turnaround time for such a process load as dispatched by the FCFS and RR policies, respectively. For the latter policy, let the switching time range over 0, 5, 10, 15, 20 and 25 ms, and the time slice over 50, 100, 250 and 500 ms.

3. Sort the 1,000 processes by the length of their service time, and classify them into 20 groups of 50 processes each. For each group, find the average normalized turnaround time under both the FCFS policy and the RR policy, with time slices of 50, 250 and 500 ms and a switching time of 10 ms. Thus, the average normalized turnaround time for the first group will be that of the 50 shortest processes, and that of the last group will be that of the 50 longest processes, under the respective policy. All in all, there should be four rows of data, 20 entries per row, giving the average normalized turnaround time of the 20 process groups according to their relative length.
For example, the first row gives the average normalized turnaround time of the 20 process groups, arranged in terms of their length, under the FCFS policy; the second row gives the same quantity under the RR policy with a time slice of 50 ms and a process switching time of 10 ms; etc.

4. Besides tabulating the results, demonstrate them graphically, with, e.g., Microsoft Excel, using Figure 9.14 as an example.

4 What to send in?

Email me the source code and a lab report, containing 1) a brief but complete description of how you have implemented the two scheduling algorithms, especially the Round Robin policy

[2] You might use a random number generator with a scope of [0, 10000) to generate the arrival times, and another random number generator with a scope of [0, 500) to generate the service times.

in terms of a priority queue, with and without taking the context switching time into consideration; 2) the process of collecting the data; 3) the tabulated data; 4) the comparative chart(s); and 5) your conclusion as to which policy looks better in terms of average normalized turnaround time and, for the RR policy, which combination of context switching time and time slice is most promising for this set of data.

Below is a general guideline for grading:

3(+/-): A serious effort, as demonstrated by your program and the lab report, is made to implement the aforementioned two scheduling algorithms and to address all five of the aforementioned issues in your report.

4(+/-): Based on the implemented policies and a randomly generated process load, various combinations of time slice and context switching time are investigated, and non-trivial results are obtained.

5: Tabulated data and comparative chart(s) on average (normalized) turnaround time are provided for the chosen samples, based on which a justified conclusion is reached.
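Finally, the random load called for in step 1 of Section 3 might be generated as in the sketch below, using the ranges suggested in footnote 2; the seed, the units of the service time, and the names are my own assumptions, not part of the assignment. A TreeMap keeps the entries sorted by arrival time and makes it easy to drop duplicated arrival times.

```java
import java.util.Map;
import java.util.Random;
import java.util.TreeMap;

public class LoadGen {
    /** Generate n processes: arrival time in [0, 10000), service time in [0, 500). */
    static TreeMap<Integer, Double> generate(int n, long seed) {
        Random rng = new Random(seed);                   // fixed seed: reproducible runs
        TreeMap<Integer, Double> load = new TreeMap<>(); // sorted by arrival time
        while (load.size() < n) {
            int arrival = rng.nextInt(10000);
            double service = rng.nextDouble() * 500;
            load.putIfAbsent(arrival, service);          // drop duplicated arrival times
        }
        return load;
    }

    public static void main(String[] args) {
        int id = 1; // print in the handout's "id arrival service" format
        for (Map.Entry<Integer, Double> e : generate(1000, 42).entrySet())
            System.out.printf("%d %d %f%n", id++, e.getKey(), e.getValue());
    }
}
```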