Simulation of Process Scheduling Algorithms


Simulation of Process Scheduling Algorithms
Project Report
Instructor: Dr. Raimund Ege
Submitted by: Sonal Sood, Pramod Barthwal

Index
1. Introduction
2. Proposal
3. Background
   3.1 What is a Process
4. Life Cycle of a Process
5. Scheduling Algorithms
6. Simulation
   6.1 A Simple Class Diagram
7. Analysis of Algorithms
   7.1 First Come First Served
   7.2 Round Robin
   7.3 Shortest Process First
   7.4 Highest Response Ratio Next
   7.5 Shortest Remaining
8. Graphical Representation
9. Conclusion
10. Further Work

1. Introduction

Scheduling is a fundamental operating-system function. Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is carried out by the short-term scheduler, which chooses from among the processes in memory that are ready to execute and allocates the CPU to one of them. All processes in the ready queue wait for a chance to run on the CPU; the queue records are generally the PCBs (Process Control Blocks) of the processes.

Another important component of CPU scheduling is the dispatcher, the module that gives control of the CPU to the process selected by the short-term scheduler. This involves switching context and jumping to the proper location in the user program to restart it.

Our goal is to simulate process scheduling algorithms in order to evaluate how the choice of a particular algorithm affects CPU utilization, and how a scheduler decides when a processor should be assigned and to which process. Different CPU scheduling algorithms have different properties and may favor one class of processes over another. We have programmed a model of the computer system and implemented the scheduling algorithms using software data structures that represent the major components of the system; these are discussed in section 6.

2. Proposal

When a system has a choice of processes to execute, it must have a strategy, called a process scheduling policy, for deciding which process to run at a given time. A scheduling policy should attempt to satisfy certain performance criteria, such as maximizing throughput, minimizing latency, preventing indefinite postponement of processes, and maximizing processor utilization. It is the job of the scheduler, or dispatcher, to assign a processor to the selected process.
Our project implements various process scheduling algorithms that determine at run time which process runs next. These algorithms decide when, and for how long, each process runs; they make choices about preemptibility, priorities, running time to completion, and fairness. We simulate these algorithms and compare them against the parameters mentioned above.

3. Background

3.1. What is a Process

A process is the locus of control of a procedure in execution, manifested by the existence of a data structure called the Process Control Block. Each process has its own address space, which typically consists of a Text region, a Data region and a Stack region. The Text region stores the code that the processor executes. The Data region stores the variables and dynamically allocated memory that the process uses during execution. The Stack region stores instructions and local variables for active procedure calls; its contents grow as the process issues nested procedure calls and shrink as procedures return.

4. Life Cycle of a Process

During its lifetime, a process moves through a series of discrete states:

Running: the process is executing on a processor.
Ready: the process could execute on a processor if one were available.
Blocked: the process is waiting for some event to happen, e.g. an I/O-completion event.
Suspended: the process has been indefinitely removed from contention for time on a processor without being destroyed. A process may suspend itself, a ready process, or a blocked process.
Suspended Ready: a ready process that has been suspended; when it returns to contention for processor time, it is placed in the suspended-ready queue.
Suspended Blocked: a blocked process that has been suspended while still awaiting its event.

[Figure: life cycle of a process, showing the Ready, Running and Blocked states (awake/asleep) connected by the Dispatch, Run-out, Block and Wakeup transitions.]
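The awake-state transitions in the figure above can be sketched as a tiny state machine. This is our own illustrative code, not the project's; the suspended states are omitted for brevity, and any transition that does not apply leaves the state unchanged.

```java
public class LifeCycle {
    public enum State { READY, RUNNING, BLOCKED }

    // Transitions taken from the life-cycle figure above.
    public static State dispatch(State s)    { return s == State.READY   ? State.RUNNING : s; }
    public static State timerRunOut(State s) { return s == State.RUNNING ? State.READY   : s; }
    public static State block(State s)       { return s == State.RUNNING ? State.BLOCKED : s; }
    public static State wakeup(State s)      { return s == State.BLOCKED ? State.READY   : s; }
}
```

For example, a Ready process that is dispatched becomes Running; if it then issues an I/O request it blocks, and the I/O-completion event wakes it back into Ready.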

What is a Thread?

A thread is a lightweight process that shares many resources of the heavyweight process, such as its address space, to improve the efficiency with which tasks are performed. A thread represents a single thread of instructions, or thread of control, and threads within a process execute concurrently to attain a common goal.

Need for Scheduling

The main objective of multiprogramming is to have some process running at all times, so as to maximize CPU utilization. In a uniprocessor system only one process runs at a time; the others wait until the CPU is free. If the executing process requires I/O, the processor remains idle for that period, and all this waiting time is wasted. With multiprogramming we try to use this time productively: several processes are kept in memory, and when one process has to wait, the operating system takes the CPU away from it and gives it to another process. CPU scheduling is thus the foundation of multiprogramming. Scheduling helps reduce the waiting time and response time of processes, and it also increases the throughput of the system.

What is a Processor Scheduling Policy?

When a system has a choice of processes to execute, it must have a strategy for deciding which process to run at a given time. This strategy is known as the processor scheduling policy. Different scheduling algorithms have different properties and may favor one class of processes over another. In choosing which algorithm to use in a particular situation, we compare the algorithms on the following characteristics.

CPU utilization: We want to keep the CPU as busy as possible. Utilization ranges from 0 to 100%; in real systems it typically ranges from 40% to 90%. For the purpose of this simulation we have assumed that CPU utilization is 100%.

Throughput: The work done by the CPU is directly proportional to CPU utilization. The number of processes completed per unit time, called throughput, measures the work done by the CPU; algorithms should try to maximize it.

Turnaround time: The interval from submission of a job to its completion. It includes both the waiting time and the service time of the process.

Waiting time: The amount of time a process spends waiting in the ready queue. A scheduling algorithm does not affect the service time of a process, but it does affect its waiting time, which should be kept to a minimum.

Response time: The interval from the submission of a process to the ready queue until the process receives its first response. Response time should also be kept to a minimum.
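These definitions translate directly into arithmetic on per-process timestamps. A minimal sketch (names are ours, not the project's):

```java
public class Metrics {
    // turnaround = completion time - submission time
    public static int turnaround(int arrival, int finish) {
        return finish - arrival;
    }
    // waiting = turnaround - service, since turnaround = waiting + service
    public static int waiting(int arrival, int service, int finish) {
        return turnaround(arrival, finish) - service;
    }
    // response = time of first dispatch - submission time
    public static int response(int arrival, int firstRun) {
        return firstRun - arrival;
    }
}
```

For example, a process arriving at time 2 with service time 4 that finishes at time 10 has a turnaround of 8 and a waiting time of 4 (these are P1's figures in the FCFS dataset of section 7.1).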

Besides the above measures, a scheduling algorithm should also exhibit fairness, predictability and scalability.

5. Scheduling Algorithms

First Come First Served: This is the simplest process-scheduling algorithm: the process that requests the CPU first is allocated the CPU first. The implementation consists of a FIFO queue. A process enters the ready queue and gradually moves to its head; when it reaches the head of the queue, it is allocated the processor as soon as the processor becomes free. This algorithm generally has a long average waiting time.

Round Robin: Round Robin is best suited to time-sharing systems. It is very similar to FCFS, except that preemption is added to switch between processes. A time called the quantum is introduced, which is the time for which a process runs on the processor. After the quantum expires the process is preempted, and the next process takes control of the processor for the following quantum.

Shortest Process First: This algorithm associates with each process the length of that process's next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst; if two processes have next bursts of the same length, FCFS scheduling breaks the tie. SPF is optimal in that it gives the minimum average waiting time for a given set of processes: by moving short processes ahead of long ones, it decreases the waiting time of the short processes more than it increases the waiting time of the long ones, so the average waiting time falls.

Highest Response Ratio Next: This algorithm corrects some of the weaknesses of SPF. SPF is biased towards processes with short service times, which keeps longer processes waiting in the ready queue despite their arriving before the short jobs.
HRRN is a non-preemptive scheduling algorithm in which the priority is a function not only of the service time but also of the time the process has spent waiting in the ready queue; once a process obtains the processor, it runs to completion. The priority is calculated by the formula

    Priority = (Waiting + Service) / Service

Under this algorithm, too, short processes receive preference, but longer processes that have been waiting in the ready queue are also given favorable treatment.

Shortest Remaining: This is the preemptive counterpart of SPF. It gives preference to processes with the smaller remaining service time: if a new process arrives whose service time is less than the remaining time of the currently running process, the running process is preempted and the processor is given to the new arrival. This algorithm is no longer used in today's operating systems.
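The HRRN formula can be checked with a one-line helper (a sketch of our own; the project's actual code is not shown here). Note that the ratio is always at least 1 and grows as a process waits, so long jobs eventually overtake short new arrivals.

```java
public class HrrnPriority {
    // Priority = (waiting + service) / service, as given above.
    public static double priority(double waiting, double service) {
        return (waiting + service) / service;
    }
}
```

For example, a process that has waited 6 units and needs 2 units of service has priority (6 + 2) / 2 = 4, outranking a freshly arrived job whose priority starts at 1.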

6. Simulation

We have programmed a model of the computer system using software data structures that represent the major components discussed above. The ready queue and the memory are simulated using Vectors in which we store objects of class Process. A Process object contains information about the process and is updated as the process runs; in a real system this entity is called the PCB (Process Control Block).

The ready queue contains the list of processes that are ready to execute. It is maintained in a priority order that depends on the algorithm used to calculate the priority. In our simulation the ready queue has been programmed to serve processes First In First Out, Round Robin, Shortest Process First, Highest Response Ratio Next, and Shortest Remaining Time.

The simulator has a variable representing a clock; as this variable's value is increased, the simulator modifies the system state to reflect the activities of the devices, the processes, and the scheduler. A function called ProcessReady checks which processes are ready to enter the system at the current clock value. Preemption is also performed based on the current clock: if, under the algorithm in use, the next process in the ready queue should get the CPU, the current process is pushed back into the queue and the next process, as determined by how the algorithm calculates priority, is given CPU time. In a real system this is a context switch; we model its overhead as a simple cost variable that we will add to a process when it is preempted.

The scheduler is an abstract class that defines the basic components needed by every scheduler, such as the ready queue. FIFO, RR, SPF, SRT and HRRN are classes that extend this Scheduler class and implement the ready queue for their specific policy.
As the simulations run, statistics that indicate each algorithm's performance are gathered and printed; the analysis is shown in section 7. The data that drives the simulation is generated using a random-number generator, programmed to produce processes with CPU-burst (service) times and arrival times. The process PCB in our simulation consists of the following attributes: process id, service time, arrival time, finish time and response time. These are initialized for every process we generate, and the same set of processes is fed into each scheduling algorithm so that the algorithms' effects on the processes and the CPU can be compared.

Once a process gets the CPU, its remaining service time is updated. If the simulation performs a context switch, the currently running process is preempted and placed at the back of the ready queue, i.e. its PCB is saved, and the first process in the ready queue is then given the processor. At the end, the system outputs the arrival, service, turnaround, waiting and response times for each process executed. The output formats, the input and the analysis using this simulation model are shown in the sections that follow.

6.1. A Simple Class Diagram

Scheduler
    ReadyQ  : Vector
    FinishQ : Vector
    ProcessReady()
    Report()

Process
    Id       : Integer
    Service  : Integer
    Arrival  : Integer
    Finish   : Integer
    Response : Integer
    getId(), getArrival(), getService(), getLeft(),
    setFinish(), getFinish(), setResponse(), getResponse(), servicing()

FIFO, RR, SPF, HRRN and SRT extend Scheduler.
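The diagram can be rendered as the following Java skeleton. This is our own reconstruction: only the names come from the diagram; the method bodies, field visibility and constructor are assumptions.

```java
import java.util.Vector;

class Process {
    private final int id, arrival, service;
    private int left, finish = -1, response = -1;

    Process(int id, int arrival, int service) {
        this.id = id; this.arrival = arrival; this.service = service;
        this.left = service;                      // service time still remaining
    }
    int getId()      { return id; }
    int getArrival() { return arrival; }
    int getService() { return service; }
    int getLeft()    { return left; }
    void setFinish(int clock) { finish = clock; }
    int getFinish()  { return finish; }
    void setResponse(int clock) {                 // only the first dispatch counts
        if (response < 0) response = clock - arrival;
    }
    int getResponse() { return response; }
    void servicing() { left--; }                  // consume one clock tick of CPU
}

abstract class Scheduler {
    protected final Vector<Process> readyQ  = new Vector<>();
    protected final Vector<Process> finishQ = new Vector<>();
    // Enqueue a newly arrived process in the order this policy requires.
    abstract void processReady(Process p);
    void report() {
        for (Process p : finishQ)
            System.out.println(p.getId() + " finished at " + p.getFinish());
    }
}
```

Each concrete scheduler (FIFO, RR, SPF, HRRN, SRT) would override processReady() to insert arrivals at the position its priority rule dictates.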

7. Analysis

7.1. First Come First Served

Dataset for simulation:

Process  Arrival  Service  Finish  Turnaround  Waiting  Response
P0       1        5        6       5           0        0
P1       2        4        10      8           4        4
P2       3        6        16      13          7        7
P3       4        2        18      14          12       12
P4       5        7        25      20          13       13

Timeline (the CPU is idle from 0 to 1): P0 runs 1-6, P1 runs 6-10, P2 runs 10-16, P3 runs 16-18, P4 runs 18-25.

In the First In First Out scheduling algorithm, process P0 arrives in the ready queue first. Since FCFS allocates the processor to processes in the order of their arrival in the ready queue, P0 is allocated the processor, executes to completion, and is added to the finish queue. The processor is then allocated to P1, which is next in the ready queue; after P1 completes its execution, P2, P3 and P4 are each allocated the processor in turn. The processor is thus allocated to the processes in the order they arrive in the ready queue.

Limitations: In FCFS the average waiting time is quite long. Suppose we have a processor-bound job (generally with a long service time) together with several I/O-bound jobs. If the processor-bound job is allocated processor time, it will hold the CPU while the I/O-bound jobs wait in the ready queue and the I/O devices remain idle. In our test case, process P3, despite having a very short service time, had to wait until all the processes ahead of it ran to completion.

Average Turnaround: 12    Average Waiting: 7.2    Average Response: 7.2
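The FCFS figures above can be reproduced mechanically. The sketch below is our own code, not the project's; it walks the arrival-ordered process list, idling the CPU until each arrival when necessary.

```java
public class FcfsSim {
    // Inputs sorted by arrival time.
    // Returns {average turnaround, average waiting, average response}.
    public static double[] averages(int[] arrival, int[] service) {
        int n = arrival.length, clock = 0;
        double ta = 0, wait = 0, resp = 0;
        for (int i = 0; i < n; i++) {
            clock = Math.max(clock, arrival[i]);  // CPU idles until the process arrives
            resp += clock - arrival[i];           // non-preemptive: first run = only run
            clock += service[i];                  // run to completion
            ta   += clock - arrival[i];
            wait += clock - arrival[i] - service[i];
        }
        return new double[] { ta / n, wait / n, resp / n };
    }
}
```

With the dataset of this section (arrivals 1..5, services 5, 4, 6, 2, 7) this yields averages 12, 7.2 and 7.2, matching the table.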

7.2. Round Robin

Dataset for simulation (quantum = 3):

Process  Arrival  Service  Finish  Turnaround  Waiting  Response
P3       4        2        12      8           6        6
P0       1        5        14      13          8        0
P1       2        4        18      16          12       2
P2       3        6        21      18          12       4
P4       5        7        25      20          13       9

Timeline (the CPU is idle from 0 to 1): P0 runs 1-4, P1 runs 4-7, P2 runs 7-10, P3 runs 10-12, P0 runs 12-14, P4 runs 14-17, P1 runs 17-18, P2 runs 18-21, P4 runs 21-25.

Round Robin is essentially FCFS with preemption. P0 enters the ready queue when no other process is competing for processor time, so P0 is given the processor and starts executing. The quantum has been set to 3 time units. When P0 has run on the processor for one quantum it relinquishes the processor; at this point the clock reads 4, and P1, P2 and P3 have entered the ready queue, so P0 re-enters the ready queue after P3. Since P1 came into the ready queue first among the waiting processes, the processor is allocated to P1 for the next quantum. By the time P1's quantum expires, P4 has entered the ready queue after P0, so P1 re-enters the queue after P4. The processor is then allocated to P2, P3 and P4 in turn, each re-entering the ready queue after its quantum if it still has service time left. When a process completes its execution, it is removed from the ready queue and enters the finish queue.

Advantages: Round Robin exhibits fairness: all processes are treated equally and given equal processor time. Compared with FCFS, the average response time is considerably reduced. For example, process P3, which did not gain the processor until time 16 under FCFS, gains it at time 10 under Round Robin and runs to completion within a single quantum.
Limitations: The performance of a system implementing Round Robin depends mainly on the value of the quantum. If the quantum is set very high, the algorithm degenerates into FCFS and the system becomes sluggish; if the quantum is kept very low, frequent context switches produce more overhead. Round Robin with a low quantum is generally suitable for interactive systems; however, determining the optimal quantum is a difficult task.
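The quantum mechanics described above can be sketched as follows. This is our own code, not the project's; it resolves ties as in the trace above (processes arriving during a slice enter the queue before the preempted process re-enters) and ignores context-switch overhead.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class RoundRobinSim {
    // Inputs sorted by arrival time; returns per-process finish times.
    public static int[] finishTimes(int[] arrival, int[] service, int quantum) {
        int n = arrival.length;
        int[] left = service.clone();
        int[] finish = new int[n];
        Queue<Integer> ready = new ArrayDeque<>();
        int clock = arrival[0], next = 0;
        while (next < n && arrival[next] <= clock) ready.add(next++);
        while (!ready.isEmpty()) {
            int p = ready.remove();
            int run = Math.min(quantum, left[p]);
            clock += run;
            left[p] -= run;
            // Arrivals during the slice queue up before the preempted process.
            while (next < n && arrival[next] <= clock) ready.add(next++);
            if (left[p] > 0) ready.add(p);
            else finish[p] = clock;
            if (ready.isEmpty() && next < n) clock = arrival[next]; // idle gap
            while (next < n && arrival[next] <= clock) ready.add(next++);
        }
        return finish;
    }
}
```

With the dataset of this section and quantum 3, this reproduces the finish times 14, 18, 21, 12 and 25 for P0 through P4.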

Average Turnaround: 15    Average Waiting: 10.2    Average Response: 4.2

7.3. Shortest Process First

Dataset for simulation:

Process  Arrival  Service  Finish  Turnaround  Waiting  Response
P0       1        5        6       5           0        0
P3       4        2        8       4           2        2
P1       2        4        12      10          6        6
P2       3        6        18      15          9        9
P4       5        7        25      20          13       13

Timeline (the CPU is idle from 0 to 1): P0 runs 1-6, P3 runs 6-8, P1 runs 8-12, P2 runs 12-18, P4 runs 18-25.

Process P0 arrives in the ready queue when no other process is competing for processor time, so it is given the processor immediately. Since this is a non-preemptive algorithm, P0 executes to completion and is added to the finish queue. By the time P0 completes, the other processes P1, P2, P3 and P4 have entered the ready queue. When P0 finishes, the scheduler searches for the process with the minimum service time; of the four processes in the ready queue, P3 has the minimum service time (2), so P3 is allocated the processor. When P3 runs to completion, the process with the next smallest service time, P1, is allocated the processor, and this continues until all the processes have finished.

This clearly demonstrates that SPF gives preference to processes with short service times. As a result, processes with longer service times must wait longer for execution, despite entering the queue before the shorter processes, and this may cause indefinite postponement of processes with large service times. To prevent this, we have yet another scheduling algorithm in this series that gives favorable treatment to processes that have been waiting longer in the ready queue; HRRN is discussed next.

Advantages: Shorter processes are given preference. If the ready queue contains processor-bound processes and some I/O-bound processes, the I/O-bound processes are favored; as a result, system throughput increases and the average waiting time of the processes decreases. In our test case, process P3 waited only 2 time units, compared with 6 under Round Robin and 12 under FCFS.

Limitations: The algorithm is biased towards processes with short service times. Processes with long service times are therefore often kept waiting in the ready queue and may suffer indefinite postponement. Since it is a non-preemptive algorithm, it does not exhibit fairness.
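A sketch of the non-preemptive selection loop (ours, not the project's). Because the inputs are sorted by arrival, index order doubles as the FCFS tie-break.

```java
public class SpfSim {
    // Inputs sorted by arrival time; returns per-process finish times.
    public static int[] finishTimes(int[] arrival, int[] service) {
        int n = arrival.length;
        int[] finish = new int[n];
        boolean[] done = new boolean[n];
        int completed = 0, clock = 0;
        while (completed < n) {
            int pick = -1;
            for (int i = 0; i < n; i++) {
                if (done[i] || arrival[i] > clock) continue;
                // strict < keeps the earlier arrival on ties (FCFS tie-break)
                if (pick < 0 || service[i] < service[pick]) pick = i;
            }
            if (pick < 0) { clock++; continue; }  // nothing has arrived yet: idle one tick
            clock += service[pick];               // non-preemptive: run to completion
            finish[pick] = clock;
            done[pick] = true;
            completed++;
        }
        return finish;
    }
}
```

On the dataset of this section this yields finish times 6, 12, 18, 8 and 25 for P0 through P4, matching the table.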

Average Turnaround: 10.8    Average Waiting: 6    Average Response: 6

7.4. Highest Response Ratio Next

Dataset for simulation:

Process  Arrival  Service  Finish  Turnaround  Waiting  Response
P0       1        5        6       5           0        0
P1       2        4        10      8           4        4
P3       4        2        12      8           6        6
P2       3        6        18      15          9        9
P4       5        7        25      20          13       13

Timeline (the CPU is idle from 0 to 1): P0 runs 1-6, P1 runs 6-10, P3 runs 10-12, P2 runs 12-18, P4 runs 18-25.

In the Highest Response Ratio Next (HRRN) algorithm, P0 enters the ready queue when no other process is competing for processor time, so P0 is given the processor and starts executing; up to this point the algorithm behaves like FCFS. By the time P0 completes its execution, processes P1, P2, P3 and P4 have arrived in the ready queue. HRRN now determines each process's priority from its service time and its waiting time, and the process with the highest priority is given the processor. When P0 relinquishes the processor the clock reads 6, and the priorities, computed as (Waiting + Service) / Service, are:

At clock = 6:
            P1    P2    P3    P4
Arrival     2     3     4     5
Waiting     4     3     2     1
Service     4     6     2     7
Priority    2.0   1.5   2.0   1.1

P1 and P3 share the highest priority, so the algorithm falls back to FCFS to break the tie: P1 arrived first, so P1 is given the processor. When P1 relinquishes the processor the clock reads 10, and the calculation is repeated:

P2 P3 P4 Arrival time 3 4 5 Waiting 7 6 5 Service 6 2 7 Priority 2.2 4 1.7 Based upon the priority calculated at this time, P3 is allocated the process. When P3 relinquishes the processor, the value of clock is 12. So now new calculations will be P2 P3 Arrival time 3 5 Waiting 9 7 Service 6 7 Priority 2.5 2 Hence, this time P2 has been allocated the processor. When P2 relinquishes the processor, the P4 is finally allocated the processor. Advantages: HRRN overcomes the limitation of SPF by giving favorable treatment to the longer processes. Like in our test case, the process P1 came in the queue before P3. However, as SPF gives preference to shorter process, hence P3 was allocated the process ahead of P1. However, in HRRN, P1 had to wait just for 6 unit time to get access to the processor. HRRN also prevents indefinite postponement. Average Turn around : 11.2 Average Waiting : 6.4 Average Response : 6.4 7.5. Shortest Remaining Dataset for Analysis Process Name Arrival Service Finish Turnaround Waiting Response 0 1 5 6 5 0 0 3 4 2 8 4 4 2 1 2 4 12 10 8 6 2 3 6 18 15 12 9 4 5 7 25 20 18 13 0-1 1-2 2-3 3-4 4-5 P0 P0 P0 P0 5-6 6-7 7-8 8-9 9-10 P0 P3 P3 P1 P1 10-11 11-12 12-13 13-14 14-15 P1 P1 P2 P2 P2 15-16 16-17 17-18 18-19 19-20 P2 P2 P2 P4 P4 20-21 21-22 22-23 23-24 24-25 P4 P4 P4 P4 P4

In Shortest Remaining, when P0 enters the ready queue it is the only process contending for processor time, so P0 is allocated the processor. At clock 2, process P1 enters the ready queue with service time 4; at this moment the remaining service time of P0 is also 4, so the FCFS rule applies, and since P0 arrived in the ready queue before P1, P0 retains the processor. At clock 3, process P2 enters the ready queue with service time 6; by this time the remaining service time of P0 is 3, the minimum of the three, so P0 retains the processor. At clock 4, process P3 enters the ready queue with service time 2; at this moment the remaining service time of P0 is also 2, so FCFS again lets P0 retain the processor. At clock 5, P4 enters the ready queue with service time 7. P0 continues to hold the processor, executes to completion, and is added to the finish queue. After P0, the processor is allotted to the process with the shortest remaining service time, P3, which runs to completion; this is followed by P1, P2 and P4 respectively, each added to the finish queue upon completion.

Advantages: It offers minimal waiting time for the processes. In this test case process P3 waited 2 time units before getting the processor, equal to its wait under SPF, because no preemption actually occurs for this dataset; in general, however, the preemptive SRT can achieve even lower waiting times than SPF.

Average Turnaround: 10.8    Average Waiting: 6    Average Response: 6
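The tick-by-tick preemption decisions above can be sketched like this (our own code, not the project's). On ties the lower index, i.e. the earlier arrival, wins, which reproduces the FCFS tie rule used in the trace.

```java
public class SrtSim {
    // Inputs sorted by arrival time; returns per-process finish times.
    public static int[] finishTimes(int[] arrival, int[] service) {
        int n = arrival.length;
        int[] left = service.clone();
        int[] finish = new int[n];
        int completed = 0, clock = 0;
        while (completed < n) {
            int pick = -1;
            for (int i = 0; i < n; i++) {         // re-evaluated every tick => preemptive
                if (left[i] == 0 || arrival[i] > clock) continue;
                if (pick < 0 || left[i] < left[pick]) pick = i;
            }
            if (pick < 0) { clock++; continue; }  // CPU idle until an arrival
            left[pick]--;
            clock++;
            if (left[pick] == 0) { finish[pick] = clock; completed++; }
        }
        return finish;
    }
}
```

On this dataset the result is identical to SPF (finishes 6, 12, 18, 8, 25 for P0 through P4), since no arrival ever has a burst strictly shorter than the running process's remaining time.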

8. Graphical Representation

The graphs below compare the per-process turnaround time, waiting time and response time across the five algorithms.

[Figure: Turnaround comparison for P0-P4 under FCFS, RR, SPF, HRRN and SRT.]
[Figure: Waiting-time comparison for P0-P4 under FCFS, RR, SPF, HRRN and SRT.]
[Figure: Response-time comparison for P0-P4 under FCFS, RR, SPF, HRRN and SRT.]

9. Conclusion

From the analysis of the algorithms we conclude that RR has the best average response time and, being a preemptive algorithm, exhibits fairness. However, the performance of RR depends heavily on the size of the quantum. At one extreme, if the quantum is very large, RR behaves the same as FCFS; if the quantum is fairly small, RR remains fair but considerable overhead is added to the turnaround time by frequent context switches. This is reflected in our readings: RR's average turnaround time is the highest of all the algorithms. We observed that if the majority of processes have bursts shorter than the quantum, RR gives better response time.

SPF has the smallest average turnaround and average waiting time of the algorithms compared. SPF is provably optimal in that it gives the minimum average waiting time for a set of processes: moving a short process ahead of a long one decreases the short process's waiting time more than it increases the long one's, so the average waiting time falls. But the algorithm is biased towards short processes and unfavorable to longer ones, which may lead to indefinite postponement of long processes.

HRRN has approximately the same average turnaround, waiting and response times as SPF. It overcomes SPF's limitation by giving favorable treatment to processes that have been waiting a long time, thereby preventing indefinite postponement.

SRT exhibits approximately the same average response, waiting and turnaround times, and may seem an effective algorithm for interactive processes if the tasks performed before issuing I/O are short in duration. However, SRT determines priority based on the run time to completion, not the run time to the next I/O; some interactive processes, such as a shell, execute for the lifetime of the session, which would place the shell at the lowest priority level.

10. Further Work

We have successfully simulated the scheduling algorithms by which the scheduler decides which process to allocate to the processor, and we have gathered data for analysis. We still have to take into consideration the overhead that real-world operating systems incur from context switches when using the preemptive algorithms. This overhead will be incorporated into the simulation before we present it to the class.