Scheduling of processes decides when each piece of work gets done so that it finishes on time. CPU scheduling allows one process to use the CPU while another process is delayed (for example, waiting for I/O), so that the CPU is kept as busy as possible. The purpose of CPU scheduling is to make the system more efficient, faster, and fairer.
CPU scheduling is a key part of how an operating system works. It decides which task (or process) the CPU should work on at any given time. This is important because a CPU can only handle one task at a time, but there are usually many tasks that need to be processed. In this article, we are going to discuss CPU scheduling in detail.
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to execute. The selection is carried out by the short-term (CPU) scheduler, which picks one of the processes in memory that are ready to run and allocates the CPU to it.
In computing, a process is an instance of a computer program that is being executed by one or more threads. It contains the program code and its current activity. Depending on the operating system (OS), a process may be made up of multiple threads of execution that execute instructions concurrently.
The process memory is divided into four sections for efficient operation: the text section (program code), the data section, the heap, and the stack.
To learn more, you can refer to our detailed article on States of a Process in Operating Systems.
Process scheduling is the activity of the process manager that removes the running process from the CPU and selects another process to run, based on a particular strategy.
Process scheduling is an integral part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
There are three types of process schedulers: the long-term (job) scheduler, the short-term (CPU) scheduler, and the medium-term scheduler.
CPU scheduling is the process of deciding which process will use the CPU while another process is suspended. The main function of CPU scheduling is to ensure that whenever the CPU becomes idle, the OS selects one of the processes waiting in the ready queue to run.
In multiprogramming, if the long-term scheduler selects mostly I/O-bound processes, the CPU remains idle most of the time. The job of an effective scheduler is to improve resource utilization.
If most processes switch from the running state to the waiting state, the CPU can sit idle and system performance suffers. To minimize this, the OS needs to schedule tasks so that the CPU is fully utilized and the possibility of deadlock is reduced.
Different CPU scheduling algorithms have different structures, and the choice of a particular algorithm depends on a variety of factors. Several criteria have been proposed for comparing CPU scheduling algorithms.
The criteria include CPU utilization, throughput, turnaround time, waiting time, and response time.
There are mainly two types of scheduling methods: preemptive scheduling, where a running process can be interrupted and moved back to the ready queue, and non-preemptive scheduling, where a process keeps the CPU until it terminates or moves to the waiting state.
Different types of CPU Scheduling Algorithms
Let us now learn about these CPU scheduling algorithms in operating systems one by one:
FCFS is considered the simplest of all operating system scheduling algorithms. The first come, first served scheduling algorithm states that the process that requests the CPU first is allocated the CPU first, and it is implemented using a FIFO queue.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on First come, First serve Scheduling.
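As a rough illustration, here is a minimal Python sketch of non-preemptive FCFS; the process names, arrival times, and burst times are made-up example data, and the fcfs helper is written only for this sketch, not a standard library function.

```python
# Minimal FCFS sketch (assumed example data): processes are served
# strictly in order of arrival, using a FIFO ready queue.

def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time)."""
    processes = sorted(processes, key=lambda p: p[1])  # FIFO by arrival time
    clock = 0
    for name, arrival, burst in processes:
        clock = max(clock, arrival)          # CPU may sit idle until arrival
        waiting = clock - arrival            # time spent in the ready queue
        clock += burst                       # run to completion (non-preemptive)
        turnaround = clock - arrival
        print(f"{name}: waiting={waiting}, turnaround={turnaround}")

fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)])
```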
Shortest Job First (SJF) is a scheduling algorithm that selects the waiting process with the smallest execution time to run next. This scheduling method may or may not be preemptive. It significantly reduces the average waiting time of processes waiting to be executed.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Shortest Job First.
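The following is a small sketch of the non-preemptive variant of SJF, assuming made-up arrival and burst times; among the jobs that have already arrived, it always picks the one with the smallest burst.

```python
# Non-preemptive SJF sketch (assumed example data): among the jobs that
# have already arrived, always run the one with the smallest burst time.

def sjf(processes):
    """processes: list of (name, arrival_time, burst_time)."""
    remaining = list(processes)
    clock = 0
    while remaining:
        ready = [p for p in remaining if p[1] <= clock]
        if not ready:                        # nothing has arrived yet: jump ahead
            clock = min(p[1] for p in remaining)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest job
        remaining.remove((name, arrival, burst))
        waiting = clock - arrival
        clock += burst
        print(f"{name}: waiting={waiting}, turnaround={clock - arrival}")

sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1)])
```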
Longest Job First (LJF) scheduling is the opposite of Shortest Job First (SJF): as the name suggests, the process with the largest burst time is processed first. Longest Job First is non-preemptive in nature (it can be obtained from the SJF sketch above by selecting the maximum burst time instead of the minimum).
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on the Longest job first scheduling .
Preemptive priority CPU scheduling is a preemptive algorithm that works based on the priority of a process. In this algorithm, the scheduler assigns each process a priority, and the most important (highest priority) process is executed first. In the case of a conflict, that is, more than one process with the same priority, the tie is broken using the FCFS (First Come First Serve) order.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Priority Preemptive Scheduling algorithm .
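Below is a simplified unit-time simulation of preemptive priority scheduling; the processes, their priorities, and the convention that a smaller number means higher priority are all assumptions made for this sketch.

```python
# Preemptive priority sketch (assumed example data): at every time unit,
# run the arrived process with the highest priority (smaller number =
# higher priority here); equal priorities fall back to arrival order (FCFS).

def preemptive_priority(processes):
    """processes: list of (name, arrival, burst, priority)."""
    remaining = {name: burst for name, _, burst, _ in processes}
    info = {name: (arrival, prio) for name, arrival, _, prio in processes}
    clock = 0
    while remaining:
        ready = [n for n in remaining if info[n][0] <= clock]
        if not ready:                        # no process has arrived yet
            clock += 1
            continue
        # highest priority first; ties broken by earlier arrival (FCFS)
        current = min(ready, key=lambda n: (info[n][1], info[n][0]))
        remaining[current] -= 1              # run one time unit, then re-evaluate
        clock += 1
        if remaining[current] == 0:
            del remaining[current]
            print(f"{current} finished at t={clock}")

preemptive_priority([("P1", 0, 4, 2), ("P2", 1, 3, 1), ("P3", 2, 2, 3)])
```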
Round Robin is a CPU scheduling algorithm where each process is cyclically assigned a fixed time slot (time quantum). It is the preemptive version of the First Come First Serve CPU scheduling algorithm and is generally used in time-sharing systems.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on the Round robin Scheduling algorithm .
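Here is a minimal round-robin sketch with an assumed time quantum of 2 and all processes arriving at time 0; real schedulers also handle later arrivals, I/O, and context-switch overhead, which this sketch ignores.

```python
# Round Robin sketch (assumed example data): ready processes take turns
# on the CPU for at most `quantum` time units each.

from collections import deque

def round_robin(processes, quantum=2):
    """processes: list of (name, burst_time); all assumed to arrive at t=0."""
    queue = deque(processes)
    clock = 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)        # run for at most one time slice
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))  # back to the end of the queue
        else:
            print(f"{name} finished at t={clock}")

round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2)
```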
Shortest Remaining Time First (SRTF) is the preemptive version of the Shortest Job First algorithm discussed earlier, in which the processor is allocated to the job closest to completion. In SRTF, the process with the smallest amount of time remaining until completion is selected to execute.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on the shortest remaining time first .
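A unit-time simulation conveys the idea: at each tick the scheduler re-checks which arrived process has the least remaining time, preempting the running one if necessary. The process data below is illustrative only.

```python
# SRTF sketch (assumed example data): at every time unit, run the arrived
# process with the least remaining time, preempting the current one if needed.

def srtf(processes):
    """processes: list of (name, arrival_time, burst_time)."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: at for name, at, _ in processes}
    clock = 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:                        # nothing has arrived yet
            clock += 1
            continue
        current = min(ready, key=lambda n: remaining[n])  # least time left wins
        remaining[current] -= 1
        clock += 1
        if remaining[current] == 0:
            del remaining[current]
            print(f"{current} finished at t={clock}, turnaround={clock - arrival[current]}")

srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)])
```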
Longest Remaining Time First (LRTF) is the preemptive version of the Longest Job First scheduling algorithm. This algorithm schedules first the process that has the longest processing time remaining until completion (it mirrors the SRTF sketch above with the selection reversed).
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on the longest remaining time first .
Highest Response Ratio Next (HRRN) is a non-preemptive CPU scheduling algorithm and is considered one of the most optimal scheduling algorithms. As the name states, we compute the response ratio of all available processes and select the one with the highest response ratio. Once selected, a process runs to completion.
Response Ratio = (W + S)/S
Here, W is the waiting time of the process so far and S is the Burst time of the process.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Highest Response Ratio Next .
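Using the formula above, a small non-preemptive sketch might look like this; the process set is assumed example data, and each time the CPU becomes free the process with the highest (W + S) / S is chosen and run to completion.

```python
# HRRN sketch (assumed example data): whenever the CPU is free, compute
# (W + S) / S for every arrived process and run the one with the highest
# ratio to completion (non-preemptive).

def hrrn(processes):
    """processes: list of (name, arrival_time, burst_time)."""
    remaining = list(processes)
    clock = 0
    while remaining:
        ready = [p for p in remaining if p[1] <= clock]
        if not ready:                        # nothing has arrived yet: jump ahead
            clock = min(p[1] for p in remaining)
            continue
        # response ratio = (waiting time so far + burst time) / burst time
        best = max(ready, key=lambda p: (clock - p[1] + p[2]) / p[2])
        remaining.remove(best)
        name, arrival, burst = best
        waiting = clock - arrival
        clock += burst
        print(f"{name}: waiting={waiting}, turnaround={clock - arrival}")

hrrn([("P1", 0, 3), ("P2", 2, 6), ("P3", 4, 2)])
```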
Processes in the ready queue can be divided into different classes, where each class has its own scheduling needs. For example, a common division is into foreground (interactive) processes and background (batch) processes. These two classes have different scheduling needs. For this kind of situation, multilevel queue scheduling is used.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Multilevel Queue Scheduling .
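The sketch below assumes a simple two-queue design: a foreground queue served round-robin with absolute priority over a background queue served FCFS. The queue contents and time quantum are illustrative only.

```python
# Multilevel queue sketch (assumed design): two fixed-priority queues, a
# foreground queue served round-robin and a background queue served FCFS.
# The background queue only runs when the foreground queue is empty.

from collections import deque

foreground = deque([("F1", 4), ("F2", 3)])   # interactive jobs (name, remaining)
background = deque([("B1", 6)])              # batch jobs (name, remaining)
quantum, clock = 2, 0

while foreground or background:
    if foreground:                           # foreground has absolute priority
        name, remaining = foreground.popleft()
        run = min(quantum, remaining)        # round robin within the queue
        clock += run
        if remaining > run:
            foreground.append((name, remaining - run))
        else:
            print(f"foreground {name} finished at t={clock}")
    else:
        name, remaining = background.popleft()
        clock += remaining                   # FCFS: run the batch job to completion
        print(f"background {name} finished at t={clock}")
```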
Multilevel Feedback Queue (MLFQ) scheduling is like multilevel queue scheduling, but here processes can move between the queues. This makes it much more efficient than plain multilevel queue scheduling.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Multilevel Feedback Queue Scheduling .
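A minimal sketch of the feedback idea, under assumed parameters (two levels with quanta of 2 and 4): a process that uses up its entire time slice at a level is demoted to the next lower level, where it receives a longer slice.

```python
# MLFQ sketch (assumed design): two levels with different time quanta; a job
# that uses up its slice in the top queue is demoted to the lower queue.

from collections import deque

queues = [deque([("P1", 5), ("P2", 2)]), deque()]   # level 0 (high), level 1 (low)
quanta = [2, 4]                                     # bigger slices at lower levels
clock = 0

while any(queues):
    level = 0 if queues[0] else 1                   # always serve the highest level
    name, remaining = queues[level].popleft()
    run = min(quanta[level], remaining)
    clock += run
    if remaining > run:
        demoted = min(level + 1, len(queues) - 1)   # used its full slice: move down
        queues[demoted].append((name, remaining - run))
    else:
        print(f"{name} finished at t={clock} (level {level})")
```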
Here is a brief comparison between different CPU scheduling algorithms: