IBPS SO IT Officer Notes for Process Scheduling

Process Scheduling

Process scheduling is the activity of removing the running process from the CPU and selecting another process on the basis of a particular strategy.

Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

Process Scheduling Queues

The OS maintains all PCBs (Process Control Blocks) in process scheduling queues. It keeps a separate queue for each of the process states, and the PCBs of all processes in the same execution state are placed in the same queue. When the state of a process changes, its PCB is unlinked from its current queue and moved to its new state's queue.

The Operating System maintains the following process scheduling queues −

  • Job queue − This queue keeps all the processes in the system.
  • Ready queue − This queue keeps all processes residing in main memory that are ready and waiting to execute. A new process is always put in this queue.
  • Device queues − The processes that are blocked due to the unavailability of an I/O device constitute this queue.

The OS can use different policies to manage each queue, such as FIFO, Round Robin or Priority. The OS scheduler determines how to move processes between the ready and run queues.
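
As a rough illustration (in Python, with purely hypothetical queue and PCB names), moving a PCB between state queues on a state change could look like this:

```python
from collections import deque

# One FIFO queue per process state. The state names and PCB fields here are
# purely illustrative, not taken from any real operating system.
queues = {"ready": deque(), "waiting": deque(), "terminated": deque()}

class PCB:
    def __init__(self, pid):
        self.pid = pid
        self.state = "ready"

def change_state(pcb, new_state):
    """Unlink the PCB from its current state queue and link it into the new one."""
    queues[pcb.state].remove(pcb)     # unlink from the current queue
    pcb.state = new_state
    queues[new_state].append(pcb)     # move to the new state's queue

# Example: a process issues an I/O request and moves from ready to waiting.
p1 = PCB(pid=1)
queues["ready"].append(p1)
change_state(p1, "waiting")
print(p1.pid, p1.state)               # 1 waiting
```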

Schedulers

Schedulers are special system software that handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of three types −

  • Long-Term Scheduler
  • Short-Term Scheduler
  • Medium-Term Scheduler

Long Term Scheduler

It is also called a job scheduler. A long-term scheduler selects processes from the job queue and loads them into memory for execution. The process is loaded into memory for CPU scheduling.

The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound ones. It also controls the degree of multiprogramming. If the degree of multiprogramming is stable, then the average rate of process creation must be equal to the average departure rate of processes leaving the system.

On some systems, the long-term scheduler may not be available or may be minimal. Time-sharing operating systems have no long-term scheduler. When a process changes state from new to ready, the long-term scheduler is used.

Short Term Scheduler

It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of criteria. It carries out the change of a process from the ready state to the running state. The CPU scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it. Short-term schedulers, also known as dispatchers, make the decision of which process to execute next. Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler

Medium-term scheduling is a part of swapping. It removes processes from memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out processes.

A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This procedure is called swapping, and the process is said to be swapped out or rolled out.

Comparison among Schedulers

S.N. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler. | It is a CPU scheduler. | It is a process-swapping scheduler.
2 | Its speed is less than that of the short-term scheduler. | Its speed is the fastest of the three. | Its speed lies between those of the short-term and long-term schedulers.
3 | It controls the degree of multiprogramming. | It provides less control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
4 | It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems.
5 | It selects processes from the pool and loads them into memory for execution. | It selects from among the processes that are ready to execute. | It can re-introduce a process into memory, and execution can be continued.

Context Switch

A context switch is the mechanism of storing and restoring the state (context) of a CPU in the Process Control Block so that a process's execution can be resumed from the same point at a later time. It enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.

When the scheduler switches the CPU from executing one process to another, the state of the currently running process is stored into its process control block. After this, the state of the process to run next is loaded from its own PCB and used to set the program counter, registers, etc. At that point, the second process can start executing.

Context switches are computationally intensive, since register and memory state must be saved and restored. To reduce the amount of context-switching time, some hardware systems employ two or more sets of processor registers. When the process is switched, the following information is stored for later use (see the sketch after this list):

  • Program Counter
  • Scheduling information
  • Base and limit register value
  • Currently used register
  • Changed State
  • I/O State information
  • Accounting information
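
The sketch below (Python, with illustrative field names) models a PCB that holds the information listed above and the save/restore steps of a context switch:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    # Fields mirror the list above; the names and types are illustrative.
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    base_register: int = 0
    limit_register: int = 0
    state: str = "ready"
    scheduling_info: dict = field(default_factory=dict)
    io_state: dict = field(default_factory=dict)
    accounting_info: dict = field(default_factory=dict)

def context_switch(cpu, old_pcb, new_pcb):
    # 1. Save the running process's CPU context into its PCB.
    old_pcb.program_counter = cpu["pc"]
    old_pcb.registers = dict(cpu["regs"])
    old_pcb.state = "ready"
    # 2. Load the next process's context from its PCB into the CPU.
    cpu["pc"] = new_pcb.program_counter
    cpu["regs"] = dict(new_pcb.registers)
    new_pcb.state = "running"

cpu = {"pc": 1234, "regs": {"r0": 7}}
a, b = PCB(), PCB(program_counter=2000, registers={"r0": 42})
context_switch(cpu, a, b)
print(cpu["pc"], cpu["regs"])          # 2000 {'r0': 42}
```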

First Come First Serve (FCFS)

  • Jobs are executed on a first-come, first-served basis.
  • It is a non-preemptive scheduling algorithm.
  • It is easy to understand and implement.
  • Its implementation is based on a FIFO queue.
  • Its average waiting time is high.

Wait time of each process is as follows −

Process Wait Time : Service Time – Arrival Time
P0 0 – 0 = 0
P1 5 – 1 = 4
P2 8 – 2 = 6
P3 16 – 3 = 13

 

Average Wait Time: (0+4+6+13) / 4 = 5.75
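
The process table behind these numbers was in the original figure, which is not reproduced here. The sketch below assumes arrival times 0–3 and burst (service) times 5, 3, 8 and 6 for P0–P3, which happen to reproduce the 5.75 average:

```python
def fcfs_wait_times(arrival, burst):
    """Waiting time of each process under FCFS (non-preemptive, arrival order)."""
    waits, clock = [], 0
    for a, b in zip(arrival, burst):
        clock = max(clock, a)          # CPU may sit idle until the process arrives
        waits.append(clock - a)        # wait = service (start) time - arrival time
        clock += b                     # the process then runs to completion
    return waits

arrival = [0, 1, 2, 3]                 # assumed arrival times of P0..P3
burst   = [5, 3, 8, 6]                 # assumed burst times of P0..P3
w = fcfs_wait_times(arrival, burst)
print(w, sum(w) / len(w))              # [0, 4, 6, 13] 5.75
```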

Shortest Job First (SJF)

  • This is a non-preemptive scheduling algorithm; its preemptive version, Shortest Remaining Time, is covered below.
  • It is the best approach to minimize waiting time.
  • It is easy to implement in batch systems where the required CPU time is known in advance.
  • It is impossible to implement in interactive systems where the required CPU time is not known.
  • The processor must know in advance how much time a process will take.

Process Wait Time : Service Time – Arrival Time
P0 3 – 0 = 3
P1 0 – 0 = 0
P2 16 – 2 = 14
P3 8 – 3 = 5

Average Wait Time: (3+0+14+5) / 4 = 5.50
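
A minimal non-preemptive SJF sketch follows. The process data is again assumed (the original table is in the missing figure), and the result can differ from the table above because this version only runs jobs that have already arrived:

```python
def sjf_wait_times(arrival, burst):
    """Non-preemptive SJF: at each step, run the shortest job that has arrived."""
    n = len(arrival)
    done, waits, clock = [False] * n, [0] * n, 0
    for _ in range(n):
        ready = [i for i in range(n) if not done[i] and arrival[i] <= clock]
        if not ready:  # CPU idle: jump to the earliest remaining arrival
            clock = min(arrival[i] for i in range(n) if not done[i])
            ready = [i for i in range(n) if not done[i] and arrival[i] <= clock]
        i = min(ready, key=lambda j: burst[j])   # pick the shortest burst
        waits[i] = clock - arrival[i]            # wait = start time - arrival time
        clock += burst[i]
        done[i] = True
    return waits

# Assumed arrival times 0-3 and burst times 5, 3, 8, 6 for P0-P3.
w = sjf_wait_times([0, 1, 2, 3], [5, 3, 8, 6])
print(w, sum(w) / len(w))   # [0, 4, 12, 5] 5.25
```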

Priority Based Scheduling

  • Priority scheduling is a non-preemptive algorithm.
  • Each process is assigned a priority. The process with the highest priority is executed first, and so on.
  • Processes with the same priority are executed on a first-come, first-served basis.
  • Priority can be decided based on memory requirements, time requirements or any other resource requirement.

Wait time of each process is as follows −

Process Wait Time : Service Time – Arrival Time
P0 9 – 0 = 9
P1 6 – 1 = 5
P2 14 – 2 = 12
P3 0 – 0 = 0

Average Wait Time: (9+5+12+0) / 4 = 6.5
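
A similar sketch for non-preemptive priority scheduling, assuming that a lower number means a higher priority and that all jobs are ready at time 0 (the priorities and burst times below are illustrative, not taken from the missing figure):

```python
def priority_wait_times(burst, priority):
    """Non-preemptive priority scheduling, assuming all jobs are ready at time 0.
    Lower priority number = higher priority; ties resolved in FCFS (index) order."""
    order = sorted(range(len(burst)), key=lambda i: (priority[i], i))
    waits, clock = [0] * len(burst), 0
    for i in order:
        waits[i] = clock    # the job has been waiting since time 0
        clock += burst[i]
    return waits

# Illustrative data: P3 has the highest priority, then P1, P0, P2.
burst, prio = [5, 3, 8, 6], [3, 2, 4, 1]
w = priority_wait_times(burst, prio)
print(w, sum(w) / len(w))   # [9, 6, 14, 0] 7.25
```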

Shortest Remaining Time

  • Shortest Remaining Time (SRT) is the preemptive version of the SJF algorithm.
  • The processor is allocated to the job closest to completion, but it can be preempted by a newly ready job with a shorter time to completion.
  • It is impossible to implement in interactive systems where the required CPU time is not known.
  • It is often used in batch environments where short jobs need to be given preference (a simulation sketch follows below).
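
A minimal unit-time simulation of SRT (the process data is assumed, as in the earlier sketches):

```python
def srt_wait_times(arrival, burst):
    """Shortest Remaining Time: preemptive SJF, simulated one time unit at a time."""
    n = len(arrival)
    remaining = list(burst)
    waits, clock = [0] * n, 0
    while any(r > 0 for r in remaining):
        ready = [i for i in range(n) if arrival[i] <= clock and remaining[i] > 0]
        if not ready:                  # CPU idle until the next arrival
            clock += 1
            continue
        i = min(ready, key=lambda j: remaining[j])   # job closest to completion
        for j in ready:                # every other ready job keeps waiting
            if j != i:
                waits[j] += 1
        remaining[i] -= 1
        clock += 1
    return waits

print(srt_wait_times([0, 1, 2, 3], [5, 3, 8, 6]))   # [3, 0, 12, 5] for the assumed data
```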

Round Robin Scheduling

  • Round Robin is a preemptive scheduling algorithm.
  • Each process is given a fixed time to execute, called a quantum.
  • Once a process has executed for the given time period, it is preempted and another process executes for its time period.

Wait time of each process is as follows −

Process Wait Time : Service Time – Arrival Time
P0 (0 – 0) + (12 – 3) = 9
P1 (3 – 1) = 2
P2 (6 – 2) + (14 – 9) + (20 – 17) = 12
P3 (9 – 3) + (17 – 12) = 11

Average Wait Time: (9+2+12+11) / 4 = 8.5
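
A Round Robin sketch with the arrival times, burst times and quantum assumed to be 0–3, (5, 3, 8, 6) and 3 respectively, which reproduces the waiting times above:

```python
from collections import deque

def round_robin_wait_times(arrival, burst, quantum):
    """Round Robin: each ready process runs for at most one quantum, then goes
    to the back of the ready queue. Waiting time = completion - arrival - burst."""
    n = len(arrival)
    remaining, finish = list(burst), [0] * n
    order = sorted(range(n), key=lambda i: arrival[i])
    ready, clock, nxt = deque(), 0, 0
    while ready or nxt < n:
        while nxt < n and arrival[order[nxt]] <= clock:   # admit arrived processes
            ready.append(order[nxt]); nxt += 1
        if not ready:                                     # CPU idle until next arrival
            clock = arrival[order[nxt]]
            continue
        i = ready.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        while nxt < n and arrival[order[nxt]] <= clock:   # arrivals during the slice
            ready.append(order[nxt]); nxt += 1
        if remaining[i] > 0:
            ready.append(i)                               # preempted, back of the queue
        else:
            finish[i] = clock
    return [finish[i] - arrival[i] - burst[i] for i in range(n)]

w = round_robin_wait_times([0, 1, 2, 3], [5, 3, 8, 6], 3)
print(w, sum(w) / len(w))   # [9, 2, 12, 11] 8.5
```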

Multiple-Level Queues Scheduling

Multiple-level queues are not an independent scheduling algorithm. They make use of other existing algorithms to group and schedule jobs with common characteristics.

  • Multiple queues are maintained for processes with common characteristics.
  • Each queue can have its own scheduling algorithms.
  • Priorities are assigned to each queue.
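
As a rough illustration, a two-level setup might keep an interactive queue ahead of a batch queue, each with its own policy (the queue names and policies below are purely illustrative):

```python
from collections import deque

# Illustrative two-level setup: an interactive queue (served Round-Robin style)
# takes priority over a batch queue (served FCFS).
interactive = deque(["P1", "P3"])
batch = deque(["P2", "P4"])

def pick_next():
    """Always serve the higher-priority (interactive) queue if it is non-empty."""
    if interactive:
        return "interactive", interactive.popleft()
    if batch:
        return "batch", batch.popleft()
    return None, None

print(pick_next())   # ('interactive', 'P1')
```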

Thread

A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism.

Difference between Process and Thread

S.N. | Process | Thread
1 | A process is heavyweight and resource-intensive. | A thread is lightweight, taking fewer resources than a process.
2 | Process switching needs interaction with the operating system. | Thread switching does not need to interact with the operating system.
3 | In multiple processing environments, each process executes the same code but has its own memory and file resources. | All threads can share the same set of open files and child processes.
4 | If one process is blocked, no other process can execute until the first process is unblocked. | While one thread is blocked and waiting, a second thread in the same task can run.
5 | Multiple processes without using threads use more resources. | Multithreaded processes use fewer resources.
6 | In multiple processes, each process operates independently of the others. | One thread can read, write or change another thread's data.

Advantages of Thread

  • Threads minimize the context-switching time.
  • Use of threads provides concurrency within a process (see the sketch below).
  • Communication between threads is efficient.
  • It is more economical to create and context-switch threads than processes.
  • Threads allow utilization of multiprocessor architectures on a greater scale and with more efficiency.
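
As an illustration of concurrency within a single process, here is a minimal sketch using Python's standard threading module (the worker function and shared counter are illustrative):

```python
import threading

counter_lock = threading.Lock()
counter = 0

def worker(n):
    """Each thread shares the process's global data; a lock guards the update."""
    global counter
    for _ in range(n):
        with counter_lock:
            counter += 1

# Two threads of the same process run concurrently and share 'counter'.
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 2000
```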

Types of Thread

  • User Level Threads − Threads managed by the user (application), without kernel involvement.
  • Kernel Level Threads − Threads managed by the operating system, acting on the kernel.

User Level Threads

In this case, the kernel is not aware of the existence of threads; thread management is done by the application. The thread library contains code for creating and destroying threads and for passing messages and data between threads. The application starts with a single thread.

Advantages

  • Thread switching does not require kernel-mode privileges.
  • User level thread can run on any operating system.
  • User level threads are fast to create and manage.

Disadvantages

  • In a typical operating system, most system calls are blocking, so when one user-level thread blocks, the entire process blocks.
  • A multithreaded application cannot take advantage of multiprocessing.

Kernel Level Threads

Thread management is done by the Kernel. Kernel threads are supported directly by the operating system. Any application can be programmed to be multi-threaded. The Kernel maintains context information for the process. Scheduling by the Kernel is done on a thread basis. The Kernel performs thread creation, scheduling and management in Kernel space. Kernel threads are slower to create and manage than user threads.

Advantages

  • The Kernel can simultaneously schedule multiple threads from the same process on multiple processors.
  • If one thread in a process is blocked, the Kernel can schedule another thread of the same process.

Disadvantages

  • Kernel threads are generally slower to create and manage than the user threads.
  • Transfer of control from one thread to another within the same process requires a mode switch to the Kernel.

This topic is also asked in other government exams like IBPS SO, RRB and SSC. This note has been prepared by Supriya Kundu, one of the best teachers in this field. If you have any questions, please ask in the comments below.
