IBPS SO IT Officer Notes for Multi threading Models
There are three multi-threading models:
- Many-to-many
- Many-to-one
- One-to-one
Many to Many Model
The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads.
Many to One Model
In the many-to-one model, many user-level threads are mapped to a single kernel thread. Thread management is efficient because it is done in user space, but the entire process blocks if any thread makes a blocking system call, and threads cannot run in parallel on multiprocessors.
One to One Model
There is a one-to-one relationship between each user-level thread and a kernel-level thread. This model provides more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a blocking system call, and it supports multiple threads executing in parallel on multiprocessors.
The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread. OS/2, Windows NT and Windows 2000 use the one-to-one model.
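In CPython, each `threading.Thread` is backed by its own kernel thread, so the standard library gives a convenient (if informal) illustration of the one-to-one model; the worker function and values below are made up for the example.

```python
import threading

def worker(n, results):
    # Stands in for real work, possibly a blocking system call;
    # with one-to-one mapping, one blocked thread does not stop the others.
    results[n] = n * n

results = {}
threads = [threading.Thread(target=worker, args=(i, results)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results.items()))  # [(0, 0), (1, 1), (2, 4), (3, 9)]
```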
Difference between User-Level & Kernel-Level Thread
|S.N.|User-Level Threads|Kernel-Level Threads|
|---|---|---|
|1|User-level threads are faster to create and manage.|Kernel-level threads are slower to create and manage.|
|2|Implementation is by a thread library at the user level.|The operating system supports creation of kernel threads.|
|3|User-level threads are generic and can run on any operating system.|Kernel-level threads are operating-system specific.|
|4|Multi-threaded applications cannot take advantage of multiprocessing.|Kernel routines themselves can be multi-threaded.|
Memory Management
Memory management is the function of the operating system that handles primary memory. It keeps track of each and every memory location, checks how much memory is to be allocated to each process, and decides which process will get memory at what time.
Process Address Space
The process address space is the set of logical addresses a process uses; the set of all logical addresses generated by a program is referred to as its logical address space. The set of all physical addresses corresponding to these logical addresses is referred to as the physical address space.
The run-time mapping from virtual to physical addresses is done by the memory management unit (MMU), which is a hardware device.
Static vs Dynamic Loading
In static loading, the absolute program is loaded into memory before execution starts.
In dynamic loading, routines of a library are stored on disk in relocatable form and are loaded into memory only when they are needed by the program.
Swapping
Swapping is a mechanism in which a process can be swapped temporarily out of main memory to secondary storage, making that memory available to other processes. Later, the system swaps the process back from secondary storage into main memory.
Performance is usually affected by the swapping process, but it helps in running multiple large processes in parallel. Swapping is also known as a technique for memory compaction.
Fragmentation
As processes are loaded into and removed from memory, the free memory space is broken into small pieces. After some time, processes cannot be allocated to these memory blocks because of their small size, and the blocks remain unused. This problem is known as fragmentation. It occurs in two forms:
- External fragmentation − the total memory space is enough to satisfy a request, but it is not contiguous, so it cannot be used.
- Internal fragmentation − the memory block assigned to a process is bigger than requested; some portion of it is left unused, as it cannot be used by another process.
External fragmentation can be reduced by compaction.
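Internal fragmentation is easy to quantify: with fixed-size frames, the last frame of a process is usually only partly filled. A quick arithmetic sketch, using illustrative frame and process sizes (not values from the text):

```python
import math

frame_size = 4096                    # bytes per frame (illustrative)
process_size = 9500                  # bytes the process actually needs

# The process gets whole frames, so round up.
frames_needed = math.ceil(process_size / frame_size)

# Memory allocated but never used by the process: internal fragmentation.
internal_frag = frames_needed * frame_size - process_size
print(frames_needed, internal_frag)  # 3 frames, 2788 bytes wasted
```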
Deadlock
A deadlock is a set of blocked processes, each holding a resource and waiting for a resource held by another process. Four conditions must hold at once for a deadlock to be possible, and attacking any one of them avoids it:
Mutual Exclusion
Resources must be allocated to processes in an exclusive manner, not on a shared basis, for a deadlock to be possible.
Hold and Wait
In this condition, a process holds one or more resources while simultaneously waiting for one or more others.
No Preemption
Preemption of process resource allocations can avoid the condition of deadlock, whenever possible.
Circular Wait
Circular wait can be avoided if we number all resources and require that processes request resources only in strictly increasing (or decreasing) order.
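The resource-ordering idea can be sketched with two locks: give every lock a fixed rank and always acquire in increasing rank order, so no cycle of waiters can form. The lock names and ranks below are illustrative.

```python
import threading

lock_a = threading.Lock()   # rank 1: always taken first
lock_b = threading.Lock()   # rank 2: always taken second

def transfer():
    # Every thread acquires lock_a before lock_b (increasing rank),
    # so two threads can never hold one lock each while waiting
    # for the other -- circular wait is impossible.
    with lock_a:
        with lock_b:
            return "done"

t = threading.Thread(target=transfer)
t.start()
t.join()
print(transfer())  # prints "done"; safe from any number of threads
```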
The following three strategies are used to remove a deadlock after it occurs:
- Preemption − take a resource from one process and give it to another to resolve the deadlock.
- Rollback − make a record (checkpoint) of the state of each process; when a deadlock occurs, roll everything back to the last checkpoint and restart, allocating resources differently so that the deadlock does not recur.
- Kill one or more processes.
What is a Live-lock?
A live-lock is a variant of deadlock in which two or more processes continuously change their state without doing any useful work. It is similar to deadlock in that no progress is made, but differs in that no process is blocked or waiting for anything.
Virtual Memory
Virtual memory allows a computer to address more memory than the amount physically installed on the system. This extra memory is called virtual memory. The paging technique plays an important role in implementing virtual memory.
Paging
Paging is a technique in which the process address space is broken into blocks of the same size called pages. The size of a process is measured in the number of pages.
Main memory is divided into small fixed-size blocks of memory called frames. The size of a frame is kept the same as that of a page so as to have optimum utilization of main memory and to avoid external fragmentation.
The page address is called the logical address and is represented by a page number and an offset:
Logical Address = Page number + page offset
The frame address is called the physical address and is represented by a frame number and an offset:
Physical Address = Frame number + page offset
A data structure called the page map table is used to keep track of the relation between a page of a process and a frame in physical memory.
When the system allocates a frame to a page, it translates the logical address into a physical address and creates an entry in the page table to be used during execution of the program.
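The translation step can be sketched in a few lines: split the logical address into page number and offset, look the page up in the table, and recombine with the frame number. The page size and page-table contents below are made up for illustration.

```python
PAGE_SIZE = 4096                       # illustrative page/frame size in bytes

page_table = {0: 5, 1: 2, 2: 7}        # page number -> frame number (illustrative)

def translate(logical_addr):
    # High bits select the page, low bits are the offset within it.
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]           # a missing key would mean a page fault
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 8196
```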
When a process is to be executed, its corresponding pages are loaded into available memory frames. When the computer runs out of RAM, the operating system moves idle pages of memory to secondary storage to free up RAM for other processes, and brings them back when the program needs them.
This process continues during the whole execution of the program: the OS keeps removing idle pages from main memory, writing them to secondary storage, and bringing them back when required by the program.
Advantages and Disadvantages of Paging
- Paging reduces external fragmentation, but still suffers from internal fragmentation.
- Paging is simple to implement.
- Swapping becomes very easy due to the equal size of pages and frames.
- The page table requires extra memory space.
Segmentation
Segmentation is a memory management technique in which each job is divided into several segments of different sizes. Each segment is a different logical address space of the program.
When a process is executed, its segments are loaded into non-contiguous memory, though every individual segment is loaded into a contiguous block of available memory.
Segmentation works similarly to paging, but segments are of variable length, whereas pages are of fixed size.
The operating system maintains a segment map table for every process, along with a list of free memory blocks with segment numbers, sizes, and their memory locations in main memory. For each segment, the table stores the starting address (base) and the length (limit) of the segment.
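A segment-table lookup can be sketched as follows: each entry stores a (base, limit) pair, an offset beyond the limit is an addressing error, and a valid offset is added to the base. The table contents are illustrative.

```python
# segment number -> (base address, limit), values made up for the example
segment_table = {0: (1000, 500), 1: (4000, 1200)}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        # Hardware would raise an addressing-error trap here.
        raise MemoryError("segmentation fault: offset out of range")
    return base + offset

print(translate(1, 100))  # base 4000 + offset 100 -> 4100
```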
Demand Paging
A demand paging system is similar to a paging system with swapping, where processes reside in secondary memory and pages are loaded only on demand. Its advantages are:
- Large virtual memory.
- More efficient use of memory.
- No limit on the degree of multiprogramming.
Page Replacement Algorithms
Page replacement algorithms are the techniques by which an operating system decides which memory pages to swap out when a page of memory needs to be allocated.
First In First Out (FIFO) algorithm
- The oldest page in main memory is the one selected for replacement.
- Easy to implement: replace pages from the tail and add new pages at the head.
- Suffers from Belady's anomaly.
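A minimal FIFO simulation (with the classic reference string, chosen for illustration) reproduces Belady's anomaly: adding a frame increases the number of page faults.

```python
from collections import deque

def fifo_faults(refs, n_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.discard(queue.popleft())  # evict the oldest page
            frames.add(page)
            queue.append(page)                   # newest page joins the tail
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults -- Belady's anomaly
```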
Optimal Page algorithm
- An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms.
- An optimal page-replacement algorithm exists and has been called OPT or MIN.
- Replace the page that will not be used for the longest period of time; this requires knowing in advance when each page will next be used.
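OPT needs the full future reference string, so it serves as a benchmark rather than a practical policy. A sketch on the same illustrative reference string used above:

```python
def opt_faults(refs, n_frames):
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == n_frames:
            future = refs[i + 1:]
            # Evict the page whose next use is farthest away
            # (or which is never used again).
            victim = max(frames, key=lambda p: future.index(p)
                         if p in future else float("inf"))
            frames.discard(victim)
        frames.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(opt_faults(refs, 3))  # 7 faults -- the minimum possible for this string
```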
Least Recently Used (LRU) algorithm
- The page which has not been used for the longest period of time in main memory is the one selected for replacement.
- Keep a list and replace pages by looking back in time.
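The "list ordered by recency" can be sketched with an ordered map: pages move to the end on every use, so the least recently used page always sits at the front. Reference string and frame count are illustrative.

```python
from collections import OrderedDict

def lru_faults(refs, n_frames):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)          # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)    # evict the least recently used
            frames[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))  # 10 faults on this string
```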
Page Buffering algorithm
- To start process quickly, keep a pool of free frames.
- On page fault, select a page to be replaced.
- Write the new page into a frame from the free pool, update the page table, and restart the process.
- Then write the dirty page out to disk and place the frame holding the replaced page in the free pool.
Least Frequently Used (LFU) algorithm
- The page with the smallest count is one which will be selected for replacement.
- This algorithm suffers from the situation in which a page is used heavily during the initial phase of a process, but then is never again used.
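A sketch of LFU on the same illustrative reference string; note that ties among equal counts are broken arbitrarily here, so the exact fault count can vary with the eviction choice.

```python
def lfu_faults(refs, n_frames):
    counts, frames, faults = {}, set(), 0
    for page in refs:
        counts[page] = counts.get(page, 0) + 1   # lifetime use count per page
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                # Evict the resident page with the smallest use count.
                frames.discard(min(frames, key=lambda p: counts[p]))
            frames.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lfu_faults(refs, 3))
```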
Most Frequently Used (MFU) algorithm
- This algorithm is based on the argument that the page with the smallest count was probably just brought in and has yet to be used.
Devices are of two types:
- Block devices − have mass storage; for example, hard disks, USB cameras, Disk-On-Key, etc.
- Character devices − communicate by sending and receiving single characters; for example, serial ports, parallel ports, sound cards, etc.
Direct Memory Access (DMA)
In DMA, the CPU grants the I/O module the authority to read from or write to memory. The DMA module controls the exchange of data between main memory and the I/O device; the CPU is involved only at the beginning and end of the transfer.
Direct memory access needs special hardware called a DMA controller (DMAC). The controller is programmed with source and destination pointers to read/write the data.
File System
A file is a collection of related information recorded on secondary storage such as magnetic disks, magnetic tapes, and optical disks.
A file structure is a format that the operating system can understand.
- A file has a definite structure according to its type.
- A text file is a sequence of characters organized into lines.
- A source file is a sequence of procedures, functions and structures.
- An object file is a sequence of bytes understandable by the machine.
Operating systems support many different types of files. MS-DOS and UNIX, for example, have the following types of files −
Ordinary files
- These are the files that contain user information.
- They may contain text, databases, or executable programs.
- The user can apply various operations on such files: add, modify, or delete the entire file.
Directory files
- These files contain lists of file names and other information about those files.
Special files
- These files are also known as device files.
- They represent physical devices like disks, terminals, printers, networks, tape drives, etc.
File Access Mechanisms
File access mechanisms refer to the manner in which the records of a file may be accessed. There are several ways:
- Sequential access
- Direct/Random access
- Indexed sequential access
Sequential access
- A sequential access is one in which the records are accessed in sequence.
Direct/Random access
- The records are accessed directly.
- Each record has its own address on the file, with the help of which it can be directly accessed for reading or writing.
- The records need not be in any sequence.
Indexed sequential access
- This mechanism is built on the basis of sequential access.
- An index is created for each file, containing pointers to various blocks.
- The index is searched sequentially, and its pointers are used to access the file directly.
Space Allocation
Files are allocated disk space by the operating system, in one of three ways:
- Contiguous allocation
- Linked allocation
- Indexed allocation
Contiguous Allocation
- Each file occupies a contiguous address space on disk.
- The assigned disk addresses are in linear order.
- Easy to implement.
- External fragmentation is a major problem with this allocation technique.
Linked Allocation
- Each block of a file carries a link to the next; the directory contains a link to the first block of the file.
- No external fragmentation.
- Effectively used for sequential-access files.
- Inefficient for direct-access files.
Indexed Allocation
- Provides a solution to the problems of contiguous and linked allocation.
- An index block is created, holding the pointers to the file's data blocks.
- Each file has its own index block, which stores the addresses of the disk space occupied by the file.
- The directory contains the addresses of the index blocks of the files.
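Indexed allocation can be sketched with two small maps: the directory maps a file name to its index block, and the index block lists the file's data blocks in order. All names and block numbers below are made up for the example.

```python
# index block number -> ordered list of data block numbers (illustrative)
index_blocks = {9: [4, 7, 2]}

# directory: file name -> index block number (illustrative)
directory = {"notes.txt": 9}

def data_block(name, i):
    """Return the disk block holding the i-th block of the file."""
    return index_blocks[directory[name]][i]

print(data_block("notes.txt", 1))  # second block of the file lives in block 7
```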