Process Management MCQs & GATE PYQs - Operating Systems
Master Process Management for your Operating Systems course and the GATE exam. This guide covers key concepts, essential MCQs, and previous year questions (PYQs) to sharpen your skills in CPU scheduling, synchronization, and deadlock handling.
Process Management Concepts, Tricks & Tips
1. The Process & Its Structure
A process is a program in execution: an active, dynamic entity residing in main memory. In contrast, a program is a passive set of instructions stored in secondary memory.
Process vs. Program
| Feature | Program | Process |
|---|---|---|
| Nature | Passive entity, a set of instructions. | Active and dynamic entity. |
| Location | Resides in secondary memory. | Resides in main memory. |
| Resources | Is not allocated any resources. | Allocated resources (CPU, memory) by the OS. |
Process Memory Layout:
Stack: For temporary data like function parameters, return addresses, and local variables.
Heap: Memory that is dynamically allocated during runtime.
Data: Contains global and static variables.
Text: The compiled, executable code of the program.
Process Control Block (PCB):
The OS's data structure for managing a process.
Key Contents: Process State, Program Counter, CPU Registers, Priority, Memory limits, List of open files/devices.
Implementation: All PCBs are stored in main memory, often implemented using a doubly linked list data structure.
2. Process vs. Thread 👨👦
A thread is a lightweight process. Threads of the same process share the user address space, open files, and signal handlers, while each thread keeps its own stack and register set.
| Feature | Process (Heavyweight) | Thread (Lightweight) |
|---|---|---|
| Address Space | Each process has its own separate address space. | Threads share the address space of their parent process. |
| Communication | Requires Inter-Process Communication (IPC). | Can communicate directly through shared data. |
| Context Switching | Slow, as the OS must change memory maps. | Fast. |
| Creation | Slow and resource-intensive. | Fast and economical. |
| Fault Isolation | If one process fails, it does not affect others. | If one thread fails, it can crash the entire process. |
3. Process States & Transitions
Process Location:
Main Memory: A process resides here when it's in the Ready, Run, or Wait state.
Secondary Memory: A process resides here when it's in the Suspend Ready or Suspend Block state.
Key Identifier in State Diagrams:
A diagram with a Running → Ready transition indicates a preemptive system.
A diagram without a Running → Ready transition indicates a non-preemptive system.
State Occupancy:
At any given time, only one process can exist in the Run state on a single CPU.
4. Key Process State Triggers (Cause & Effect)
Understanding what event causes which state transition is crucial.
| Event | Cause | State Transition |
|---|---|---|
| Process Creation | fork() system call is executed. | A new process is created and placed in the Ready state. |
| I/O Request | Process makes a blocking system call. | Running → Blocked (Waiting) |
| Preemption | Timer interrupt occurs (time slice ends). | Running → Ready |
| Event Completion | I/O operation finishes for a process. | Blocked (Waiting) → Ready |
5. Scheduling & Execution Flow 🚦
Schedulers:
Long-Term Scheduler (Job Scheduler): Selects processes to be brought into main memory. It controls the degree of multiprogramming.
Medium-Term Scheduler (Swapper): Handles swapping processes between main and secondary memory.
Short-Term Scheduler (CPU Scheduler): Selects a process from the ready queue to be executed on the CPU (Ready → Run transition).
Dispatcher:
The module that gives control of the CPU to the process selected by the short-term scheduler. It is the component that performs the context switch.
6. Context Switching & Interrupts 🔄
A context switch is the mechanism to switch the CPU from one process to another, forming the basis of multi-tasking.
What MUST be Saved:
Program Counter, CPU Registers, Memory Management Info.
What is NOT Saved (but Flushed):
Translation Look-aside Buffer (TLB). The TLB's contents are specific to the old process's address space and are invalid for the new process.
Interrupts:
A signal (usually hardware) that triggers the OS. The Scheduler is software that responds to interrupts; it does not cause them. Common hardware sources include I/O devices, the system timer, and power failure.
7. Memory Management & Special Techniques
Swap Space:
What: An extension of RAM.
Where: Resides on the Disk (HDD or SSD).
Why: To store inactive processes or memory pages when physical RAM is full.
Crucial Distinction: Swapping is a heavy memory-management operation done under memory pressure. It is NOT a routine part of every context switch.
Memory Protection:
Goal: To prevent a process from accessing memory outside its allocated address space.
Mechanism: A hardware-enforced mechanism using base and limit registers.
Checkpointing:
Goal: Fault tolerance for long-running jobs.
Mechanism: Periodically saving the process's state to persistent storage (disk). Think of it as a "save point" in a game.
8. Inter-Process Communication (IPC) 💬
Mechanisms for cooperating processes to exchange data.
Shared Memory: Fast, but requires manual synchronization.
Message Passing: Slower (kernel-mediated), but easier to manage.
9. Special Process Types
Zombie Process 🧟: A terminated process whose entry still exists in the process table because its parent has not yet called wait() to read its exit status.
Orphan Process 🧍: A process whose parent has terminated. It is adopted by the init process (PID 1).
Practice MCQs & GATE PYQs
1. Suppose in a multiprogramming environment, the following C program segment is executed. A process goes into I/O queue whenever an I/O related operation is performed. Assume that there will always be a context switch whenever a process requests for an I/O, and also whenever the process returns from an I/O. The number of times the process will enter the ready queue during its lifetime (not counting the time the process enters the ready queue when it is run initially) is _______. (Answer in integer) [ GATE CSE 2025 SET-1 ]
```c
int main()
{
    int x = 0, i = 0;
    scanf("%d", &x);
    for (i = 0; i < 20; i++)
    {
        x = x + 20;
        printf("%d\n", x);
    }
    return 0;
}
```
Answer & Explanation
Correct Answer: 21
A process enters the Ready queue when it moves from the Waiting state (after completing an I/O operation) to being ready to run. Let's trace the I/O calls:
The `scanf("%d",&x);` is the first I/O operation. After it completes, the process moves from Waiting to Ready queue. (Count = 1)
The `for` loop runs 20 times. Inside the loop, `printf("%d\n",x);` is an I/O operation. This means the process will perform an I/O operation 20 times within the loop. Each time a `printf` call is completed, the process moves from the Waiting state to the Ready queue. (Count = 20)
Total number of times the process enters the ready queue = 1 (for scanf) + 20 (for printf) = 21.
Trick to Remember
Count every standard input/output function (`scanf`, `printf`, `gets`, etc.) as a trip to the I/O queue. The return journey from the Waiting state (after I/O) always lands the process back in the Ready queue.
2. Consider a process P running on a CPU. Which one or more of the following events will always trigger a context switch by the OS that results in process P moving to a non-running state (e.g., ready, blocked)? [ GATE CSE 2024 SET-2 ]
Answer & Explanation
Correct Answer: A and B
A context switch that moves the current process 'P' to a non-running state is guaranteed only when the process itself cannot continue its execution. Let's analyze the options:
A) Blocking System Call: Correct. When process P makes a blocking call (like for I/O), it cannot proceed. The OS must move P to the Blocked/Waiting state and switch the CPU to another process.
B) Page Fault: Correct. A major page fault occurs when the required memory page must be fetched from the disk. This is a slow I/O operation. Process P is moved to the Blocked/Waiting state until the page is loaded, which always triggers a context switch.
Why C and D are incorrect:
C) Interrupt for another process: Incorrect. An interrupt for another process's I/O completion is often handled by the DMA (Direct Memory Access) controller. The OS will handle the interrupt, but the currently running process P can often resume execution immediately afterward without being moved to a non-running state.
D) Timer Interrupt: Incorrect. A timer interrupt does not always cause a context switch. In non-preemptive scheduling, the interrupt is handled but the running process is not forced off the CPU. Even in preemptive scheduling, intermediate timer ticks may simply update accounting; the scheduler performs a context switch only when the process's full time slice has expired.
Trick to Remember
A context switch is guaranteed only when the currently running process P cannot continue. This happens when it explicitly waits for I/O (A) or is forced to wait for a page from disk (B). External events like timer interrupts (D) or I/O for other processes (C) can be handled by the OS without always stopping the current process.
3. Which of the following process state transitions is/are NOT possible? [ GATE CSE 2024 SET-1 ]
Answer & Explanation
Correct Answer: B and C
Let's look at a standard process state diagram to understand the valid transitions.
Based on the diagram, we can analyze the options:
A) Running to Ready: This is a possible transition. It occurs when the process's time slice expires (timer interrupt) or a higher-priority process becomes ready, causing preemption.
B) Waiting to Running: This is NOT possible. A process in the Waiting (or Blocked) state moves to the Ready state after its I/O operation is complete. It cannot bypass the Ready queue to go directly to Running.
C) Ready to Waiting: This is NOT possible. A process must be in the Running state to make an I/O request and move to the Waiting state. A process in the Ready state is simply waiting for CPU time and cannot perform I/O.
D) Running to Terminated: This is a possible transition. It happens when the process finishes its execution normally or is terminated by the operating system due to an error.
Trick to Remember
Think of the states like this: To request something (like I/O to go to Waiting), you must be active (Running). After you're done waiting, you have to get back in line (Ready) before you can be active again.
4. Dispatch latency is defined as... [ ISRO CSE 2020 ]
Answer & Explanation
Correct Answer: C) the time to stop one process and start running another one
Dispatch latency is the total time overhead required for the operating system's dispatcher to perform a context switch. It's the "downtime" during which no useful user-level work is being done.
This process includes:
Stopping the currently running process and saving its context (e.g., program counter, registers).
Loading the context of the new process that is scheduled to run.
Switching from kernel mode back to user mode and jumping to the correct location in the new process to resume its execution.
Trick to Remember 🚕
Think of the CPU as a taxi stand and processes as passengers. Dispatch latency is the total time it takes for one passenger to get out of the taxi and for the next passenger in line to get in and start their ride.
5. The operating system and the other processes are protected from being modified by an already running process because... [ ISRO CSE 2020 ]
Answer & Explanation
Correct Answer: D) every address generated by the CPU is being checked against the relocation and limit parameters.
This describes the mechanism of memory protection using base and limit registers. For each process, the OS sets a base register (the starting physical address) and a limit register (the size of the allowed memory range). Every time the process tries to access memory, the hardware (the Memory Management Unit, or MMU) checks if the address is between the base and (base + limit). If it's not, the hardware triggers a trap to the operating system, preventing the process from accessing memory that doesn't belong to it.
Concept to Remember
Memory protection is a hardware-enforced boundary check, not a software algorithm. The OS sets the rules (the base and limit values) when it loads a process, but the CPU hardware enforces those rules on every single memory access.
6. Consider the following statements about process state transitions for a system using preemptive scheduling.
A running process can move to ready state.
A ready process can move to running state.
A blocked process can move to running state.
A blocked process can move to ready state.
Which of the above statements are TRUE? [ GATE CSE 2020 ]
Answer & Explanation
Correct Answer: C) I, II and IV only
Let's analyze each transition based on a standard process state model:
I. Running to Ready: TRUE. This is the definition of preemption. A process is moved from Running to Ready if its time slice expires or a higher-priority process enters the Ready state.
II. Ready to Running: TRUE. This transition is performed by the dispatcher when it allocates the CPU to a process from the ready queue.
III. Blocked to Running: FALSE. This is an impossible transition. A process that has finished waiting for an event (e.g., I/O) must first go to the Ready state to wait for its turn on the CPU. It cannot bypass the Ready queue.
IV. Blocked to Ready: TRUE. This is the correct transition for a process when the event it was waiting for has occurred.
Therefore, only statements I, II, and IV are correct.
Concept to Remember 🛡️
A process that finishes waiting (Blocked/Waiting) must always get back in line (Ready) before it can run again. It never gets to cut the line and go straight from Blocked to Running.
7. Working Set (defined by a time window 't') at an instant of time is [ ISRO CSE 2015 ]
Answer & Explanation
Correct Answer: D) the set of pages that have been referenced in the last t time units
The Working Set Model is a memory management strategy based on the principle of locality of reference. It assumes that the set of pages a process will need in the near future is closely approximated by the set of pages it has used in the recent past.
The "working set" is the set of unique pages a process has referenced over a specific, backward-looking time window (denoted by 't' or Δ). The OS ensures that all pages in a process's current working set are kept in main memory to reduce page faults.
Options A and B are incorrect because the model looks at the past, not the future.
Option C is incorrect because the model is based on recency (referenced within the window), not frequency (how many times it was referenced).
Concept to Remember 📚
Think of the Working Set as a student's desk. The pages and books currently on the desk are the ones they've used recently. This is their 'working set' of materials. It's not about what they'll use tomorrow (future) or which book they've opened most often (frequency), but what's relevant right now based on their activity in the recent past.
8. Suppose a system contains 'n' processes and the system uses the round-robin algorithm for CPU scheduling. Which data structure is best suited for the ready queue of the processes? [ ISRO CSE 2015 ]
Answer & Explanation
Correct Answer: C) circular queue
The Round-Robin (RR) scheduling algorithm is inherently fair and uses a First-In, First-Out (FIFO) approach. The ready queue is treated as a circle of processes.
Here’s how it works:
The scheduler picks the process at the head of the queue to run.
After its time quantum expires, the process is moved to the tail of the queue.
A circular queue is the most efficient data structure for this task. It implements the FIFO policy and naturally handles the "wrap-around" logic of moving an element from the front to the back without needing to shift all other elements, which would be inefficient.
A stack (LIFO) would cause starvation.
A standard queue is functionally correct, but a circular queue is a more efficient implementation for this specific use case.
A tree is used for priority-based scheduling, not the equal-priority nature of RR.
Concepts to Remember 🔄
Think of the name "Round-Robin" itself. It implies a circular or round-table fashion. A circular queue perfectly models this behavior of processes taking turns one after another and then "going to the back of the line" to wait for their next turn.
9. The maximum number of processes that can be in the Ready state for a computer system with 'n' CPUs is [ GATE CSE 2015 SET-3 ]
Answer & Explanation
Correct Answer: D) Independent of n
It's crucial to distinguish between the Running state and the Ready state.
The number of CPUs (n) limits the maximum number of processes that can be in the Running state at any single moment. If you have 'n' CPUs, you can run 'n' processes simultaneously.
The Ready state, however, is a queue for processes that are fully prepared to run and are just waiting for a CPU to become free. The size of this queue is not constrained by the number of CPUs; it is primarily limited by the amount of available main memory (RAM) in the system.
Therefore, a system can have a very large number of processes in the Ready state, regardless of how many CPUs it has.
Concept to Remember 🛒
Think of CPUs as checkout counters in a supermarket and processes as shoppers. The number of counters (n) limits how many shoppers can be actively checking out at once (the Running state). However, the number of shoppers waiting in line (the Ready state) is only limited by the size of the store (system memory), not by the number of counters.
10. The state of a process after it encounters an I/O instruction is? [ ISRO CSE 2013 ]
Answer & Explanation
Correct Answer: B) Blocked
When a process currently using the CPU (in the Running state) needs to perform a slow I/O operation (like reading from a file or waiting for network data), it cannot continue executing.
The operating system moves the process to the Blocked state (also called the Waiting state) to wait for the I/O to complete. This allows the CPU scheduler to pick another process from the Ready state to run, ensuring the CPU stays busy.
Concept to Remember 👨🍳
Think of a chef (the process) cooking on a stove (the CPU). When the chef needs to wait for an ingredient to bake in the oven (an I/O operation), they don't just stand there staring at it. They move away from the stove (enter the Blocked state) so another chef can use it.
11. There are three processes in the ready queue. When the currently running process requests for I/O, how many process switches take place? [ ISRO CSE 2011 ]
Answer & Explanation
Correct Answer: B) 2
When a running process requests I/O, a two-step context switch sequence occurs:
Switch 1 (Save State): The operating system saves the context of the currently running process and moves it to the Blocked/Waiting state.
Switch 2 (Load State): The operating system then selects a process from the ready queue and loads its context to begin its execution.
The number of processes waiting in the ready queue (three, in this case) does not affect the number of switches required for this specific event.
Concept to Remember 🏎️
Think of it like a driver swap in a relay race. When one driver (the running process) needs to refuel (requests I/O), two actions happen: 1. The current driver gets out of the car. 2. The new driver gets into the car. It's a two-step swap, regardless of how many other drivers are waiting in line.
12. A process is [ ISRO CSE 2009 ]
Answer & Explanation
Correct Answer: C) A program in execution
This is the fundamental definition in operating systems. It's crucial to distinguish between a program and a process:
A Program is a passive, static entity. It's an executable file stored on the disk, containing a list of instructions.
A Process is an active, dynamic entity. It's an instance of a program that has been loaded into memory and is currently running. It has its own resources, such as a program counter, stack, and memory space.
Concept to Remember 📜 vs 👨🍳
Think of a program as a recipe in a cookbook (a static set of instructions). A process is the chef actively cooking that recipe—gathering ingredients, using the stove, and creating the dish (a dynamic activity).
13. Special software to create a job queue is called a [ ISRO CSE 2009 ]
Answer & Explanation
Correct Answer: B) Spooler
A spooler is a program that manages jobs in a queue for a device, such as a printer, that can only handle one task at a time. The term "spool" is an acronym for Simultaneous Peripheral Operations On-Line.
The most common example is a print spooler. When you print multiple documents, the spooler saves them to a buffer (a queue on the disk) and then sends them to the printer one by one. This allows you to continue working on your computer without having to wait for the slow printing process to finish.
Concept to Remember 🎢
Think of a spooler like the operator of a single-person rollercoaster. They take requests from many people (the 'jobs'), line them up in a queue, and let one person on the ride (the 'device') at a time. This keeps everything orderly and efficient.
14. Which is the correct definition of a valid process transition in an operating system? [ ISRO CSE 2009 ]
Answer & Explanation
Correct Answer: B) Dispatch: ready → running
This is the correct term for the action of the scheduler (specifically the dispatcher module) selecting a process from the ready queue and allocating the CPU to it.
Let's review why the other options are incorrect:
Wake Up: This term describes a process moving from the Blocked/Waiting state to the Ready state after an event it was waiting for (like I/O completion) has occurred.
Block: This is when a process moves from the Running state to the Blocked/Waiting state because it requested an I/O operation.
Timer runout: This occurs in preemptive scheduling when a process's time slice expires, causing it to move from the Running state to the Ready state.
Concept to Remember 👨💼
Think of the OS Dispatcher as a manager at a service counter. When a customer is waiting in line (Ready), the manager 'dispatches' them to an available cashier to be served (Running).
15. In the following process state transition diagram for a uniprocessor system, assume that there are always some processes in the ready state:
Now consider the following statements:
If a process makes a transition D, it would result in another process making transition A immediately.
A process P2 in blocked state can make transition E while another process P1 is in running state.
The OS uses preemptive scheduling.
The OS uses non-preemptive scheduling.
Which of the above statements are TRUE? [ GATE CSE 2009 ]
Answer & Explanation
Correct Answer: C) II and III
Let's analyze each statement based on the diagram:
I. If a process makes transition D (Terminate), another makes A (New → Ready) immediately: FALSE. Transition D (termination) simply frees up the CPU. The OS will then schedule another process from the ready queue. This does not cause a new process to be created and admitted (Transition A).
II. A process P2 can make transition E (Blocked → Ready) while P1 is running: TRUE. Transition E happens when an I/O operation completes. This is an external event handled by the OS and hardware (like DMA) and is independent of the process currently running on the CPU.
III. The OS uses preemptive scheduling: TRUE. The existence of transition C (Running → Ready) is the defining characteristic of preemptive scheduling. This transition occurs when a process is forced to give up the CPU, for example, due to a timer interrupt or a higher-priority process becoming ready.
IV. The OS uses non-preemptive scheduling: FALSE. Since statement III is true, this must be false. A non-preemptive system would not have the Running → Ready transition.
Concept to Remember 🧐
The key to this question is the transition from Running to Ready. If this path exists, the system is preemptive. Also, remember that I/O operations and their completion are asynchronous events that don't depend on the currently running process.
16. Which of the following need not necessarily be saved on a Context Switch between processes? 💾 [ ISRO CSE 2008 ]
Answer & Explanation
Correct Answer: B) Translation look-aside buffer (TLB)
During a context switch, the OS must save the state of the outgoing process so it can be resumed later. This state includes the general-purpose registers (which hold current calculations), the program counter (the next instruction to run), and the stack pointer (the state of function calls). These are all essential for the process's execution context.
The Translation Look-aside Buffer (TLB), however, is a hardware cache that stores recent virtual-to-physical address translations for the *currently running process*. When the OS switches to a different process, these cached translations become invalid because the new process has its own, separate address space. Instead of being saved, the TLB is typically flushed (cleared) on a context switch.
Concept to Remember
A context switch saves the unique software state of a process. Hardware caches like the TLB, which are tied to the current memory map, are flushed because their contents become irrelevant after switching to a new process with a different memory map.
17. Checkpointing a job... 💾⏱️ [ ISRO CSE 2008 ]
Answer & Explanation
Correct Answer: B) allows it to continue executing later.
Checkpointing is a fault tolerance technique used for long-running computational jobs. It involves periodically saving the entire state of the process (memory, CPU registers, etc.) to persistent storage.
If the system crashes or the job fails for any reason, it doesn't need to start over from the beginning. Instead, it can be restarted from the most recent checkpoint, saving hours or even days of computation time.
Concept to Remember
Think of checkpointing as creating a "save point" in a video game. You don't save because you've made an error; you save proactively so that if you fail later, you can resume from that point instead of starting the whole game over.
18. A task in a blocked state... 🚦 [ ISRO CSE 2007 ]
Answer & Explanation
Correct Answer: D) is waiting for some temporarily unavailable resources.
In the process state model, a task (or process) enters the Blocked (or Waiting) state when it cannot continue execution until some external event occurs. This event is typically the completion of an I/O operation, acquiring a lock, or waiting for user input.
While blocked, the process is ineligible to be run by the CPU, even if the CPU is free. It is moved out of the running state to allow other, "ready" processes to run. Once the event it was waiting for completes, it transitions to the "Ready" state.
Concept to Remember
Think of the process states like this:
Running: Actively using the CPU.
Ready: Has everything it needs to run, just waiting for its turn on the CPU.
Blocked: Cannot run because it's waiting for an external resource (like the disk or network).
19. What is the name of the technique in which the operating system of a computer executes several programs concurrently by switching back and forth between them? 🔄 [ ISRO CSE 2007 ]
Answer & Explanation
Correct Answer: B) Multi-tasking
Multi-tasking is the core concept that allows a user to run multiple applications (like a web browser, a text editor, and a music player) at the same time. On a system with a single CPU core, the OS achieves this by rapidly switching its attention between the different programs in a process called context switching. This happens so quickly that it creates the illusion of parallel execution.
The other options are different OS concepts:
Partitioning & Paging: These are memory management techniques.
Windowing: This is a feature of a graphical user interface (GUI).
Concept to Remember
Think of multi-tasking like a chef in a kitchen. The chef switches between chopping vegetables, stirring a sauce, and checking the oven. They are only doing one action at any precise moment, but by switching tasks quickly, they make progress on the entire meal concurrently.
20. The process state transition diagram of an operating system is as given below.
Which of the following must be FALSE about the above operating system? 🤔 [ GATE IT 2006 ]
Answer & Explanation
Correct Answer: B) It uses preemptive scheduling
The key to this question is looking at the transitions from the Running state. In this diagram, a process can only leave the Running state by transitioning to Exit.
A preemptive scheduling system allows the operating system to forcibly stop a running process and move it back to the Ready state (for example, when its time slice expires). This would be represented by an arrow from "Running" to "Ready". Since that arrow is missing, the system cannot be preemptive. Therefore, the statement that it uses preemptive scheduling must be false.
Because there is no preemption, it must be a non-preemptive system (C is True). The existence of a "Ready" state implies multiple processes can be ready to run, which is the definition of multiprogramming (A is plausible/True).
Concept to Remember
In a process state diagram, the arrow from Running → Ready is the defining feature of preemptive scheduling. If that arrow is missing, the scheduling is non-preemptive.
21. What is the swap space in the disk used for? 💾↔️🧠 [ GATE CSE 2005 ]
Answer & Explanation
Correct Answer: B) Saving process data
Swap space is a dedicated area on a hard disk that the operating system uses as an extension of the main memory (RAM). This is a key component of a virtual memory system.
When the physical RAM becomes full, the OS's memory manager can move inactive memory pages or entire processes from RAM to the swap space. This action, known as "swapping out," frees up RAM for active processes. When the data is needed again, it is "swapped in" back to RAM. This allows the system to run more or larger programs than it could fit into physical RAM alone.
Concept to Remember
Think of RAM as your physical desk and swap space as a nearby filing cabinet. When your desk gets cluttered with documents you aren't currently working on, you move them to the cabinet (swapping out) to make room. When you need a document again, you retrieve it from the cabinet and put it back on your desk (swapping in).
22. Which of the following need not necessarily be saved on a context switch between processes? 🔄 [ GATE CSE 2000 ]
Answer & Explanation
Correct Answer: B) Translation look-aside buffer
During a context switch, the OS must save the execution state of the outgoing process so it can be resumed later. This state includes the general-purpose registers, the program counter, and the stack pointer.
The Translation Look-aside Buffer (TLB), however, is a hardware cache that stores recent virtual-to-physical address translations for the currently running process. When the OS switches to a different process with its own address space, these cached translations become invalid. Instead of being saved, the TLB is simply flushed (cleared) on a context switch.
Concept to Remember
A context switch saves a process's unique software state. Hardware optimizations tied to the current memory map, like the TLB, are flushed because their cached data is irrelevant to the incoming process.
23. Which of the following actions is/are typically not performed by the operating system when switching context from process A to process B? 🤔 [ GATE CSE 1999 ]
Answer & Explanation
Correct Answer: C) Swapping out the memory image of process A to the disk.
A context switch is a very frequent and lightweight operation. Its essential tasks include:
(A) Saving and Restoring Registers: The CPU's state (program counter, general registers) for process A is saved, and the state for process B is loaded. This is the core of a context switch.
(B) Changing Address Translation Tables: The OS tells the MMU to use process B's page tables instead of process A's, effectively changing the virtual memory view.
(D) Invalidating the TLB: The Translation Look-aside Buffer, which caches address translations for process A, is flushed because those translations are invalid for process B.
Swapping, on the other hand, is a much heavier, less frequent memory management operation. It involves moving an entire process's memory from RAM to the disk's swap space and only happens when the system is low on physical memory. It is not a routine part of every context switch.
Concept to Remember
A context switch is like a chef quickly putting down one tool (a knife) to pick up another (a whisk). Swapping is like taking an entire mixing bowl off the counter and putting it in the refrigerator to make space. The first action is fast and frequent; the second is slow and only done when necessary.
24. The process state transition diagram in the below figure is representative of... [ GATE CSE 1996 ]
Answer & Explanation
Correct Answer: B) an operating system with a preemptive scheduler
The most important feature in this diagram is the arrow that goes from the Running state back to the Ready state. This transition represents preemption.
It signifies that the operating system has the power to interrupt a process that is currently running (for instance, because its time slice has expired or a higher-priority process has become ready) and move it back to the ready queue to wait for the CPU again. This is the defining characteristic of a preemptive scheduling system.
Concept to Remember
In a process state diagram, the arrow from Running → Ready is the definitive sign of a preemptive scheduler. Without this arrow, the scheduler would be non-preemptive, meaning a process only leaves the running state when it voluntarily blocks or terminates.
25. Which of the following does not interrupt a running process? 🚫⚡️ [ GATE CSE 2001 ]
Answer & Explanation
Correct Answer: D) Scheduler process
An interrupt is a signal sent to the CPU, typically by a hardware component, that temporarily stops (interrupts) the currently executing process to handle a more important event.
(A) A device: An I/O device (like a disk or network card) sends an interrupt when it has completed a task.
(B) Power failure: Hardware sends a high-priority interrupt to allow the OS to shut down gracefully.
(C) Timer: A hardware timer sends interrupts at regular intervals, which the OS uses for preemptive multitasking.
The scheduler, however, is a piece of software within the OS kernel. It does not generate interrupts itself. Instead, the scheduler is often executed in response to an interrupt. For example, when a timer interrupt occurs, the OS runs an interrupt handler which then calls the scheduler to decide which process should run next.
Concept to Remember
Think of it this way: Hardware events (like the phone ringing) are the cause of the interruption. The scheduler is part of the response (you deciding whether to answer the phone or ignore it). The scheduler doesn't cause the interruption; it's the decision-maker that runs because of it.
26. Where does the swap space reside? 💾 [ GATE CSE 2001 ]
Answer & Explanation
Correct Answer: B) Disk
Swap space is a core component of a virtual memory system. It is a dedicated partition or file on a secondary storage device, such as a hard disk drive (HDD) or solid-state drive (SSD), which is generally referred to as the disk.
The operating system uses this space as an extension of physical memory (RAM). When RAM is full, the OS can move inactive pages of memory from RAM to the swap space on the disk to free up RAM for active processes.
Concept to Remember
Think of RAM as your fast, but limited, desk space. The disk is your large, but slower, filing cabinet. Swap space is a specific drawer in that filing cabinet used exclusively for the overflow from your desk when it gets too full.
27. In a multi-programmed operating system using preemptive scheduling, match the events in List-I with the most appropriate process state transitions in List-II.
List-I (Event)
P. A process makes a blocking system call for I/O.
Q. A timer interrupt occurs.
R. An I/O operation, for which a process was waiting, completes.
S. A running process executes a `fork()` system call.
List-II (State Transition)
1. Running → Ready
2. Running → Blocked (Waiting)
3. Blocked (Waiting) → Ready
4. A new process is created and put into the Ready state.
Answer & Explanation
Correct Answer: B) P-2, Q-1, R-3, S-4
Let's analyze each event:
P. I/O Request: When a process requests I/O, it cannot continue until the I/O is complete. The OS moves it from the Running state to the Blocked state to wait.
(Matches P → 2)
Q. Timer Interrupt: In a preemptive system, a timer interrupt signals the end of a process's time slice. The OS forcibly stops the process and moves it from the Running state to the Ready state so another process can run.
(Matches Q → 1)
R. I/O Completion: When the I/O device finishes its task for a waiting process, that process is no longer blocked. It has everything it needs to continue, so the OS moves it from the Blocked state to the Ready state to await its next turn on the CPU.
(Matches R → 3)
S. Fork System Call: The `fork()` call creates a new child process. This new process is created and placed in the Ready state, ready to be scheduled by the OS.
(Matches S → 4)
Concept to Remember
This single question tests your knowledge of the complete process life cycle: how processes handle I/O (Running ↔ Blocked), how they are managed by a preemptive scheduler (Running → Ready), how they become eligible to run again (Blocked → Ready), and how they are created (`fork()`). Mastering these fundamental transitions is key to solving complex OS problems.
28. Which of the following statements accurately describe the relationship between processes and threads? 👨👦 (MSQ)
Answer & Explanation
Correct Answer: (A) and (D)
(A) is correct because threads are lightweight and share the memory space of their parent process, including the heap, data, and code segments. They only have their own separate stack and registers.
(B) is incorrect. Thread context switches are much faster because they don't involve changing the memory address space, unlike process context switches.
(C) is incorrect. This is only true for user-level threads. For kernel-level threads, one thread blocking does not block the others in the same process.
(D) is correct. The OS uses a PCB to store the context of a process and a TCB to store the unique context of each thread.
Concept to Remember
Threads share memory (heap, data) but have their own stack. Process switches are slow (memory map change); thread switches are fast.
29. Consider the following C program. Assuming the `fork()` calls are successful, how many times will "GATE 2026" be printed? (MCQ)
Answer & Explanation
Correct Answer: 4 times (two successful `fork()` calls yield $2^2 = 4$ processes, each printing once)
With 'n' `fork()` calls, the total number of processes, including the original, is $2^n$.
Initially, there is 1 process.
The first `fork()` creates 1 child, making a total of 2 processes.
The second `fork()` is executed by both of these processes, so each creates a new child. This results in $2 \times 2 = 4$ total processes.
Each of the 4 processes will execute the `printf` statement.
Concept to Remember
For 'n' `fork()` calls in a simple program, the total number of processes becomes $2^n$. Each process will execute the code following the forks.
30. In an operating system, which component is directly responsible for performing the context switch by loading the state of a new process onto the CPU? (MCQ)
Answer & Explanation
Correct Answer: (D) Dispatcher
The Short-Term Scheduler *selects* the next process to be executed. The Dispatcher is the module that takes the decision from the Short-Term Scheduler and *performs the action* of loading the new process's context onto the CPU.
Concept to Remember
The Short-Term Scheduler decides which process runs next. The Dispatcher acts on that decision, performing the context switch.
31. During a context switch from Process A to Process B, which of the following actions are necessarily performed by the OS? (MSQ)
Answer & Explanation
Correct Answer: (B) and (D)
(A) is incorrect. The TLB contains address translations specific to Process A. These are invalid for Process B, so the TLB is flushed (invalidated), not saved.
(B) is correct. The Program Counter is a critical part of a process's context and must be saved so the process can be resumed later.
(C) is incorrect. Swapping is a heavy memory-management operation that only occurs under memory pressure. It is not a routine part of every context switch.
(D) is correct. To switch the virtual memory view from Process A to Process B, the OS must update the memory management unit (MMU) to use B's address translation tables.
Concept to Remember
A context switch always saves the CPU state (like the PC) and changes the memory view (page table pointers). It does not always involve swapping to disk, and the TLB is flushed rather than saved.
32. A process is in the Suspend Blocked state, meaning it is in secondary memory and waiting for an I/O event. If the I/O event for which it was waiting completes, what is the next state for this process? (MCQ)
Answer & Explanation
Correct Answer: (C) Suspend Ready
When the I/O event completes, the process is no longer "Blocked" or "Waiting". However, it cannot be moved to the "Ready" state directly because it is still in secondary memory. It first transitions to the Suspend Ready state. From there, the Medium-Term Scheduler will eventually swap it back into main memory, at which point it will be in the "Ready" state.
Concept to Remember
A process in secondary memory (suspended) cannot move directly to a main memory state (Ready). An event completion moves it from Suspend Blocked to Suspend Ready.
33. A process that has completed its execution but whose entry remains in the process table because the parent process has not yet read its exit status is known as a(n): (MCQ) 🧟
Answer & Explanation
Correct Answer: (C) Zombie Process
A Zombie Process is a terminated process that is waiting for its parent to call `wait()` and collect its exit status. An Orphan Process is one whose parent has terminated before it; it gets adopted by the `init` process.
Concept to Remember
A Zombie is dead but not reaped by its parent (`wait()`). An Orphan is alive but its parent has terminated; it gets adopted by `init`.
34. Among the Long-Term, Medium-Term, and Short-Term schedulers, which one is executed most frequently in a typical time-sharing operating system? (MCQ)
Answer & Explanation
Correct Answer: (B) Short-Term Scheduler
The Short-Term (CPU) Scheduler must select a new process to run whenever a CPU decision needs to be made (e.g., after a timer interrupt, I/O completion, or system call). This happens multiple times per second. The Long-Term Scheduler runs much less frequently, only when a new job is submitted to the system. The Medium-Term Scheduler runs even less frequently, only when the system needs to manage the degree of multiprogramming by swapping processes.
Concept to Remember
The Short-Term (CPU) scheduler is the most frequently executed as it makes decisions on a micro-level, often triggered by timer interrupts many times per second.
35. When a process running in user mode executes a system call to read a file, which of the following events occur as part of the system call mechanism? (MSQ)
Answer & Explanation
Correct Answer: (A) and (C)
(A) is correct. To protect the OS, system calls are executed in kernel mode. The hardware switches from user mode (mode bit 1) to kernel mode (mode bit 0) when the system call is invoked.
(B) is incorrect. A system call is a software-generated interrupt, often called a trap, not a hardware interrupt from the timer.
(C) is correct. The user program must pass parameters (like the file name and buffer location) to the kernel so the OS knows what to do. This is a necessary step.
(D) is incorrect. The process will likely move to the Blocked or Waiting state while the file read occurs, not the Terminated state.
Concept to Remember
A system call is a software-triggered 'trap' that switches the CPU from user mode to kernel mode to securely access OS services.
36. A Process Control Block (PCB) contains various pieces of information about a specific process. Which of the following is generally NOT stored within a process's PCB? (MCQ)
Answer & Explanation
Correct Answer: (B) The ready queue
The PCB stores information specific to a single process, such as its registers, priority, and open files. The ready queue is a separate data structure maintained by the operating system that contains pointers to the PCBs of all processes that are in the ready state. While a PCB might contain a pointer to the next PCB in a queue, the entire queue itself is not stored inside one process's PCB.
Concept to Remember
A PCB is the 'ID card' for one process. The ready queue is the 'waiting line' where many processes stand; the line itself is not part of any single person's ID card.
37. The Long-Term Scheduler aims to maintain a good mix of CPU-bound and I/O-bound processes in memory. A process is considered I/O-bound if it: (MCQ)
Answer & Explanation
Correct Answer: (C) Spends more time waiting for I/O operations to complete than doing computations.
The distinction between CPU-bound and I/O-bound relates to how a process spends its time. A CPU-bound process performs long computations with infrequent I/O waits (A and D). An I/O-bound process performs short computations followed by frequent waits for I/O to complete (C). The Long-Term scheduler tries to balance these to keep both the CPU and I/O devices busy.
Concept to Remember
I/O-bound processes are characterized by short CPU bursts and long I/O waits, while CPU-bound processes have long CPU bursts and infrequent I/O waits.
38. A C program contains 4 fork() system calls executed in sequence. If all calls are successful, how many new child processes are created? (MCQ)
Answer & Explanation
Correct Answer: (B) 15
The formula for the number of new child processes created by 'n' successful fork() calls is $2^n - 1$. In this case, n = 4.
Therefore, the number of child processes is $2^4 - 1 = 16 - 1 = 15$. The total number of processes (including the original parent) would be $2^4 = 16$.
Concept to Remember
Be careful with fork() questions! The number of child processes is $2^n - 1$, while the total number of processes (including the parent) is $2^n$.
39. Based on the distinction between a program and a process in an operating system, which of the following statements are correct? (MSQ)
Answer & Explanation
Correct Answer: (B) and (C)
(A) is incorrect. A program is a passive set of instructions, while a process is an active and dynamic entity.
(B) is correct. A program is stored on a persistent medium like a hard disk (secondary memory), whereas a process is loaded into RAM (main memory) for execution.
(C) is correct. This is the standard definition of a process.
(D) is incorrect. The operating system allocates resources to a process for its execution, not to a passive program.
Concept to Remember
A program is a passive blueprint on disk. A process is the active, running instance of that blueprint in memory, with allocated resources.
40. The transition of a process from the New state to the Ready state is the responsibility of which component? (MCQ)
Answer & Explanation
Correct Answer: (D) Long-Term Scheduler
The Long-Term Scheduler (or Job Scheduler) is responsible for selecting processes from the job pool on the disk and bringing them into main memory. This act corresponds to the New → Ready state transition. The Short-Term Scheduler handles the Ready → Run transition, and the Dispatcher performs the context switch itself.
Concept to Remember
Remember the scheduler roles by their scope: Long-Term (Disk to Memory), Medium-Term (Memory to Disk and back), Short-Term (Memory to CPU).