Master Linked List concepts for your Data Structures course and the GATE exam. This guide covers key concepts, essential MCQs, and previous year questions (PYQs) to sharpen your skills in linked lists.
Linked List Concepts, Tricks & Tips
Core Linked List Concepts
Definition: A linear data structure where elements are not stored in contiguous memory locations.
Nodes: Each node contains a data field and a pointer (or link) to the next node.
Singly Linked List (SLL): Each node points only to the next node. Traversal is one-way.
Doubly Linked List (DLL): Each node contains a pointer to the next node and a pointer to the previous node.
Circular Linked List (CLL): A list where all nodes form a circle; the last node's `next` pointer points back to the first node (head). There is no `NULL` end.
Circular DLL (CDLL): Combines DLL and CLL properties. The last node's `next` points to the head, and the head's `prev` points to the last node.
Key Time Complexities (Worst-Case)
This table summarizes the worst-case time complexities for common operations. ($n$ = number of nodes).
| Operation | Singly (SLL) | Doubly (DLL) | Circular (CLL w/ Rear Ptr) |
|---|---|---|---|
| Search | $O(n)$ | $O(n)$ | $O(n)$ |
| Insert (Front) | $O(1)$ | $O(1)$ | $O(1)$ |
| Insert (End) | $O(n)$ | $O(n)$ | $O(1)$ |
| Delete (Front) | $O(1)$ | $O(1)$ | $O(1)$ |
| Delete (End) | $O(n)$ | $O(n)$ | $O(n)$ (still need `prev`) |
| Delete (Given Node Ptr) | $O(n)$ (must find `prev`) | $O(1)$ (has `prev` ptr) | $O(n)$ (must find `prev`) |
Note on SLL Delete (End): even with a pointer to the last node, deletion is $O(n)$ because you must reach the second-to-last node to update its `next` pointer.
Key Algorithms & Tricks
Sorting a Linked List: Merge Sort is the most efficient choice, with $O(n \log n)$ complexity. QuickSort performs poorly because a linked list has no random access, and HeapSort is effectively ruled out for the same reason (it relies on index arithmetic over an array).
Building a Sorted List: Inserting $n$ elements one by one into a sorted list is $O(n^2)$ (equivalent to Insertion Sort).
SLL Reversal (Iterative): The standard method uses three pointers: `prev`, `current`, and `next`. After the loop, `prev` is the new head. Don't forget the final step: `*head_ref = prev;`.
DLL Reversal (Iterative): Iterate through the list, swapping the `prev` and `next` pointers for each node.
Recursive Traversal:
`fun(head->next); printf(...);`: Prints the list in reverse order.
`printf(...); fun(head->next->next); printf(...);`: Prints alternate nodes on the way "down" and again on the way "up" (e.g., 1 3 5 5 3 1).
Queue with One Pointer: A Circular Linked List can implement a Queue with $O(1)$ enqueue/dequeue using a single pointer to the rear node.
Move Last to Front: Requires $O(n)$ traversal to find the second-to-last node. The steps are: `q->next = NULL;` (disconnect), `p->next = head;` (link), `head = p;` (update head).
Find "Not Max/Min": Can be done in $O(1)$ time. Pick any 3 elements; the median of that group is a valid answer.
Linked Lists: Dynamic size. Fast $O(1)$ insertion/deletion (at known position). $O(n)$ sequential access (no random access). Uses extra memory for pointers.
Linked List Practice MCQs & GATE PYQs
1. In a circular linked list organisation, insertion of a record involves modification of: [ GATE CSE 1987 ]
Answer & Explanation
Correct Answer: B) Two pointers.
When inserting a new node into a circular linked list (for example, inserting `newNode` after `nodeA`), you need to perform two pointer modifications:
1. Set `newNode.next` to point to `nodeA.next`.
2. Set `nodeA.next` to point to `newNode`.
This correctly links the new node into the circle.
Trick to Remember
Think of adding a link to a closed chain. You have to connect the new link to the link after it (pointer 1) and connect the link before it to the new link (pointer 2).
2. Linked lists are not suitable data structures for which one of the following problems? [ GATE CSE 1994 ]
Answer & Explanation
Correct Answer: B) Binary search
Binary search relies on the ability to access the middle element of a data structure directly and efficiently (in constant time, or O(1)). This is known as random access. Linked lists, by their nature, only allow sequential access, meaning you have to traverse the list from the head to reach an element. Finding the middle element of a linked list takes linear time (O(n)), which defeats the purpose and efficiency of the binary search algorithm.
Linked lists are suitable for:
Insertion sort: no swaps are needed; just find the appropriate position and relink the pointers.
Polynomial manipulation: a linked list is a natural representation for a polynomial's terms.
Radix sort: digits are distributed into buckets by position (units, tens, ...), which linked lists handle efficiently.
Concept Required
To answer this, you need to understand the fundamental difference between data access patterns:
Random Access: The ability to access any element in a collection in constant time, regardless of its position. Arrays provide random access (e.g., `myArray[100]`).
Sequential Access: The need to traverse elements in order from the beginning to reach a specific element. Linked lists are a primary example of this.
Concept to Remember
Binary search needs to "jump" to the middle of a list. Arrays are like rulers where you can instantly point to any number. Linked lists are like a chain of clues, where you must follow each one to get to the next.
3. For merging two sorted lists of sizes m and n into a sorted list of size m+n, we require comparisons of: [ GATE CSE 1995 ]
Answer & Explanation
Correct Answer: C) $O(m+n)$
The standard algorithm to merge two sorted lists involves iterating through both lists with pointers. In the worst-case scenario, you make one comparison for each element you add to the new list until one of the lists is exhausted. The total number of operations is therefore proportional to the sum of the lengths of the two lists, giving a time complexity of O(m+n).
Concept to Remember
Think of merging two lines of people already sorted by height. To create a single sorted line, you'd look at the first person in each line, pick the shorter one, and ask them to move to the new line. You have to repeat this for nearly every person in both lines. The total effort depends on the total number of people (m+n).
4. A circularly linked list is used to represent a Queue. A single variable p is used to access the Queue. To which node should p point such that both the operations enQueue and deQueue can be performed in constant time? [ GATE CSE 2004 ]
Answer & Explanation
Correct Answer: A) rear node
By pointing to the rear node of the circular list, we can access both the rear (for enQueue) and the front (for deQueue) in constant time, O(1). The front node is simply `rear->next`.
Let's see how the operations work with the pointer `p` pointing to the rear node:
EnQueue (Adding an element)
To insert a new node, we link it after the current rear node, and then update `p` to point to this new rear node. This takes constant time.
// p points to the rear node
// 'item' is the new node to be inserted
void enqueue(struct node *item) {
item->next = p->next; // New node points to the old front
p->next = item; // Old rear points to the new node
p = item; // p is updated to be the new rear
}
DeQueue (Removing an element)
To remove the front element, we access it via `p->next`. We then adjust the rear's `next` pointer to skip the old front node and free its memory. This also takes constant time.
// p points to the rear node
void dequeue() {
// temp points to the front node (p->next)
struct node* temp = p->next;
// The rear now points to the second node in the list
p->next = p->next->next;
// Free the memory of the old front node
free(temp);
}
Concept to Remember
In a circular list, the "end" is connected back to the "beginning." By holding a pointer to the **rear** element, you get a free pointer to the **front** element (`rear->next`). This two-for-one access is the key to implementing a queue efficiently.
5. Consider the function f defined below. [ GATE CSE 2003 ]
For a given linked list p, the function f returns 1 if and only if:
Answer & Explanation
Correct Answer: B) the elements in the list are sorted in non-decreasing order of data value
This is a recursive function that checks if a linked list is sorted. Let's break down the return statement:
(p == NULL): This is the first base case. If the list is empty, it's considered sorted, so it returns true (1).
(p->next == NULL): This is the second base case. If the list has only one element, it's also considered sorted, and it returns true (1).
((p->data <= p->next->data) && f(p->next))): This is the recursive step. It returns true only if both conditions are met:
The current node's data is less than or equal to the next node's data.
The rest of the list, starting from the next node, is also sorted (checked by the recursive call f(p->next)).
This logic effectively checks every pair of adjacent elements to ensure they are in non-decreasing (ascending) order.
Concept to Remember
The function's logic is based on a fundamental recursive principle: "A list is sorted if its first element is less than or equal to its second, AND the rest of the list is also sorted." The base cases (empty or single-element lists) provide the stopping point for the recursion.
6. In the worst case, the number of comparisons needed to search a singly linked list of length n for a given element is: [ GATE CSE 2002 ]
Answer & Explanation
Correct Answer: D) n
A linked list only supports sequential access. This means you cannot jump to the middle of the list; you must start at the head and traverse from one node to the next. 🧐
In the worst-case scenario, the element you are looking for is either the very last element in the list or it is not in the list at all. In both cases, you have to visit and compare every single one of the n elements to be certain.
Concept to Remember
Think of a linked list like a train with n carriages. 🚂 To find something in the last carriage, you have to walk through all the other carriages from the engine to the end. You can't just teleport to the last one!
7. The concatenation of two lists is to be performed in $O(1)$ time. Which of the following implementations of a list should be used? [ GATE CSE 1997 ]
A) Singly linked list
B) Doubly linked list
C) Circular doubly linked list
D) Array implementation of list
Answer & Explanation
Correct Answer: C) Circular doubly linked list
To achieve concatenation in constant time, $O(1)$, we must be able to access the beginning of the second list and the end of the first list without traversing them. Let's look at the options:
Array: Concatenating two arrays requires creating a new, larger array and copying all elements, making it an $O(m+n)$ operation.
Singly/Doubly Linked List: To join List2 to List1, you must find the last node of List1. This requires traversing List1, which takes $O(n)$ time, unless you maintain an extra pointer to the tail.
Circular Doubly Linked List: This structure is ideal. If you maintain a single pointer to the 'rear' node of each list, you can access both the head (rear->next) and the tail (rear) in $O(1)$ time. Concatenation is just a matter of rearranging a few pointers to join the tail of the first list with the head of the second, and the tail of the second with the head of the first. This takes a constant number of steps.
Concept to Remember
The key to O(1) concatenation is avoiding traversal. Any list implementation that keeps track of its last node (tail) can do this. A circular list is particularly elegant because a single pointer to the tail naturally gives you a direct link back to the head, making the pointer manipulations straightforward.
8. Which of the following statements is true?
I. As the number of entries in a hash table increases, the number of collisions increases.
II. Recursive programs are efficient.
III. The worst case complexity for Quicksort is $O(n^2)$.
IV. Binary search using a linear linked list is efficient.
A) I and II
B) II and III
C) I and IV
D) I and III
Answer & Explanation
Correct Answer: D) I and III
Let's break down each statement:
I. Hash Table Collisions (True): As you add more entries to a hash table, its "load factor" increases. With more items occupying the available slots, the probability that a new item will hash to an already occupied slot (a collision) rises significantly. 💥
II. Recursive Programs (False): While often elegant, recursive programs are generally not the most efficient. Each recursive call adds a new frame to the call stack, consuming memory and adding overhead. An iterative solution is often faster and more memory-efficient.
III. Quicksort Worst Case (True): The average-case complexity of Quicksort is an excellent $O(n \log n)$, but its worst-case complexity is $O(n^2)$. This occurs when the chosen pivot is consistently the smallest or largest element, leading to highly unbalanced partitions (e.g., when sorting an already sorted list).
IV. Binary Search on Linked List (False): Binary search is only efficient if you can access the middle element of a collection in constant time (O(1)), i.e., random access. Linked lists only allow sequential access (you must traverse from the beginning), which makes finding the middle an O(n) operation, defeating the purpose of binary search.
Key Concepts Tested
This question hits on several core CS fundamentals: the nature of hash tables, the trade-offs of recursion, the performance characteristics of sorting algorithms, and the importance of data structure access patterns (random vs. sequential).
9. The following C function takes a singly-linked list of integers as a parameter and rearranges the elements of the list. The function is called with the list containing the integers 1, 2, 3, 4, 5, 6, 7 in the given order. What will be the contents of the list after the function completes execution? [ GATE CSE 2008 ]
struct node {
    int value;
    struct node *next;
};

void rearrange(struct node *list) {
    struct node *p, *q;
    int temp;
    if (!list || !list->next) return;
    p = list;
    q = list->next;
    while (q) {
        temp = p->value;
        p->value = q->value;
        q->value = temp;
        p = q->next;
        q = p ? p->next : 0;
    }
}
A) 1,2,3,4,5,6,7
B) 2,1,4,3,6,5,7
C) 1,3,2,5,4,7,6
D) 2,3,4,5,6,7,1
Answer & Explanation
Correct Answer: B) 2,1,4,3,6,5,7
The function works by iterating through the list and swapping the values of adjacent pairs of nodes. It does not change the node pointers, only the `value` fields within the nodes.
Line-by-Line Code Explanation
struct node *p, *q;: Declares two node pointers, `p` and `q`, which will be used to traverse the list.
if (!list || !list->next) return;: A safety check. If the list is empty (`!list`) or has only one node (`!list->next`), there are no pairs to swap, so the function exits.
p = list; q = list->next;: Initializes the pointers. `p` points to the first node, and `q` points to the second node. This sets up the first pair.
while (q): The main loop. It continues as long as `q` is not `NULL`. This ensures we always have a valid pair to process.
temp = p->value; ... q->value = temp;: These three lines perform a classic swap of the integer values between the nodes `p` and `q`.
p = q->next;: `p` is moved forward two steps to the start of the next pair (i.e., it skips over the node it just swapped with).
q = p ? p->next : 0;: This line moves `q` forward two steps. It first checks if `p` is not `NULL`. If `p` is valid, `q` is moved to the node after `p`. If `p` is `NULL` (which happens at the end of a list with an odd number of elements), `q` is set to 0 (`NULL`), which terminates the `while` loop.
Execution Trace
Let's trace the execution with the input list: 1 → 2 → 3 → 4 → 5 → 6 → 7
Iteration 1:
`p` points to 1, `q` points to 2.
Values are swapped. List becomes: 2 → 1 → 3 → 4 → 5 → 6 → 7
`p` moves to node 3. `q` moves to node 4.
Iteration 2:
`p` points to 3, `q` points to 4.
Values are swapped. List becomes: 2 → 1 → 4 → 3 → 5 → 6 → 7
`p` moves to node 5. `q` moves to node 6.
Iteration 3:
`p` points to 5, `q` points to 6.
Values are swapped. List becomes: 2 → 1 → 4 → 3 → 6 → 5 → 7
`p` moves to node 7. `q` tries to move to `p->next`, which is `NULL`. So, `q` becomes `NULL`.
End of Loop: The `while(q)` condition is now false, and the loop terminates. The final node (7) is untouched as it had no pair.
10. The following C function takes a singly-linked list of integers as a parameter and rearranges the elements of the list. The function is called with the list containing the integers 1, 2, 3, 4, 5, 6, 7 in the given order. What will be the contents of the list after the function completes execution? [ GATE IT 2005 ]
struct node {
    int value;
    struct node *next;
};

void rearrange(struct node *list) {
    struct node *p, *q;
    int temp;
    if (!list || !list->next) return;
    p = list;
    q = list->next;
    while (q) {
        temp = p->value;
        p->value = q->value;
        q->value = temp;
        p = q->next;
        q = p ? p->next : 0;
    }
}
A) 1,2,3,4,5,6,7
B) 2,1,4,3,6,5,7
C) 1,3,2,5,4,7,6
D) 2,3,4,5,6,7,1
Answer & Explanation
Correct Answer: B) 2,1,4,3,6,5,7
The function works by iterating through the list and swapping the values of adjacent pairs of nodes. It's important to note that it only swaps the `value` data, not the nodes themselves. ↔️
Line-by-Line Code Explanation
struct node *p, *q;: Declares two node pointers, `p` and `q`, which will act as iterators to traverse the list.
if (!list || !list->next) return;: This is a base case. If the list is empty (`!list`) or has only one node (`!list->next`), there are no pairs to swap, so the function simply returns.
p = list; q = list->next;: This initializes the pointers. `p` points to the first node in a pair, and `q` points to the second.
while (q): The main loop continues as long as `q` is not `NULL`. This ensures that `p` always has a valid partner `q` to swap with.
temp = p->value; ... q->value = temp;: These three lines perform a standard swap of the integer values stored in the nodes pointed to by `p` and `q`.
p = q->next;: After swapping, `p` is advanced two positions to the start of the *next* pair. It jumps from its current position to the node right after `q`.
q = p ? p->next : 0;: This line also advances `q` two positions. The ternary operator `? :` checks if `p` became `NULL` in the previous step. If `p` is valid, `q` is set to the node after `p`. If `p` is `NULL` (which happens at the end of a list with an odd number of elements), `q` is set to 0 (`NULL`), which gracefully terminates the `while` loop.
Execution Trace
Let's trace the execution with the input list: 1 → 2 → 3 → 4 → 5 → 6 → 7
Initial State: `p` is at node 1, `q` is at node 2.
Iteration 1:
The values of node 1 and node 2 are swapped.
List becomes: 2 → 1 → 3 → 4 → 5 → 6 → 7
`p` advances to node 3. `q` advances to node 4.
Iteration 2:
The values of node 3 and node 4 are swapped.
List becomes: 2 → 1 → 4 → 3 → 5 → 6 → 7
`p` advances to node 5. `q` advances to node 6.
Iteration 3:
The values of node 5 and node 6 are swapped.
List becomes: 2 → 1 → 4 → 3 → 6 → 5 → 7
`p` advances to node 7. `q` attempts to advance to `p->next`, which is `NULL`. `q` becomes `NULL`.
End of Loop: The condition `while(q)` is now false, so the loop terminates. The last node (7) is left unchanged because it did not have a pair.
11. Suppose there are ⌈log n⌉ sorted lists of ⌊n/log n⌋ elements each. The time complexity of producing a sorted list of all these elements is: (Hint: Use a heap data structure.) [ GATE CSE 2005 ]
This problem describes merging multiple sorted lists, a task perfectly suited for a min-heap data structure. Here's the breakdown:
1. Identify the Key Variables
Number of lists (k): The problem states there are k = ceil(log n) lists.
Total elements (N): The total number of elements is N = (number of lists) * (elements per list) = ceil(log n) * floor(n/log n), which is approximately n.
2. The Min-Heap Algorithm
The standard algorithm for k-way merging is as follows:
Create a min-heap of size 'k'.
Insert the first element from each of the 'k' sorted lists into the heap. The time to build this initial heap is O(k log k).
Repeat 'N' times:
Extract the smallest element from the heap. This operation costs O(log k).
Add this element to the final sorted list.
Take the next element from the same list the smallest element came from and insert it into the heap. This also costs O(log k).
3. Calculate the Total Time Complexity
The total time is the sum of building the heap plus the N extract/insert operations.
Total Time = (Time to build heap) + N * (Time per operation)
Total Time = O(k log k) + N * O(log k)
Since N is much larger than k, the dominant term is N * O(log k). So, the complexity is O(N log k).
4. Substitute the Values
Now, we substitute our variables back into the formula O(N log k):
Replace N with n.
Replace k with log n.
Complexity = O(n log (log n)).
Concept to Remember
The time complexity to merge 'k' sorted lists with a total of 'N' elements using a min-heap is always O(N log k). The key to solving these problems is to correctly identify 'N' and 'k' from the problem description.
12. Let P be a singly linked list. Let Q be the pointer to an intermediate node x in the list. What is the worst-case time complexity of the best known algorithm to delete the node x from the list? [ GATE IT 2004 ]
A) $O(n)$
B) $O(\log^2 n)$
C) $O(\log n)$
D) $O(1)$
Answer & Explanation
Correct Answer: A) $O(n)$
In a singly linked list, to delete a node, you must modify the `next` pointer of the previous node to bypass the node being deleted. The problem is that we only have a pointer `Q` to the node `x` itself, not to its predecessor. 😥
Why not O(1)?
There is a common trick that works in O(1) for *intermediate* nodes: copy the data from the next node (`x->next`) into `x`, and then delete the next node. However, this trick fails in the worst-case scenario: when `x` is the last node in the list. In that case, there is no `next` node to copy from.
The Worst-Case Scenario
When the node to be deleted (`x`) is the last node, the only way to delete it is to find its predecessor and set that node's `next` pointer to `NULL`. To find the predecessor, you have no choice but to start from the head of the list (`P`) and traverse all the way to the end. This traversal takes a number of steps proportional to the length of the list, `n`.
Concept to Remember
The "single" in "singly linked list" is the key. You can only move forward. ➡️ To modify a link, you need a handle on the node *before* that link. Without a pointer to the previous node, you are forced to start from the beginning and search, which is an O(n) operation in the worst case.
13. Consider the following C function that operates on a singly linked list. What is the output when the function is called with the list `1→2→3→4→5→6→7→NULL` and `k = 3`?
struct node {
    int data;
    struct node* next;
};

struct node* reverseK(struct node* head, int k) {
    if (!head)
        return NULL;
    struct node* current = head;
    struct node* next = NULL;
    struct node* prev = NULL;
    int count = 0;
    // Reverse the first k nodes of the linked list
    while (current != NULL && count < k) {
        next = current->next;
        current->next = prev;
        prev = current;
        current = next;
        count++;
    }
    // 'next' now points to the (k+1)th node.
    // Recursively call for the list starting from 'next'.
    if (next != NULL) {
        head->next = reverseK(next, k);
    }
    // 'prev' is the new head of the input list.
    return prev;
}
A) 7→6→5→4→3→2→1→NULL
B) 3→2→1→6→5→4→7→NULL
C) 3→2→1→4→5→6→7→NULL
D) 1→2→3→4→5→6→7→NULL
Answer & Explanation
Correct Answer: B) 3→2→1→6→5→4→7→NULL
The function `reverseK` reverses a linked list in groups of `k` nodes. It uses a combination of iteration (to reverse one group) and recursion (to handle subsequent groups).
Code's Logic Explained
The `while` loop: This part iteratively reverses the first `k` nodes. At the end of the loop, `prev` points to the new head of this reversed group (e.g., node 3 in the first call), and `head` (the original head, e.g., node 1) is now the tail of this group.
The recursive call `if (next != NULL)`: After reversing one group, the original head's `next` pointer (e.g., `1->next`) is set to the result of the recursive call on the rest of the list (which starts at the `(k+1)`th node).
`return prev;`: The function returns `prev`, which is the head of the now-reversed sublist.
Execution Trace (List: 1→2→3→4→5→6→7, k=3)
Initial Call: `reverseK(head=1, k=3)`
The `while` loop runs 3 times, reversing `1→2→3` into `3→2→1`.
After the loop: `prev` is 3, `current` is 4, and the original `head` (node 1) points to `NULL`.
A recursive call is made: `reverseK(head=4, k=3)`. The original `head` (node 1) will have its `next` pointer set to the result of this call.
Recursive Call 1: `reverseK(head=4, k=3)`
The `while` loop runs 3 times, reversing `4→5→6` into `6→5→4`.
After the loop: `prev` is 6, `current` is 7, and the original `head` (node 4) points to `NULL`.
A recursive call is made: `reverseK(head=7, k=3)`. The original `head` (node 4) will have its `next` pointer set to the result of this call.
Recursive Call 2: `reverseK(head=7, k=3)`. The `while` loop runs only once, as `current` becomes `NULL` after processing node 7. The "group" `7` is reversed to just `7`.
`prev` becomes 7. There is no `next`, so no more recursive calls.
This call returns `7`.
Returning and Linking:
Call 1 receives `7` as the result. It links its tail (node 4) to 7. So `4→next = 7`. It returns its new head, `6`. The list is now `6→5→4→7`.
The Initial Call receives `6→5→4→7` as the result. It links its tail (node 1) to 6. So `1→next = 6`. It returns its new head, `3`.
Final List: 3→2→1→6→5→4→7→NULL
Concept to Remember
This is a "divide and conquer" approach. The problem is broken down: "Reverse the first k nodes, and then recursively handle the rest of the list." The key is to correctly link the tail of the current reversed group to the head of the next reversed group returned by the recursive call. 🔗
14. Suppose each set is represented as a linked list with elements in arbitrary order. Which of the operations among union, intersection, membership, and cardinality will be the slowest? [ GATE CSE 2004 ]
A) union only
B) intersection, membership
C) membership, cardinality
D) union, intersection
Answer & Explanation
Correct Answer: D) union, intersection
Let's analyze the time complexity for each operation, assuming the two lists have lengths $m$ and $n$. Since the lists are in an arbitrary (unsorted) order, we can't use any shortcuts.
Analysis of Operations
Membership: To check if an element exists in a list of size $m$, you must, in the worst case, traverse the entire list.
Complexity: $O(m)$
Cardinality: To find the number of elements in a list of size $m$, you must traverse the entire list to count each node.
Complexity: $O(m)$
Intersection: To find the intersection, you must take each of the $m$ elements from the first list and, for each one, search the entire second list (of size $n$) to see if it exists.
Complexity: $O(m \times n)$
Union: A common way to perform union is to first copy the first list (an $O(m)$ operation). Then, for each of the $n$ elements in the second list, you must check if it's already in the new list (which has at least $m$ elements) before adding it. This check takes $O(m)$ for each of the $n$ elements.
Complexity: $O(m \times n)$
As you can see, the complexities for union and intersection are quadratic ($O(m \times n)$), which is significantly slower than the linear ($O(n)$) complexities for membership and cardinality.
Concept to Remember
Operations that require comparing every element of one unsorted list against every element of another will typically have a quadratic ($O(m \times n)$) complexity. Operations on a single list are usually linear ($O(n)$). 📜
15. The following steps on a linked list:
p = getnode()
info(p) = 10
next(p) = list
list = p
result in which type of operation? [ ISRO CSE 2013 ]
A) Pop operation in stack
B) Removal of a node
C) Inserting a node
D) Modifying an existing node
Answer & Explanation
Correct Answer: C) Inserting a node
This sequence of operations describes the classic algorithm for inserting a new node at the beginning of a singly linked list.
Line-by-Line Breakdown:
p = getnode(): A new, empty node is allocated in memory, and the pointer `p` is set to point to it.
info(p) = 10: The data (or `info`) field of this new node is assigned the value 10.
next(p) = list: The new node's `next` pointer is set to point to the current head of the list (which is stored in the `list` variable). This links the new node to the rest of the existing list.
list = p: The main `list` pointer (which always points to the head of the list) is updated to point to the new node `p`, making it the new official start of the list.
Concept to Remember
This specific operation (inserting at the front) is also how a push operation is implemented for a stack using a linked list. Option A says "Pop," which is the removal operation, making it incorrect. Option C is the most accurate and fundamental description.
16. The following C function takes a singly-linked list as input argument. It modifies the list by moving the last element to the front of the list and returns the modified list. Some part of the code is left blank.
typedef struct node {
    int value;
    struct node *next;
} Node;

Node *move_to_front(Node *head) {
    Node *p, *q;
    if ((head == NULL) || (head->next == NULL)) return head;
    q = NULL; p = head;
    while (p->next != NULL) {
        q = p;
        p = p->next;
    }
    _______________________________
    return head;
}
Choose the correct alternative to replace the blank line. [ GATE CSE 2010 ]
A) q = NULL; p->next = head; head = p;
B) q->next = NULL; head = p; p->next = head;
C) head = p; p->next = q; q->next = NULL;
D) q->next = NULL; p->next = head; head = p;
Let's analyze the state of the pointers just before the blank line, after the `while` loop has finished:
The loop while (p->next != NULL) runs until p is pointing to the last node.
The pointer q always points to the node *before* p. Therefore, when the loop ends, q is pointing to the second-to-last node.
Goal: Move the last node (p) to the front of the list.
We need to perform three steps in order:
q->next = NULL;: This is crucial. The second-to-last node (q) must become the new tail of the list. We do this by setting its next pointer to NULL, disconnecting the original last node.
p->next = head;: The node we're moving (p) must now point to the current head of the list. This links the old tail to the old head.
head = p;: Finally, we update the main head pointer to point to p, as p is now the new first element of the modified list.
The only option that performs these three steps in the correct order is D.
Pointer Manipulation
When rearranging linked lists, it's helpful to think in terms of "breaking" old links and "making" new ones. Here, we break the link from `q` to `p` and make two new links: `p` to `head`, and the main `head` pointer to `p`.
17. The minimum number of fields with each node of doubly linked list is: [ ISRO CSE 2008 ]
A) 1
B) 2
C) 3
D) 4
Answer & Explanation
Correct Answer: C) 3
A node in a doubly linked list must store three essential pieces of information:
The data (or `value`, `info`) itself.
A pointer to the next node in the list.
A pointer to the previous node in the list.
This is in contrast to a singly linked list, which only requires two fields (data and a `next` pointer).
Concept to Remember
Think "doubly" = two pointers (one forward, one backward). Plus, you always need a field for the actual data. So, 2 (pointers) + 1 (data) = 3 fields minimum. ↔️
18. Which of the following operations is performed more efficiently by doubly linked list than by linear linked list? [ ISRO CSE 2008 ]
A) Deleting a node whose location is given
B) Searching an unsorted list for a given item
C) Inserting a node after the node with a given location
D) Traversing the list to process each node
Answer & Explanation
Correct Answer: A) Deleting a node whose location is given
Let's analyze the time complexities for each operation when you are given a pointer to the node (let's call it `x`).
Analysis of Operations
A) Deleting a node (given pointer `x`):
Singly Linked List: To delete `x`, you must modify the `next` pointer of its predecessor (prev->next = x->next). Since a singly linked list cannot go backward, you must traverse from the `head` to find this predecessor. This is an $O(n)$ operation in the worst case.
Doubly Linked List: You can access the predecessor directly using x->prev. You can perform the deletion in constant time: x->prev->next = x->next; and x->next->prev = x->prev;. This is an $O(1)$ operation.
B) Searching an unsorted list:
Both list types must start from the `head` and check each node sequentially. Both are $O(n)$.
C) Inserting a node *after* `x`:
Singly Linked List: You set newNode->next = x->next; and x->next = newNode;. This is $O(1)$.
Doubly Linked List: This is also $O(1)$ (it just involves more pointer updates: newNode->next = x->next;, newNode->prev = x;, x->next->prev = newNode;, x->next = newNode;).
D) Traversing the list:
Both list types start at the `head` and follow the `next` pointers. Both are $O(n)$.
Clearly, deletion is the operation that gains the most significant efficiency from a doubly linked list.
Concept to Remember
The "double" in doubly linked list refers to its two pointers (next and prev). This allows for backward traversal and, most importantly, $O(1)$ access to the previous node, making operations like deletion significantly faster. ⏪
19. The time required to search an element in a linked list of length n is: [ ISRO CSE 2008 ]
A) $O(\log_2 n)$
B) $O(n)$
C) $O(1)$
D) $O(n^2)$
Answer & Explanation
Correct Answer: B) $O(n)$
A linked list is a sequential access data structure. This means that to find an element, you must start at the beginning (the `head`) and follow the `next` pointers one by one, checking each node along the way. ⛓️
In the worst-case scenario, the element you are looking for is either the very last element in the list or not in the list at all. In either case, you have to traverse and check all $n$ nodes.
$O(\log_2 n)$ would apply to structures that support binary search, like a sorted array or a balanced binary search tree.
$O(1)$ (constant time) would apply if you could access the element directly, like accessing an array element by its index (e.g., `arr[5]`).
Concept to Remember
Think of a linked list as a treasure hunt. You start at clue 1, which leads you to clue 2, and so on. To find the $n$-th clue, you have no choice but to follow all $n-1$ clues before it.
20. Consider the C code fragment given below.
typedef struct node {
    int data;
    node* next;
} node;
void join(node* m, node* n) {
    node* p = n;
    while (p->next != NULL) {
        p = p->next;
    }
    p->next = m;
}
Assuming that m and n point to valid NULL-terminated linked lists, invocation of join will: [ GATE CSE 2017 SET-1 ]
A) append list m to the end of list n for all inputs.
B) either cause a null pointer dereference or append list m to the end of list n.
C) cause a null pointer dereference for all inputs.
D) append list n to the end of list m for all inputs.
Answer & Explanation
Correct Answer: B) either cause a null pointer dereference or append list m to the end of list n.
This question highlights a critical edge case: an empty list (represented by a NULL pointer).
Case 1: List `n` is NOT empty
If `n` is not empty, it has at least one node.
node* p=n;: `p` is initialized to point to the head of list `n`.
while (p->next != NULL): The loop traverses the list, stopping when `p` points to the last node (the node whose `next` is `NULL`).
p->next = m;: The next pointer of the last node of `n` is set to point to the head of list `m`, correctly appending `m` to `n`.
Case 2: List `n` IS empty (i.e., n = NULL)
A "valid NULL-terminated linked list" can be an empty list.
node* p=n;: `p` is initialized to NULL.
while (p->next != NULL): The code immediately tries to evaluate p->next (which is (NULL)->next).
This is a null pointer dereference and will cause the program to crash. 💥
Because the function works for non-empty lists but crashes for empty lists, option B is the only one that describes this behavior.
Concept to Remember
Always test your list algorithms against the "empty list" (NULL head) and "single-node list" cases. This code is missing a check like if (n == NULL) before the loop.
21. N items are stored in a sorted doubly linked list. For a delete operation, a pointer is provided to the record to be deleted. For a decrease-key operation, a pointer is provided to the record on which the operation is to be performed.
An algorithm performs the following operations on the list in this order: $\Theta(N)$ delete, $O(\log N)$ insert, $O(\log N)$ find, and $\Theta(N)$ decrease-key. What is the time complexity of all these operations put together? [ GATE CSE 2016 SET-2 ]
A) $O(\log^2 N)$
B) $O(N)$
C) $O(N^2)$
D) $\Theta(N^2 \log N)$
Answer & Explanation
Correct Answer: C) O(N^2)
To find the total complexity, we must first find the cost of each individual operation, and then multiply it by the number of times it's performed.
Cost Analysis of Each Operation
delete (with pointer): A pointer to the node is given. In a doubly linked list, we can access the prev and next nodes in $O(1)$ time to remove the element.
Cost per operation: $O(1)$
insert: The list is sorted. To insert an element, we must first find its correct position. Since a linked list is a sequential access structure, we must traverse the list.
Cost per operation: $O(N)$
find: Just like insertion, finding an element requires traversing the list from the beginning.
Cost per operation: $O(N)$
decrease-key (with pointer): A pointer is given, so we can change the value in $O(1)$. However, the list must remain sorted. After decreasing the key, the node might be in the wrong position. We must move it backward to its new correct spot. This requires:
Removing the node ( $O(1)$ with the pointer).
Finding its new correct position ( $O(N)$ traversal).
Inserting the node ( $O(1)$ once position is found).
The bottleneck is finding the new position.
Cost per operation: $O(N)$
Calculating Total Complexity
Now, we multiply the cost per operation by the number of times it's executed:
delete: $\Theta(N) \times O(1) = \Theta(N)$
insert: $O(\log N) \times O(N) = O(N \log N)$
find: $O(\log N) \times O(N) = O(N \log N)$
decrease-key: $\Theta(N) \times O(N) = \Theta(N^2)$
The dominant term in this sum is $\Theta(N^2)$. Therefore, the total time complexity of all operations put together is $O(N^2)$.
Concept to Remember
Don't be fooled by "sorted" or "doubly linked"! 🎯 For any linked list, operations that require searching (like find, insert in a sorted list, or decrease-key) are $O(N)$ because you cannot perform a binary search. The $O(N^2)$ complexity comes from performing $N$ of these $O(N)$ operations.
22. An unordered list contains $n$ distinct elements. The number of comparisons to find an element in this list that is neither maximum nor minimum is: [ GATE CSE 2015 SET-2 ]
A) $\Theta(n \log n)$
B) $\Theta(n)$
C) $\Theta(\log n)$
D) $\Theta(1)$
Answer & Explanation
Correct Answer: D) $\Theta(1)$
This problem can be solved in constant time, regardless of the size of the list (assuming $n \ge 3$).
The Algorithm:
Take any three distinct elements from the list. For simplicity, let's pick the first three: `e1`, `e2`, and `e3`.
Perform a small number of comparisons (at most 3) to find the median of these three elements.
Compare `e1` and `e2`.
Compare `e2` and `e3`.
Compare `e1` and `e3`.
The element that is the median of this group of three (let's call it `m`) is guaranteed to be neither the maximum nor the minimum of the *entire* list.
Why does this work?
Since the elements are distinct, the median `m` is strictly greater than one of the other two elements (let's call it `min_local`) and strictly less than the other (let's call it `max_local`).
Because `min_local` exists and `min_local < m`, `m` cannot be the minimum of the entire list. 📉
Because `max_local` exists and `max_local > m`, `m` cannot be the maximum of the entire list. 📈
Since we only need to look at three elements and perform a constant number of comparisons (at most 3) to find our answer, the time complexity is constant, or $\Theta(1)$.
Concept to Remember
The "trick" is that the problem doesn't ask you to find the *global* max or min; it just asks for *any* element that isn't one. By sampling just three elements, the median of that sample is a guaranteed valid answer. 🎯
23. An algorithm performs $(\log N)^{1/2}$ find operations, $N$ insert operations, $(\log N)^{1/2}$ delete operations, and $(\log N)^{1/2}$ decrease-key operations on a set of data items with keys drawn from a linearly ordered set. For a delete operation, a pointer is provided to the record that must be deleted. For the decrease-key operation, a pointer is provided to the record that has its key decreased.
Which one of the following data structures is the most suited for the algorithm to use, if the goal is to achieve the best total asymptotic complexity considering all the operations?
A) Unsorted array
B) Min-heap
C) Sorted array
D) Sorted doubly linked list
Answer & Explanation
Correct Answer: A) Unsorted array
To find the best data structure, we must analyze the total time complexity for all operations for each option. The total time is the sum of (Number of Operations) $\times$ (Cost per Operation).
Let's analyze the cost per operation for each data structure:
| Operation | Unsorted Array | Min-Heap | Sorted Array | Sorted DLL |
| --- | --- | --- | --- | --- |
| Find | $O(N)$ | $O(N)$ | $O(\log N)$ | $O(N)$ |
| Insert | $O(1)$ | $O(\log N)$ | $O(N)$ | $O(N)$ |
| Delete (with ptr) | $O(1)$ | $O(\log N)$ | $O(N)$ | $O(1)$ |
| Decrease-key (with ptr) | $O(1)$ | $O(\log N)$ | $O(N)$ | $O(N)$ |
Now, let's calculate the total cost for each data structure by multiplying by the number of operations:
A) Unsorted Array:
Find: $(\log N)^{1/2} \times O(N) = O(N\sqrt{\log N})$
Insert: $N \times O(1) = O(N)$
Delete: $(\log N)^{1/2} \times O(1) = O(\sqrt{\log N})$
Decrease-key: $(\log N)^{1/2} \times O(1) = O(\sqrt{\log N})$
Total: $O(N\sqrt{\log N})$. For the Min-heap, the $N$ insert operations alone cost $O(N \log N)$, and for the sorted structures they cost $O(N^2)$. Since $\sqrt{\log N}$ is asymptotically smaller than $\log N$, the best total complexity, $O(N\sqrt{\log N})$, is achieved by the Unsorted array.
Concept to Remember
The dominant operation in this problem is the $N$ insertions. An unsorted array has $O(1)$ insertion, which is far better than the $O(\log N)$ for a heap or $O(N)$ for sorted structures. This $O(1)$ advantage on the most frequent operation outweighs its $O(N)$ cost on the infrequent `find` operation.
24. Consider a singly linked list where F and L are pointers to the first and last elements, respectively. The time for performing which of the given operations depends on the length of the linked list?
A) Delete the first element of the list
B) Interchange the first two elements of the list
C) Delete the last element of the list
D) Add an element at the end of the list
Answer & Explanation
Correct Answer: C) Delete the last element of the list
Let's analyze the time complexity of each operation, given we have a pointer to the first node (F) and the last node (L) in a singly linked list.
A) Delete the first element: This is an $O(1)$ operation. We just need to set F = F->next.
B) Interchange the first two elements: This is an $O(1)$ operation. It involves manipulating the pointers of the first two nodes, which we can access via F and F->next.
D) Add an element at the end of the list: This is an $O(1)$ operation *because* we have the L (last element) pointer. We set L->next = newNode; and then update L = newNode;.
C) Delete the last element of the list: This is an $O(n)$ operation. Although we have a pointer L to the last element, we cannot delete it directly. To delete it, we must set the next pointer of the second-to-last node to NULL. In a singly linked list, we cannot go backward from L. Therefore, we must start from the head (F) and traverse the entire list to find the node whose next pointer points to L. This traversal depends on the length of the list, $n$.
Concept to Remember
The "single" in "singly linked list" is the key. You can only move forward. ➡️ To delete a node, you always need access to its *previous* node. For the last element, getting the previous node requires an $O(n)$ traversal.
25. What is the worst-case time complexity of inserting n elements into an empty linked list, if the linked list needs to be maintained in sorted order? [ GATE CSE 2020 ]
A) Θ(n)
B) Θ(n log n)
C) Θ(n²)
D) Θ(1)
Answer & Explanation
Correct Answer: C) Θ(n²)
This scenario is operationally identical to the Insertion Sort algorithm.
Here's the analysis:
To maintain a sorted list, every time you insert a new element, you must first traverse the list to find its correct position.
Cost of Each Insertion (Worst Case):
The worst case occurs when the elements arrive in increasing sorted order (e.g., 1, 2, 3, 4, 5). Each new element belongs at the end of the list built so far, so every insertion requires a full traversal before the element can be linked in. (Note that reverse-sorted input is actually the best case for a linked list: each element goes to the front in O(1).)
Insert 1st element: O(1) (into an empty list)
Insert 2nd element: Must compare with 1 element. O(1)
Insert 3rd element: Must compare with 2 elements. O(2)
Insert 4th element: Must compare with 3 elements. O(3)
...
Insert n-th element: Must compare with n-1 elements. O(n-1)
Total Time
The total time is the sum of these costs:
Total Cost = O(1 + 1 + 2 + 3 + ... + (n-1))
This is an arithmetic series that sums to (n · (n-1)) / 2, which simplifies to (n² − n) / 2.
In asymptotic notation, this is Θ(n²).
Concept to Remember
Building a sorted linked list by inserting one element at a time is just another way of describing Insertion Sort. The worst case for Insertion Sort is Θ(n²).
26. Each node of a doubly linked list contains the fields Fwd and Bwd, where Fwd and Bwd represent the forward and backward links to the adjacent elements of the list. Which of the following segments of code deletes the node pointed to by X from the doubly linked list, if it is assumed that X points to neither the first nor the last node of the list? [ ISRO CSE 2018 ]
To delete the node `X`, we must "bypass" it by linking its neighbors to each other. We are given that `X` is not the first or last node, so `X->Bwd` (the previous node) and `X->Fwd` (the next node) are both valid.
We need to perform two pointer re-assignments:
Update the previous node's `Fwd` pointer:
The node *before* `X` is X->Bwd. Its `Fwd` pointer, which currently points to `X`, must be updated to point to the node *after* `X` (which is X->Fwd).
The code for this is: X->Bwd->Fwd = X->Fwd;
Update the next node's `Bwd` pointer:
The node *after* `X` is X->Fwd. Its `Bwd` pointer, which currently points to `X`, must be updated to point to the node *before* `X` (which is X->Bwd).
The code for this is: X->Fwd->Bwd = X->Bwd;
Option A (X→Bwd→Fwd = X→Fwd; X→Fwd→Bwd = X→Bwd;) correctly implements both of these steps, assuming `→` is used to represent the `->` (pointer access) operator in C.
Concept to Remember
Think of it as "sewing up the gap" 🔗. You take the previous node's "forward" link and connect it to the next node. Then, you take the next node's "backward" link and connect it to the previous node. This leaves `X` completely unlinked from the list.
27. In a doubly linked list the number of pointers affected for an insertion operation will be: [ ISRO CSE 2017 ]
A) 4
B) 0
C) 1
D) Depends on the nodes of doubly linked list
Answer & Explanation
Correct Answer: A) 4
This is a fundamental property of doubly linked lists. When you insert a new node (`newNode`) between two existing nodes (`prevNode` and `nextNode`), you must update four pointers to maintain the list's integrity.
Let's look at the four pointers that are assigned:
newNode->next = nextNode; The new node's "forward" pointer must point to the node that comes after it.
newNode->prev = prevNode; The new node's "backward" pointer must point to the node that comes before it.
prevNode->next = newNode; The previous node's "forward" pointer, which used to point to `nextNode`, must now point to the new node.
nextNode->prev = newNode; The next node's "backward" pointer, which used to point to `prevNode`, must now point to the new node.
Even edge cases like inserting at the head or tail involve a similar number of pointer assignments (e.g., setting a pointer to `NULL`, updating the `head` pointer, etc.). The general case, however, clearly affects 4 pointers.
Concept to Remember
Doubly linked lists mean "double the pointers" to manage. An insertion has to update the links for the new node itself (forward and backward) and for its new neighbors (their forward and backward links). 🔗
28. Given two statements: [ ISRO CSE 2017 ]
I. Insertion of an element should be done at the last node of the circular list.
II. Deletion of an element should be done at the last node of the circular list.
A) Both are true
B) Both are false
C) First is false and second is true
D) None of the above
Answer & Explanation
Correct Answer: B) Both are false
Both statements are false because they imply a restriction that does not exist. The primary advantage of a linked list is the flexibility to perform operations at any point.
Statement I (Insertion): Insertion can be performed at *any* location in a circular linked list, including the front, the end, or any intermediate position, simply by rearranging pointers. There is no rule that it "should" be done at the last node.
Statement II (Deletion): Similarly, deletion can be performed at any node. In fact, in the most common application of a circular list—implementing a Queue with a single `rear` pointer—deletion (dequeue) is explicitly done at the front of the list (rear->next), not the last node.
Concept to Remember
A key feature of linked lists (circular or not) is the ability to efficiently insert and delete nodes at *any* position, provided you have a pointer to the relevant node(s). There is no "should" or "must" that restricts these operations to the end of the list. 🎯
29. Consider an unordered list of N distinct integers.
What is the minimum number of element comparisons required to find an integer in the list that is NOT the largest in the list? [ GATE CSE 2025 SET-2 ]
A) 1
B) N-1
C) N
D) 2N-1
Answer & Explanation
Correct Answer: A) 1
The problem asks for the *minimum* number of comparisons to find *any* element that is not the maximum. We do not need to find the actual maximum element.
Here is the simple algorithm:
Pick any two distinct elements from the list, let's call them e1 and e2.
Perform one comparison between them.
Since all elements are distinct, one of two outcomes must occur:
Case 1: e1 < e2. In this case, e1 is smaller than e2, so e1 cannot be the largest element in the entire list. We have found our answer: e1.
Case 2: e2 < e1. In this case, e2 is smaller than e1, so e2 cannot be the largest element in the entire list. We have found our answer: e2.
In either case, we can successfully identify an element that is not the largest with just a single comparison.
Concept to Remember
Don't confuse this with finding the *actual* maximum element, which requires $N-1$ comparisons. This question has a simpler goal: just find *any* element that isn't the max. The smaller of any two elements is a guaranteed correct answer. 🎯
Q29. What does the following function do for a given Linked List with first node as head?
A) Prints all nodes of linked list
B) Prints all nodes of linked list in reverse order
C) Prints alternate nodes of Linked List
D) Prints alternate nodes in reverse order
Answer & Explanation
Correct Answer: B) Prints all nodes of linked list in reverse order
This is a classic example of using recursion to reverse an operation. The function repeatedly calls itself (`fun1(head->next)`) until it reaches the end of the list (the base case `head == NULL`).
Only after the recursive call returns does the printf statement execute. This means the printf for the last node runs first, then the second-to-last, and so on, back to the head. This "prints on the way back" behavior results in a reversed output. ↩️
Q30. Which of the following points is/are true about Linked List data structure when it is compared with array?
A) Arrays have better cache locality that can make them better in terms of performance.
B) It is easy to insert and delete elements in Linked List
C) Random access is not allowed in a typical implementation of Linked Lists
D) The size of array has to be pre-decided, linked lists can change their size any time.
E) All of the above
Answer & Explanation
Correct Answer: E) All of the above
All four statements correctly identify the fundamental trade-offs between arrays and linked lists:
(A) Cache Locality: Array elements are stored in contiguous memory, which is very friendly to the CPU cache. Linked list nodes can be scattered all over memory, leading to more cache misses.
(B) Insertion/Deletion: Linked lists excel at this. You only need to rearrange pointers, an O(1) operation (if you have the location). Arrays require shifting elements, which is an O(n) operation.
(C) Random Access: Arrays allow O(1) random access (e.g., `arr[5]`). Linked lists require O(n) sequential traversal to reach the i-th element.
(D) Dynamic Size: Linked lists use dynamic memory allocation to grow and shrink on demand. Arrays have a fixed, pre-allocated size.
Q31. Consider the following function that takes reference to head of a Doubly Linked List as parameter. Assume that reference of head of following doubly linked list is passed to above function: 1 <--> 2 <--> 3 <--> 4 <--> 5 <-->6. What should be the modified linked list after the function call?
This function reverses a doubly linked list. Let's trace the loop:
temp = current->prev;: Store the old `prev` pointer.
current->prev = current->next;: Set the new `prev` pointer to be the old `next` pointer.
current->next = temp;: Set the new `next` pointer to be the old `prev` pointer.
These three steps effectively swap the `prev` and `next` pointers for the current node. current = current->prev; then moves to the next node in the *newly reversed* list. Finally, the head pointer is updated to point to the new head (which was the old tail), so the modified list is 6 <--> 5 <--> 4 <--> 3 <--> 2 <--> 1.
Q32. Which of the following sorting algorithms can be used to sort a random linked list with minimum time complexity?
A) Insertion Sort
B) Quick Sort
C) Heap Sort
D) Merge Sort
Answer & Explanation
Correct Answer: D) Merge Sort
Merge Sort is the preferred choice for linked lists. Here's why:
Merge Sort (O(n log n)): Works by splitting the list into halves (which can be done in O(n) by traversing) and merging sorted lists (an efficient O(n) operation). It doesn't rely on random access.
Quick Sort (Poor): Relies heavily on random access for its partitioning step. On a linked list, this is inefficient.
Heap Sort (Impossible): Relies on an array-based heap structure that requires O(1) random access to find parent/child nodes.
Insertion Sort (O(n²)): Can be implemented, but its worst case is O(n²), which is slower than Merge Sort.
Q33. The following function reverse() is supposed to reverse a singly linked list. There is one line missing at the end of the function. What should be added?
struct node {
int data;
struct node* next;
};
static void reverse(struct node** head_ref) {
struct node* prev = NULL;
struct node* current = *head_ref;
struct node* next;
while (current != NULL)
{
next = current->next;
current->next = prev;
prev = current;
current = next;
}
/*ADD A STATEMENT HERE*/
}
After the while loop terminates, the pointers are in this state:
current is NULL (it's what broke the loop).
next is NULL (from the last iteration).
prev is pointing to what *was* the last node, which is now the first node of the reversed list.
The function takes a double pointer head_ref so it can modify the original head pointer from main(). To complete the reversal, we must update that original head pointer to point to the new head, which is prev. Therefore, *head_ref = prev; is required.
Q34. What is the output of following function in which start is pointing to the first node of the following linked list 1->2->3->4->5->6 ?
This function has two printf statements separated by a recursive call. The first printf executes on the way "down" the recursion, and the second executes on the way "back up."
The recursive call skips every other node (start->next->next).
Trace:
fun(1) prints 1. Calls fun(3).
fun(3) prints 3. Calls fun(5).
fun(5) prints 5. The recursion goes no deeper: start->next->next is NULL here, so the next call returns immediately.
fun(5) prints 5 again. Returns.
fun(3) prints 3 again. Returns.
fun(1) prints 1 again. Returns.
Final output: 1 3 5 5 3 1
35. Let SLLdel be a function that deletes a node in a singly-linked list given a pointer to the node and a pointer to the head of the list. Similarly, let DLLdel be another function that deletes a node in a doubly-linked list given a pointer to the node and a pointer to the head of the list.
Let $n$ denote the number of nodes in each of the linked lists. Which one of the following choices is TRUE about the worst-case time complexity of SLLdel and DLLdel?
A) SLLdel is $O(1)$ and DLLdel is $O(n)$
B) Both SLLdel and DLLdel are $O(\log n)$
C) Both SLLdel and DLLdel are $O(1)$
D) SLLdel is $O(n)$ and DLLdel is $O(1)$
Answer & Explanation
Correct Answer: D) SLLdel is $O(n)$ and DLLdel is $O(1)$
Here’s a breakdown of the complexities for each operation:
SLLdel (Singly Linked List) - $O(n)$
To delete a node `x` from a singly linked list, you must change the `next` pointer of the node *before* it (its predecessor). Even though you have a pointer to `x`, you cannot access its predecessor directly in $O(1)$ time. In the worst case, you must traverse the list from the `head` to find the node `p` such that `p->next` points to `x`. This traversal takes $O(n)$ time.
DLLdel (Doubly Linked List) - $O(1)$
To delete a node `x` from a doubly linked list, you also need to update its predecessor. However, because it's a *doubly* linked list, you can access the predecessor in $O(1)$ time using the `x->prev` pointer. The deletion involves two pointer assignments:
1. x->prev->next = x->next;
2. x->next->prev = x->prev;
Both of these operations are constant time, making the entire deletion $O(1)$.
Concept to Remember
The "double" in doubly linked list refers to its two pointers (next and prev). This prev pointer is what gives it the $O(1)$ advantage for deletion when the node's location is already known. ⏪
36. Consider the following ANSI C program: [ GATE CSE 2021 SET-2 ]
#include <stdio.h>
#include <stdlib.h>
struct Node {
int value;
struct Node *next;
};
int main() {
struct Node *boxE, *head, *boxN;
int index = 0;
boxE = head = (struct Node *)malloc(sizeof(struct Node));
head->value = index;
for (index = 1; index <= 3; index++) {
boxN = (struct Node *)malloc(sizeof(struct Node));
boxE->next = boxN;
boxN->value = index;
boxE = boxN;
}
for (index = 0; index <= 3; index++) {
printf("Value at index %d is %d\n", index, head->value);
head = head->next;
printf("Value at index %d is %d\n", index + 1, head->value);
}
}
Which one of the following statements below is correct about the program?
A) Upon execution, the program creates a linked-list of five nodes
B) Upon execution, the program goes into an infinite loop
C) It has a missing return which will be reported as an error by the compiler
D) It dereferences an uninitialized pointer that may result in a run-time error
Answer & Explanation
Correct Answer: D) It dereferences an uninitialized pointer that may result in a run-time error
Let's analyze the code to see why.
List Creation
A `head` node is created with `value = 0`.
The `for` loop runs for `index = 1`, `index = 2`, and `index = 3`, creating three more nodes.
This means a total of 4 nodes are created (with values 0, 1, 2, and 3). This makes option A incorrect.
Crucially: The `next` pointer of the very last node (the one with `value = 3`) is never assigned. It is left uninitialized and contains a garbage value from `malloc`.
List Traversal
The second `for` loop attempts to print the values. Let's trace its final iteration.
...
When index = 2:
The loop prints `head->value` (which is 2).
head advances to the next node (value 3).
The loop prints `head->value` (which is 3).
When index = 3:
The loop prints `head->value` (which is 3).
head = head->next; executes. This sets head to the uninitialized next pointer of the last node. head now holds a garbage address.
printf("... %d\n", head->value); This line attempts to access the value at the garbage address. This is a dereference of an uninitialized pointer, which will likely cause a segmentation fault or other run-time error. 💥
This confirms option D is correct.
Why other options are wrong:
A: It creates 4 nodes, not 5.
B: The loops are controlled by `index`, which increments to a fixed limit. There is no infinite loop.
C: A missing `return` from `main` is not a compiler error in C (it's implicitly `return 0`) and is not the cause of the crash.
Q37. You are given a pointer to the head and a pointer to the tail (last node) of two linked lists: one Singly Linked List (SLL) and one Doubly Linked List (DLL). Both lists have $n$ nodes.
What are the worst-case time complexities to delete the last node from the SLL and the DLL, respectively, using these pointers?
This is a classic GATE "trick" question that tests your understanding of the "previous" pointer.
Singly Linked List (SLL): To delete the last node (pointed to by `tail`), you must set the next pointer of the second-to-last node to NULL. A SLL can only move forward. Even with the `tail` pointer, you have no way to get to the second-to-last node except by starting from the `head` and traversing the entire list. This takes O(n) time.
Doubly Linked List (DLL): Because a DLL has a "previous" pointer, you can find the second-to-last node in O(1) time using tail->prev. The deletion is: tail->prev->next = NULL;. This is a constant time operation, O(1).
Q38. Consider the following C function that operates on a singly linked list. What does this function do?
void fun(struct Node* head)
{
// Base case:
// Do nothing for empty list or 1-node list
if (head == NULL || head->next == NULL)
return;
// Recursively call for the rest of the list
fun(head->next);
// Modify pointers on the way back
head->next->next = head;
head->next = NULL;
}
A) Prints the list in reverse order.
B) Deletes the list, freeing all memory.
C) Reverses the linked list, but the head pointer is lost.
D) Creates a circular linked list.
Answer & Explanation
Correct Answer: C) Reverses the linked list, but the head pointer is lost.
This is a classic recursive method for reversing a linked list. Let's trace it with 1->2->3->NULL:
fun(1) calls fun(2).
fun(2) calls fun(3).
fun(3) has head->next == NULL, so it returns.
fun(2) resumes. head is 2.
head->next->next = head; becomes (2->next)->next = 2, which is 3->next = 2.
head->next = NULL; becomes 2->next = NULL.
The list is now 1 -> 2 <- 3.
fun(1) resumes. head is 1.
head->next->next = head; becomes (1->next)->next = 1, which is 2->next = 1.
head->next = NULL; becomes 1->next = NULL.
The list is now NULL <- 1 <- 2 <- 3.
The list is successfully reversed. However, the original head pointer (in main) still points to node 1, which is now the tail. The new head (node 3) is not returned, so the reference to it is lost unless the function is modified to return it.
Q39. You are given a singly linked list. Which data structure and pointer configuration is the most efficient to implement a Queue that supports both enqueue() and dequeue() in O(1) time?
A) A Singly Linked List with a pointer to the head only.
B) A Singly Linked List with pointers to the head and tail.
C) A Circular Linked List with a single pointer to the rear node.
D) A Doubly Linked List with a single pointer to the head node.
Answer & Explanation
Correct Answer: C) A Circular Linked List with a single pointer to the rear node.
This is a clever and common implementation:
Let the single pointer be rear.
Because the list is circular, the front node is always rear->next.
Operations:
Enqueue (Add to rear): O(1)
1. Create `newNode`.
2. `newNode->next = rear->next;` (point new node to front)
3. `rear->next = newNode;` (point old rear to new node)
4. `rear = newNode;` (update rear pointer)
Dequeue (Remove from front): O(1)
1. `temp = rear->next;` (get front node)
2. `rear->next = temp->next;` (link rear to new front)
3. `free(temp);`
Option B (SLL with head and tail) fails because dequeue() (deleting the last node) would be O(n) as explained in Q37.
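The enqueue/dequeue steps above can be sketched in C as follows (the function names and the `rear == NULL` empty-queue convention are assumptions, not part of the question):

```c
#include <assert.h>
#include <stdlib.h>

struct QNode { int data; struct QNode* next; };

/* Enqueue at the rear of a circular list; returns the new rear.
   An empty queue is represented here by rear == NULL (an assumption). */
struct QNode* enqueue(struct QNode* rear, int x) {
    struct QNode* node = malloc(sizeof *node);
    node->data = x;
    if (rear == NULL) {
        node->next = node;        /* a single node points to itself */
        return node;
    }
    node->next = rear->next;      /* point new node to the front */
    rear->next = node;            /* point old rear to the new node */
    return node;                  /* the new node becomes the rear */
}

/* Dequeue from the front (rear->next); stores the value in *out
   and returns the (possibly updated) rear. Assumes a non-empty queue. */
struct QNode* dequeue(struct QNode* rear, int* out) {
    struct QNode* front = rear->next;
    *out = front->data;
    if (front == rear) {          /* last remaining node */
        free(front);
        return NULL;
    }
    rear->next = front->next;     /* rear now links to the new front */
    free(front);
    return rear;
}
```

Both operations touch only a constant number of pointers, which is exactly why a single `rear` pointer suffices.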
Q40. Consider the following C code snippet that uses the fast and slow pointer method on a singly linked list. What does it do for a list with an even number of nodes, e.g., 1->2->3->4->NULL?
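The snippet itself did not survive in the source; the standard fast/slow middle-finding function that the trace describes would look roughly like this (all names are assumed):

```c
#include <assert.h>
#include <stdlib.h>

struct Node { int data; struct Node* next; };

/* Classic fast/slow walk: slow advances one node per iteration,
   fast advances two, so slow ends at the middle of the list. */
struct Node* findMiddle(struct Node* head) {
    struct Node *slow = head, *fast = head;
    while (fast != NULL && fast->next != NULL) {
        slow = slow->next;
        fast = fast->next->next;
    }
    return slow;
}

/* Helper to build a list from an array (for testing). */
struct Node* build(const int* a, int n) {
    struct Node* head = NULL;
    for (int i = n - 1; i >= 0; i--) {
        struct Node* node = malloc(sizeof *node);
        node->data = a[i];
        node->next = head;
        head = node;
    }
    return head;
}
```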
End: The loop terminates. The function returns `slow`, which now points to node 3.
Note: For an even list (n=4), it returns the second of the two middle elements (n/2 + 1). For an odd list like `1->2->3`, it would return 2.
Q41. An algorithm operates on a sorted doubly linked list of n nodes. It performs the following sequence of operations:
1. n^0.5 insert operations (inserting a value, maintaining sort).
2. n delete operations (given the value to be deleted, not a pointer).
3. log n search operations.
What is the tightest worst-case time complexity for all these operations put together?
A) O(n log n)
B) O(n^1.5)
C) O(n^2)
D) O(n)
Answer & Explanation
Correct Answer: C) O(n^2)
To solve this, we must find the cost of *each individual operation* first. The key trap is that "sorted" and "doubly linked" do not give you O(log n) search. A linked list always requires O(n) traversal for searching.
Cost of Each Operation
Insert (sorted): You must first find the correct position. This search takes O(n). The insertion itself is O(1).
Total cost per insert: O(n).
Delete (by value): You must first find the node. This search takes O(n). The deletion itself (once found) is O(1) because it's a DLL.
Total cost per delete: O(n).
Search: This is a simple traversal.
Total cost per search: O(n).
Combined Total
n^0.5 inserts × O(n) each = O(n^1.5); n deletes × O(n) each = O(n^2); log n searches × O(n) each = O(n log n). The dominant term is O(n^2), so the tightest overall bound is O(n^2).
Q42. Consider two polynomials represented by singly linked lists, where each node stores a non-zero coefficient and an integer exponent. The lists are sorted in decreasing order of exponents.
If the two lists have $m$ and $n$ nodes respectively, what is the tightest worst-case time complexity to add these two polynomials and store the result in a new linked list?
A) O(m+n)
B) O(m*n)
C) O(m log m + n log n)
D) O(n)
Answer & Explanation
Correct Answer: A) O(m+n)
This is a classic merge-style question. The algorithm to add two sorted polynomials is identical to the merge step of Merge Sort.
You traverse both lists (P1 and P2) simultaneously:
If P1->exp > P2->exp, add the P1 node to the result and advance P1.
If P2->exp > P1->exp, add the P2 node to the result and advance P2.
If P1->exp == P2->exp, add their coefficients, create a new node with the sum (if non-zero), and advance both P1 and P2.
In each step, you advance at least one pointer. The loop continues until both lists are exhausted. This is a single pass, making the complexity proportional to the sum of the lengths of the two lists.
Total time: O(m+n).
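The merge-style traversal above can be sketched in C as follows (the `Term` struct and helper names are assumptions, not from the question):

```c
#include <assert.h>
#include <stdlib.h>

struct Term { int coeff; int exp; struct Term* next; };

/* Helper: allocate one term (also handy for building test inputs). */
struct Term* term(int coeff, int exp, struct Term* next) {
    struct Term* t = malloc(sizeof *t);
    t->coeff = coeff; t->exp = exp; t->next = next;
    return t;
}

/* Add two polynomials sorted by decreasing exponent; single pass, O(m+n). */
struct Term* addPoly(struct Term* p1, struct Term* p2) {
    struct Term dummy = {0, 0, NULL};   /* dummy head simplifies appending */
    struct Term* tail = &dummy;
    while (p1 != NULL && p2 != NULL) {
        int coeff, exp;
        if (p1->exp > p2->exp)      { coeff = p1->coeff; exp = p1->exp; p1 = p1->next; }
        else if (p2->exp > p1->exp) { coeff = p2->coeff; exp = p2->exp; p2 = p2->next; }
        else { coeff = p1->coeff + p2->coeff; exp = p1->exp;   /* equal exponents */
               p1 = p1->next; p2 = p2->next; }
        if (coeff != 0) {                  /* skip terms that cancel to zero */
            tail->next = term(coeff, exp, NULL);
            tail = tail->next;
        }
    }
    for (struct Term* p = p1 ? p1 : p2; p != NULL; p = p->next) {
        tail->next = term(p->coeff, p->exp, NULL);   /* copy leftover terms */
        tail = tail->next;
    }
    return dummy.next;
}
```

For example, (3x^2 + 2x + 1) + (5x^3 - 2x) yields 5x^3 + 3x^2 + 1, with the 2x and -2x terms cancelling.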
Q43. You are given a pointer `P` to an arbitrary node (not the head or tail) in a Singly Linked List (SLL) and a Doubly Linked List (DLL). Both lists have $n$ nodes and you are also given a pointer to the head of each list.
What are the worst-case time complexities to insert a new node *before* the node P in SLL and DLL, respectively?
This is a fresh twist on the classic "delete node" complexity question, but it tests the exact same concept.
Singly Linked List (SLL): To insert *before* P, you must modify the next pointer of P's predecessor. Since a SLL cannot go backward, you must start from the head and traverse the list to find the node q such that q->next == P. This search takes O(n) time.
Doubly Linked List (DLL): You can access the predecessor in constant time using the prev pointer: P->prev. You can then perform the O(1) pointer updates to insert the new node between P->prev and P. This takes O(1) time.
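The DLL case takes only a few pointer updates, sketched here (names are assumed; as in the question, P is taken to be an interior node with a valid predecessor):

```c
#include <assert.h>
#include <stdlib.h>

struct DNode { int data; struct DNode *prev, *next; };

/* O(1) insertion of a new node immediately before P in a DLL.
   Assumes P is not the head, so P->prev is non-NULL. */
void insertBefore(struct DNode* P, int x) {
    struct DNode* node = malloc(sizeof *node);
    node->data = x;
    node->prev = P->prev;
    node->next = P;
    P->prev->next = node;   /* predecessor reached in O(1) via P->prev */
    P->prev = node;
}
```

Contrast this with the SLL, where finding the predecessor alone costs O(n).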
Q44. A fast-growing startup wants to implement a Least Recently Used (LRU) Cache. The cache must support `get` (access) and `put` (insert/update) operations in O(1) average-case time.
Which combination of data structures is most suitable for this?
A) A Hash Map and a Doubly Linked List
B) A Min-Heap
C) A Sorted Array and a Hash Map
D) A single Doubly Linked List only
Answer & Explanation
Correct Answer: A) A Hash Map and a Doubly Linked List
This is a famous advanced application question. Let's analyze the O(1) requirement:
We need O(1) search (to find if an item is in the cache). A standard DLL is O(n) search. This immediately rules out Option D. A Hash Map provides O(1) average-case search.
When an item is accessed (get) or added (put), it must be moved to the "front" of the list (most recently used). This is an O(1) operation in a DLL.
When the cache is full, we must evict the "end" of the list (least recently used). Deleting the last node from a Doubly Linked List is O(1) (unlike an SLL).
Therefore, we use a Hash Map to store (key, pointer-to-DLL-node) for O(1) search, and a Doubly Linked List to maintain the "used" order and perform O(1) moves and deletions.
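A compact sketch of the idea in C (here a direct-address table over small integer keys stands in for a real hash map, and all names are assumptions, not a definitive implementation):

```c
#include <assert.h>
#include <stdlib.h>

#define TABLE 256   /* assumed key range 0..TABLE-1 for the toy "hash map" */

struct LNode { int key, val; struct LNode *prev, *next; };

struct LRU {
    int cap, size;
    struct LNode head, tail;     /* sentinels: head.next = most recently used */
    struct LNode* map[TABLE];    /* key -> list node, NULL if absent */
};

struct LRU* lru_new(int cap) {
    struct LRU* c = calloc(1, sizeof *c);
    c->cap = cap;
    c->head.next = &c->tail;
    c->tail.prev = &c->head;
    return c;
}

static void unlink_node(struct LNode* n) {
    n->prev->next = n->next;
    n->next->prev = n->prev;
}

static void push_front(struct LRU* c, struct LNode* n) {
    n->next = c->head.next;
    n->prev = &c->head;
    c->head.next->prev = n;
    c->head.next = n;
}

int lru_get(struct LRU* c, int key) {        /* returns -1 if missing */
    struct LNode* n = c->map[key];
    if (n == NULL) return -1;
    unlink_node(n);                          /* move to front: O(1) in a DLL */
    push_front(c, n);
    return n->val;
}

void lru_put(struct LRU* c, int key, int val) {
    struct LNode* n = c->map[key];
    if (n != NULL) {                         /* update existing entry */
        n->val = val;
        unlink_node(n);
        push_front(c, n);
        return;
    }
    if (c->size == c->cap) {                 /* evict least recently used */
        struct LNode* lru = c->tail.prev;
        unlink_node(lru);
        c->map[lru->key] = NULL;
        free(lru);
        c->size--;
    }
    n = malloc(sizeof *n);
    n->key = key; n->val = val;
    c->map[key] = n;
    push_front(c, n);
    c->size++;
}
```

Every operation is a table lookup plus a constant number of pointer updates, which is where the O(1) bound comes from.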
Q45. Consider the standard recursive function to merge two sorted linked lists, a and b. A line is missing from the code. What is the correct statement to replace the blank?
struct Node* merge(struct Node* a, struct Node* b)
{
    if (a == NULL) return b;
    if (b == NULL) return a;
    struct Node* result;
    if (a->data <= b->data) {
        result = a;
        // _______________________________
    } else {
        result = b;
        result->next = merge(a, b->next);
    }
    return result;
}
A) result->next = b;
B) result = merge(a->next, b);
C) result->next = merge(a->next, b);
D) result->next = a->next;
Answer & Explanation
Correct Answer: C) result->next = merge(a->next, b);
This function builds the new sorted list recursively. Let's analyze the `if (a->data <= b->data)` block:
result = a;: We've decided that node a is the smallest element, so it will be the head of this merged sub-list.
We now need to decide what a's next pointer should point to.
The rest of the list will be the result of merging the remainder of list a (which is a->next) with all of list b (which hasn't been used yet).
Therefore, we set result->next (which is the same as a->next) to the result of the recursive call: merge(a->next, b).
This is the core logic for Merge Sort on linked lists.
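With the blank filled in as option C, the complete function plus a small list builder for testing might look like this (the `build` helper is an assumption, not part of the question):

```c
#include <assert.h>
#include <stdlib.h>

struct Node { int data; struct Node* next; };

struct Node* merge(struct Node* a, struct Node* b)
{
    if (a == NULL) return b;
    if (b == NULL) return a;
    struct Node* result;
    if (a->data <= b->data) {
        result = a;
        result->next = merge(a->next, b);   /* the missing line (option C) */
    } else {
        result = b;
        result->next = merge(a, b->next);
    }
    return result;
}

/* Helper to build a list from an array (for testing). */
struct Node* build(const int* arr, int n) {
    struct Node* head = NULL;
    for (int i = n - 1; i >= 0; i--) {
        struct Node* node = malloc(sizeof *node);
        node->data = arr[i];
        node->next = head;
        head = node;
    }
    return head;
}
```

Merging 1->3->5 with 2->4 this way yields 1->2->3->4->5, reusing the existing nodes rather than allocating new ones.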
Q46. A singly linked list has a loop (cycle). A "slow" pointer (moves 1 step) and a "fast" pointer (moves 2 steps) are used to detect the loop, starting from the head.
The list is: 1 -> 2 -> 3 -> 4 -> 5 -> 6, and the next pointer of node 6 points back to node 3. At which node will the slow and fast pointers first meet?
A) Node 3
B) Node 4
C) Node 5
D) Node 6
Answer & Explanation
Correct Answer: C) Node 5
This is a tracing question for Floyd's Cycle-Finding Algorithm. Let's trace the pointers step-by-step.
List: 1->2->(3->4->5->6)-[loops_to_3]
Start: slow is at 1, fast is at 1.
Step 1: slow moves to 2.
fast moves to 3 (1->2->3).
Step 2: slow moves to 3.
fast moves to 5 (3->4->5).
Step 3: slow moves to 4.
fast moves to 3 (5->6->3).
Step 4: slow moves to 5.
fast moves to 5 (3->4->5).
At Step 4, both pointers are at Node 5, so they meet.
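Floyd's algorithm as traced above can be written as a short C function (names are assumed):

```c
#include <assert.h>
#include <stdlib.h>

struct Node { int data; struct Node* next; };

/* Floyd's cycle detection: returns the node where slow and fast
   first meet, or NULL if the list has no loop. */
struct Node* detectLoop(struct Node* head) {
    struct Node *slow = head, *fast = head;
    while (fast != NULL && fast->next != NULL) {
        slow = slow->next;             /* one step */
        fast = fast->next->next;       /* two steps */
        if (slow == fast) return slow; /* pointers met inside the loop */
    }
    return NULL;                       /* fast fell off the end: no loop */
}
```

Running it on the list from the question (1..6 with 6 looping back to 3) returns node 5, matching the trace.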