Merge sort is a _______ sorting algorithm that follows the _______ strategy.

  • Bubble
  • Divide and Conquer
  • Dynamic Programming
  • Greedy
Merge sort is a Divide and Conquer sorting algorithm: it recursively divides the array into two halves, sorts each half, and then merges the sorted halves back together.
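As an illustration, here is a minimal Python sketch of this divide-and-merge idea (function and variable names are just for this example):

    def merge_sort(arr):
        # Base case: a list of zero or one element is already sorted.
        if len(arr) <= 1:
            return arr
        # Divide: split the list into two halves and sort each recursively.
        mid = len(arr) // 2
        left = merge_sort(arr[:mid])
        right = merge_sort(arr[mid:])
        # Conquer: merge the two sorted halves back together.
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged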

Explain why binary search is more efficient than linear search for large datasets.

  • Binary search always finds the element in the first comparison
  • Binary search can only be used with small datasets
  • Binary search divides the search space in half at each step, reducing the time complexity to O(log n)
  • Linear search has a time complexity of O(n^2)
Binary search is more efficient for large datasets because it halves the search space at each step, giving a time complexity of O(log n) on sorted data, which is significantly faster than linear search's O(n).
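For reference, a small Python sketch of binary search over a list that is assumed to be already sorted (names are illustrative only):

    def binary_search(sorted_items, target):
        # Halve the search space on each iteration: O(log n) comparisons.
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid          # found
            elif sorted_items[mid] < target:
                lo = mid + 1        # discard the left half
            else:
                hi = mid - 1        # discard the right half
        return -1                   # not present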

BFS guarantees finding the shortest path in an unweighted graph because it explores nodes in _______ order.

  • Increasing
  • Lexicographical
  • Non-decreasing
  • Non-increasing
BFS guarantees finding the shortest path in an unweighted graph because it explores nodes in increasing order of distance from the source. Since it traverses the graph level by level, the first time a node is encountered it has been reached through a shortest path.
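A short Python sketch of this idea, assuming the graph is given as an adjacency-list dictionary (that representation is just one possible choice):

    from collections import deque

    def bfs_distances(graph, source):
        # graph: dict mapping a node to a list of neighbours (unweighted edges).
        dist = {source: 0}
        queue = deque([source])
        while queue:
            node = queue.popleft()
            for neighbour in graph[node]:
                if neighbour not in dist:        # first visit = shortest path
                    dist[neighbour] = dist[node] + 1
                    queue.append(neighbour)
        return dist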

How can the longest common substring problem be extended to handle multiple strings?

  • Apply the algorithm separately to each pair of strings and combine the results.
  • Extend dynamic programming to a multidimensional array to account for multiple strings.
  • Longest common substring problem cannot be extended to handle multiple strings.
  • Utilize greedy algorithms to find common substrings among multiple strings.
To handle multiple strings in the longest common substring problem, the dynamic programming table is extended to a multidimensional array with one dimension per string. Each entry records the length of the longest common substring ending at the corresponding positions in all of the strings, so the strings are compared simultaneously rather than pair by pair.
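As a sketch of that multidimensional extension, here is an illustrative Python version for three strings; the same pattern adds one DP dimension per extra string:

    def longest_common_substring_3(a, b, c):
        # dp[i][j][k] = length of the common substring ending at a[i-1], b[j-1], c[k-1].
        best = 0
        dp = [[[0] * (len(c) + 1) for _ in range(len(b) + 1)] for _ in range(len(a) + 1)]
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                for k in range(1, len(c) + 1):
                    if a[i - 1] == b[j - 1] == c[k - 1]:
                        dp[i][j][k] = dp[i - 1][j - 1][k - 1] + 1
                        best = max(best, dp[i][j][k])
        return best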

The Longest Increasing Subsequence problem can be efficiently solved using _______.

  • Binary Search
  • Bubble Sort
  • Depth-First Search
  • QuickSort
The Longest Increasing Subsequence (LIS) problem can be efficiently solved using Binary Search. By maintaining an array of the smallest possible tail element for each subsequence length and using binary search to place each new element, the running time improves from the O(n^2) of the naive dynamic programming solution to O(n log n).
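An illustrative Python sketch of this binary-search approach, using the standard bisect module:

    from bisect import bisect_left

    def lis_length(nums):
        # tails[i] holds the smallest possible tail of an increasing
        # subsequence of length i + 1 seen so far.
        tails = []
        for x in nums:
            pos = bisect_left(tails, x)     # binary search: O(log n)
            if pos == len(tails):
                tails.append(x)             # extends the longest subsequence found
            else:
                tails[pos] = x              # improves the tail for length pos + 1
        return len(tails)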

The time complexity of the dynamic programming solution for the coin change problem is _______.

  • O(n * m)
  • O(n log n)
  • O(n)
  • O(n^2)
The time complexity of the dynamic programming solution for the coin change problem is O(n * m), where 'n' is the target amount and 'm' is the number of coin denominations. This is because the dynamic programming table has dimensions n x m, and each entry is filled in constant time.
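A minimal Python sketch of that n x m table, shown here for the minimum-coins variant of the problem (the variant and names are chosen only for illustration):

    def min_coins(coins, amount):
        # dp[i][a] = fewest coins needed to make amount a using the first i coin types.
        INF = float('inf')
        m = len(coins)
        dp = [[INF] * (amount + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            dp[i][0] = 0                      # zero coins are needed for amount 0
        for i in range(1, m + 1):
            for a in range(1, amount + 1):
                dp[i][a] = dp[i - 1][a]       # option 1: skip coin i entirely
                if coins[i - 1] <= a and dp[i][a - coins[i - 1]] + 1 < dp[i][a]:
                    dp[i][a] = dp[i][a - coins[i - 1]] + 1   # option 2: use coin i (reusable)
        return dp[m][amount] if dp[m][amount] != INF else -1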

Suppose you are designing a database system where frequent insertions and deletions are expected, but the overall tree structure needs to remain balanced. Which type of tree would you choose and why?

  • AVL Tree
  • B-Tree
  • Binary Search Tree (BST)
  • Red-Black Tree
In this scenario, a Red-Black Tree would be chosen. Red-Black Trees provide a good balance between search and insertion/deletion costs while keeping the tree approximately balanced. Because their balance constraints are looser than an AVL tree's, they typically need fewer rotations per insertion or deletion, which makes them well suited to workloads with frequent modifications.

In the Fractional Knapsack Problem, items can be divided to fit into the knapsack partially, whereas in the 0/1 Knapsack Problem, items must be chosen _______.

  • Arbitrarily
  • Completely
  • Exponentially
  • Sequentially
In the 0/1 Knapsack Problem, items must be chosen completely, meaning either an item is included in its entirety or not at all. On the other hand, the Fractional Knapsack Problem allows items to be divided and included partially.
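To illustrate the contrast, here is a small Python sketch of the greedy fractional knapsack, where items can be taken partially (the (value, weight) tuple representation is an assumption for this example):

    def fractional_knapsack(items, capacity):
        # items: list of (value, weight) pairs; items may be taken fractionally.
        total = 0.0
        # Greedy: take items in decreasing order of value per unit weight.
        for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True):
            if capacity <= 0:
                break
            take = min(weight, capacity)      # a fraction of the item is allowed
            total += value * (take / weight)
            capacity -= take
        return total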

The time complexity of radix sort is _______ in most scenarios.

  • O(k * n)
  • O(n * log n)
  • O(n + k)
  • O(n^2)
The time complexity of radix sort is O(k * n), where 'k' is the number of digits or components in the keys and 'n' is the number of elements. When k is small and fixed, this is effectively linear in n, which often makes radix sort faster than O(n log n) comparison-based sorts.
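A compact Python sketch of LSD radix sort for non-negative integers, one pass per digit (base 10 is chosen here only for illustration):

    def radix_sort(nums):
        # Least-significant-digit radix sort: one stable bucketing pass per digit.
        if not nums:
            return nums
        exp = 1
        while max(nums) // exp > 0:           # k passes, one per digit
            buckets = [[] for _ in range(10)]
            for x in nums:
                buckets[(x // exp) % 10].append(x)   # stable bucketing: O(n) per pass
            nums = [x for bucket in buckets for x in bucket]
            exp *= 10
        return nums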

How does Dijkstra's algorithm guarantee the shortest path in a graph with non-negative edge weights?

  • Always selects the smallest tentative distance
  • Considers random paths
  • Prioritizes longest paths
  • Utilizes heuristics for optimization
Dijkstra's algorithm guarantees the shortest path by always selecting the node with the smallest tentative distance, so that when a node is finalized, the path chosen to it is optimal. It relies on a greedy approach and the non-negativity of edge weights to consistently find shortest paths. Heuristics, random paths, or prioritizing longest paths are not part of Dijkstra's algorithm.
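For illustration, a minimal Python sketch of Dijkstra's algorithm using a binary heap, assuming the graph is an adjacency-list dictionary of (neighbour, weight) pairs with non-negative weights:

    import heapq

    def dijkstra(graph, source):
        # graph: dict mapping node -> list of (neighbour, weight), weight >= 0.
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, node = heapq.heappop(heap)     # smallest tentative distance first
            if d > dist.get(node, float('inf')):
                continue                      # stale heap entry, skip it
            for neighbour, weight in graph[node]:
                candidate = d + weight
                if candidate < dist.get(neighbour, float('inf')):
                    dist[neighbour] = candidate
                    heapq.heappush(heap, (candidate, neighbour))
        return dist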